Way back in 1985, I started my “professional” career as a software guy, developing for the brand new Atari ST computer.  After a few years as a 3rd party developer, I was hired by Atari to provide developer support to ST developers in the USA.

Part of what made me a good choice for that role was that I had a really good in-depth understanding of GEM.   For example, when I worked on the WordUp word processor for Neocept, I wrote more than a dozen GDOS printer drivers for various printers, including color, that Atari’s drivers didn’t support.  Quite a lot of that information is still burned deep into my brain, even though it’s been many years since I actually wrote any code for the Atari.

These days, when something reminds me of GEM, the main things that come to mind are the problems, glitches, and workarounds.  This article is going to be mainly about GEM’s design flaws, the workarounds for them, and how they impacted development.

GEM – The Origins

In the mid 80’s, just as computers were starting to break out of their character-based screens into more graphically oriented environments, Digital Research came out with GEM, or the Graphics Environment Manager.  The idea was to offer a graphic-based environment for applications that could compete with the brand new Macintosh computer, and Microsoft’s new Windows product.

GEM started life in the late 70’s and early 80’s as the GSX graphics library.  This was a library that could run on different platforms and provide a common API for applications to use, regardless of the underlying graphics hardware.  This was a pretty big deal at the time, since the standard for graphics programming was to write directly to the video card’s registers.  And since every video card did things a little differently, it often meant that a given application would only support one or two specific video cards.  The GSX library would later become the basis of the VDI portion of GEM, responsible for graphics device management and rendering.

GEM was basically a marriage of two separate APIs.  The VDI (Virtual Device Interface) was responsible for all interaction with graphics devices of any sort, while the AES (Application Environment Services) was responsible for creating and managing windows, menu bars, dialog boxes, and all the other basic GUI components that an application might use.

GEM was first demoed on the IBM PC with an 8086 processor, running on top of MS-DOS.  However, various references in the documentation to the Motorola 68000 processor, and to integration with DR’s own CP/M-68K operating system as the host, make it clear that DR intended GEM to be available for multiple processors at a relatively early stage of development.

Ironically, the PC version of GEM never really took off.  Other than being bundled as a runtime for Ventura Publisher, there were never any major applications written for the PC version.  Overall, it was the Atari ST series where GEM found its real home.

Overview of GEM VDI

In case you never programmed anything for GEM VDI, let me give you a brief overview of how it worked.  The first thing you do in order to use a device is open a workstation.  This returns a variety of information about the device’s capabilities.  A second call, available once the workstation has been opened, gives you additional information about the device’s capabilities.  Once you have an open workstation, you can execute the appropriate VDI calls to draw graphics onto the device’s raster area.

Most devices aren’t meant to be shared, so you can normally have only one workstation open at a time.  However, in order to support multitasking with multiple GEM applications and desk accessories running together, you need to be able to share the display.  Therefore, the VDI supports the notion of opening a “virtual” workstation, which is basically a drawing context for the underlying physical workstation.
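In case you’ve never seen it, opening a physical workstation in C looked roughly like this with the standard GEM bindings (the work_in/work_out array sizes and function names follow the usual conventions from the Atari developer kits; consider it a sketch rather than a verbatim listing):

#include <vdi.h>                     /* standard GEM VDI bindings */

int work_in[11], work_out[57];       /* 16-bit ints on the ST's compilers */
int printer_handle;
int i;

for( i = 0; i < 10; i++ )
    work_in[i] = 1;                  /* default line/marker/fill/text attributes */
work_in[0]  = 21;                    /* device ID from assign.sys (a printer)    */
work_in[10] = 2;                     /* use raster coordinates                   */

v_opnwk( work_in, &printer_handle, work_out );

if( printer_handle > 0 )
{
    /* work_out[] now describes the device: resolution, colors, and so on. */
    /* A follow-up call to vq_extnd() returns a second table of capability */
    /* information.                                                        */

    /* ...draw something here...                                           */

    v_clswk( printer_handle );
}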

GEM VDI Design Issues

The VDI has a number of huge design flaws that are easily recognized today.  I’m generally not talking about missing features, either.  I’m sure we could come up with a long list of things that might have been added to the VDI given enough time and resources.  I’m talking about flaws in the intended functionality.  Many of these issues were a common cause for complaint from day one.

Also, let me be clear about this: when I suggest some fix to one of these flaws, I’m not saying someone should find the sources and do it now.  I’m saying it should have been done back in 1983 or 1984 when Digital Research was creating GEM in the first place.  Any of these flaws should have been noticeable at the time…  most of them are simply a matter of short-sightedness.

No Device Enumeration

Until the release of FSMGDOS in 1991, 6 years after the ST’s initial release, there was no mechanism for an application to find out what GEM devices were available, other than going through the process of attempting to open each possible device number and seeing what happened.  This was slow and inefficient, but the real problem underneath it all is a bit more subtle.  Even once FSMGDOS hit the scene, the new vqt_devinfo() function still required you to test every possible device ID.

The fix here would have been simple.  There should have been a VDI call that enumerated available devices.  Something like this:

typedef struct
{
/* defined in VDI.H - various bits of device info */
} VDIDeviceInfo;

VDIDeviceInfo deviceinfo[100];
int numdevices = 0;
int dev_id = 0;

while( (dev_id = vq_device( dev_id, &deviceinfo[numdevices] )) != 0 )
    numdevices++;

The idea here is that the vq_device() function would return information about the next available device with a number higher than the dev_id parameter passed into it.   So if you pass in zero, it gives you info on device #1 and returns 1 as a result.  When it returns zero, you’ve reached the end of the list.

Device ID Assignments

Related to the basic problem of device enumeration is the way device IDs were handled overall.  GEM graphics devices were managed by a configuration text file named assign.sys that lived in the root directory of your boot volume.  The file looked something like this:

PATH=C:\SYS\GDOS
01 screen.sys
scrfont1.fnt
21 slm.sys
font1.fnt
font2.fnt
font3.fnt

The first line specifies the path where device driver files and device-specific bitmapped fonts are located.  The rest of the file specifies the available devices and the fonts that go with them.  For example, device 21 is the “slm.sys” driver, and “font1.fnt”, “font2.fnt” and “font3.fnt” are bitmapped font files for that device.

The device ID number is not completely arbitrary.  There are different ranges of values for different device types.  For example, devices 1-10 were considered to be screen devices, 11-20 were pen plotters, 21-30 were printers, and so forth.  Oddly, Digital Research chose to mix input devices like touch tablets in with output devices like screens and printers, which complicates things in a few places.
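Expressed as constants, the documented ranges look like this (the names are mine, purely for illustration; the numeric ranges are the ones just described, with the metafile driver conventionally at 31):

#define DEV_SCREEN_FIRST     1      /*  1-10: screen devices      */
#define DEV_SCREEN_LAST     10
#define DEV_PLOTTER_FIRST   11      /* 11-20: pen plotters        */
#define DEV_PLOTTER_LAST    20
#define DEV_PRINTER_FIRST   21      /* 21-30: printers            */
#define DEV_PRINTER_LAST    30
#define DEV_METAFILE        31      /* the usual metafile driver  */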

The way device IDs worked was mainly a contributing factor in other situations, rather than a problem in its own right.  For example, because there was no easy way to enumerate available devices, many applications simply made the assumption that the printer was always going to be device 21 and that the metafile driver was device 31.  And in most cases, that’s all they would support.

The bigger problem, however, was that while most of the device ID assignments were essentially arbitrary, they were anything but arbitrary for the display screen.

Getting The Screen Device ID

Remember earlier when I explained how applications would open a “virtual” workstation for the screen?  Well, in order to do that, you have to know the handle of the physical workstation.  That’s something you get from the GEM AES function graf_handle().  One would think, since the physical workstation is already open, that you shouldn’t need to tell VDI the device ID, right?  Wrong.  Even though the physical workstation for the screen device is already opened by the GEM AES, you still need to pass the device ID number as one of the parameters when you open a virtual workstation.  So how do you get the device ID for the screen device that’s already open?  Well, there really isn’t a good answer to that question, and therein lies the chocolaty center of this gooey mess. 

On the Atari, the recommended method was to call the XBIOS function Getrez() and add 2 to the returned value.  The first problem with this idea is that there is no direct correlation between that value and anything like the screen resolution or the number of colors available.  And even if there were some correlation, there are far more screen modes than you can fit into the device ID range of 1-10.

Furthermore, this method only really worked for the video modes supported by the built-in hardware.  Add-on video cards not only needed a driver, they also needed to install a patch so that Getrez() returned the desired value when their video modes were in use.
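Put together, the recommended recipe looked more or less like this, using the standard AES/VDI/XBIOS bindings (spellings like Getrez() vs. GetRez() varied between headers; again, treat this as a sketch):

int work_in[11], work_out[57];
int wchar, hchar, wbox, hbox;
int phys_handle, vdi_handle;
int i;

/* Handle of the physical screen workstation, already opened by the AES. */
phys_handle = graf_handle( &wchar, &hchar, &wbox, &hbox );

for( i = 0; i < 10; i++ )
    work_in[i] = 1;                /* default attributes              */
work_in[0]  = Getrez() + 2;        /* the infamous screen device ID   */
work_in[10] = 2;                   /* raster coordinates              */

vdi_handle = phys_handle;          /* input: the physical handle      */
v_opnvwk( work_in, &vdi_handle, work_out );
/* output: vdi_handle is now the handle of our own virtual workstation */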

This pissed me off then, in large part because developers didn’t universally follow the recommended method, and their code broke when Atari or third parties introduced new hardware.  In fact, the very first article that I wrote for the ATARI.RSC Developer newsletter after I started at Atari was about this very subject.

Looking back, the thing that pisses me off the most about this is the fact that I can think of at least three really easy fixes.  Any one of them would have avoided the situation, but all three are things that probably should have been part of GEM from day one.

The first, and most obvious, is that opening a virtual workstation shouldn’t require a device ID as part of the input.  The VDI should be able to figure it out from the physical workstation handle.  Seriously… what’s the point?  The device is already open!

Another option would have been adding a single line of code to the GEM AES function graf_handle() to make it also return the device ID number, rather than just the handle of the physical workstation.  If you’re going to insist on passing it as a parameter to open a virtual workstation, this is what makes sense.  After all, this function’s whole purpose is to provide you with information about the physical workstation!

Lastly, and independent of the other two ideas, there probably should have been a VDI function that would accept a workstation handle as a parameter and return information about the corresponding physical workstation, including the device ID.  This arguably comes under the heading of “new” features, but I prefer to think that it’s an essential yet “missing” feature.
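To give the flavor of it, something along these lines would have done the trick.  The function name vq_wkinfo() and the structure are my invention, in the same spirit as the vq_device() sketch above:

typedef struct
{
    int physHandle;      /* handle of the underlying physical workstation */
    int deviceID;        /* the assign.sys device ID it was opened with   */
    /* ...whatever other device information seems useful...               */
} VDIWorkstationInfo;

int vq_wkinfo( int anyHandle, VDIWorkstationInfo *info );

/* Usage: recover the screen's device ID from the AES physical handle */
VDIWorkstationInfo info;

vq_wkinfo( graf_handle( &wchar, &hchar, &wbox, &hbox ), &info );
work_in[0] = info.deviceID;          /* no more Getrez() + 2 guesswork */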

Palette-Based Graphics

Perhaps the biggest flaws in GEM VDI stem from the fact that the VDI is wrapped around the idea of a palette-based raster area.  This is where each “pixel” of the raster is an index into a table containing the actual color values that are shown.  Moreover, it’s not even a generic bit-packed raster.  The native bitmap format understood by GEM VDI is actually the same multiple-bitplane format that most VGA video cards used.

Considering that the goal of the VDI was to create an abstract, virtual graphics device that could be mirrored onto an arbitrary actual piece of hardware, this is hard to forgive.

At the very least, the VDI should have acknowledged the idea of raster formats where the pixel value directly represents the color being displayed.  I’ve often wondered if this failure represents short-sightedness or a lack of development resources.

One might make the argument that “true color” video cards were still a few years away from common usage, and that’s undoubtedly part of the original thinking, but the problem is that this affects more than just the display screen.  Many other devices don’t use palette-based graphics.  For example, most color printers that were available back then had a selection of fixed, unchangeable colors.

Inefficient Device Attribute Management

Quite a lot of the VDI library consists of functions to set attributes like line thickness, line color, pattern, fill style, fill color, etc.  There’s an equally impressive list of functions whose purpose is to retrieve the current state of these attributes.

For the most part, these attributes are set one at a time.  That is, to set up the attributes for drawing a red box with a green hatched fill pattern, you have to do the following:

vsl_type( screenhandle, 1 );             // set solid line style (line type 1)
vsl_width( screenhandle, 3 );            // set line thickness of 3 pixels
vsl_color( screenhandle, linecolor );    // line color index
vsf_color( screenhandle, fillcolor );    // fill color index
vsf_interior( screenhandle, 3 );         // interior style 3 = hatched fill
vsf_style( screenhandle, 3 );            // which hatch pattern to use

By the way, we’re making the assumption here that the linecolor and fillcolor variables have already been set to values that represent red and green colors in the current palette.  That’s not necessarily a trivial assumption but let’s keep this example modest.

At first glance you might say, “Well, six lines of code… I see how it could be improved, but that’s really not that terrible.”

It really is… if you know how GEM VDI calls work, you’ll recognize how it’s horribly, horribly bad in a way that makes you want to kill small animals if you think about it too much.  Each one of those functions is ultimately doing nothing more than storing a single 16-bit value into a table, but there’s so much overhead involved in making even a simple VDI function call that it takes a few hundred cycles of processor time for each of these calls.

First, the C compiler has to push the parameters onto the stack and call the function binding.  The function binding reads the parameters off the stack and then saves them into the GEM VDI parameter arrays.  Then it loads up the address of the parameter array table and executes the 68000 processor’s trap #2 instruction.  This involves a context switch from user mode to supervisor mode, meaning that the processor’s registers and flags have to be saved on entry and restored on exit.  From there, GEM picks up the parameters, grabs the appropriate function pointer out of a table, and then passes control to that function.  At that point, the very, very special 16-bit value we cared about in the first place is lovingly deposited into the appropriate location within the table that the VDI has allocated for that particular workstation handle.  Then the function exits and starts making its way back up to your code.  Along the way, there is much saving and restoring of 32-bit registers.  Those are uncached reads and writes on most ST systems, by the way.
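To make that concrete, here’s a rough sketch of what a binding like vsl_color() does under the hood.  The array names and opcode follow the usual GEM conventions, but the details, and especially the inline assembly, are illustrative rather than a copy of any particular binding library:

/* The five standard VDI parameter arrays, plus the parameter block
   that points at them.  (16-bit ints, as on the ST's compilers.) */
static int  contrl[12], intin[128], ptsin[128], intout[128], ptsout[128];
static void *vdi_pb[5] = { contrl, intin, ptsin, intout, ptsout };

static void vdi_call( void )
{
    /* Magic value 0x73 in d0, parameter block address in d1, trap #2.
       Every single call pays for the user-to-supervisor context switch
       and for saving/restoring registers on the way in and out. */
    __asm__ volatile( "move.l %0,%%d1\n\t"
                      "move.w #0x73,%%d0\n\t"
                      "trap   #2"
                      :
                      : "g"(vdi_pb)
                      : "d0", "d1", "d2", "a0", "a1", "a2", "memory" );
}

int vsl_color( int handle, int color )
{
    contrl[0] = 17;         /* VDI opcode for vsl_color              */
    contrl[1] = 0;          /* no coordinate pairs in ptsin          */
    contrl[3] = 1;          /* one word of input in intin            */
    contrl[6] = handle;
    intin[0]  = color;

    vdi_call();

    return intout[0];       /* the color index that was actually set */
}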

The bottom line is that for things like this, GEM was simply, horribly inefficient.  The really bizarre part is that this could have been quite easily avoided.

The way that 68000-based programs make GEM VDI calls is to load a magic code into the 68000’s d0 register, and the address of the VDI parameter block in the 68000’s d1 register, and then make a trap #2 call.  The parameter block is simply a list of pointers to the 5 arrays that GEM VDI uses to pass information back and forth with the application.  My idea is simply to add another pointer to the VDI parameter block, pointing to a structure that maintains all of the current drawing attributes of the workstation, including the handle and the device ID.

Suppose that opening a physical workstation (for device #21 in this example) looked something like this:

int v_opnwk( int devID, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation printerDevice;
int handle = v_opnwk( 21, &printerDevice, v_getcontext(0L) );

Opening a virtual workstation is similar, except that we specify the handle for the open physical workstation instead of the device ID:

int v_opnvwk( int physHandle, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation screenDevice;
int handle = v_opnvwk( physHandle, &screenDevice, v_getcontext(0L) );

Thereafter, VDI calls look much the same, except that instead of passing the handle of your workstation as a parameter, you pass a pointer to the desired VDIWorkstation structure:

v_ellipse( &screenDevice, x, y, xrad, yrad );

instead of:

v_ellipse( handle, x, y, xrad, yrad );

The VDIWorkstation structure would look something like this:

typedef struct {
         VDIContext *ws;
         int *control;
         int *intin;
         int *ptsin;
         int *intout;
         int *ptsout;
} VDIWorkstation;

typedef struct {
         int contextSize;
         int handle;
         int deviceID;
         int lineType;
         int lineWidth;
         int lineColor;
     /* other various attribute fields listed here */
} VDIContext;

The heavy lifting is really done by the addition of the VDIContext structure.  The first field is a size value, so the structure could be extended as needed.  And a new function called v_getcontext() would be used to allocate and initialize a context structure that resides in the application’s memory space.
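A possible binding for that, entirely hypothetical like the rest of this proposal (including the v_freecontext() companion I’ve added for symmetry):

/* Allocate and initialize a drawing context in the application's own
   memory space.  Passing 0L means "use the current structure size";
   a larger value reserves room for future fields. */
VDIContext *v_getcontext( long contextSize );

/* Release a context obtained from v_getcontext(). */
void v_freecontext( VDIContext *context );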

With this setup, you would be able to change simple things like drawing attributes by direct manipulation of that context structure.  Let’s return to the example of setting up the attributes to draw a red rectangle with green hatch fill pattern.  Instead of the lines of code we saw earlier, we could instead have something like this:

screenDevice.ws->lineType = 1;           // set solid line style
screenDevice.ws->lineWidth = 3;          // set line thickness of 3 pixels
screenDevice.ws->lineColor = linecolor;
screenDevice.ws->fillColor = fillcolor;
screenDevice.ws->fillInterior = 3;       // hatched fill
screenDevice.ws->fillStyle = 3;          // hatch pattern index

This requires no function calls, no 68000 trap #2 call, no pushing or popping a ton of registers onto and off of the stack.  This entire block of code would take fewer cycles than just one line of code from the first example, by a pretty big margin.

The one thing that this does impact is the creation of metafiles, since attribute setting would no longer generate entries in the output file.  But that is easily solved by creating a new function, let’s call it vm_updatecontext(), which would simply take all the parameters from the context structure and output them to the metafile all at once.
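Usage would be about as simple as it gets (vm_updatecontext() being just as hypothetical as everything else in this proposal):

/* Flush the current attribute state from the context into the metafile
   in one shot, then record drawing primitives as usual. */
vm_updatecontext( &metafileDevice );
v_ellipse( &metafileDevice, x, y, xrad, yrad );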

These are relatively simple changes from an implementation standpoint, but they would have had a significant impact on the performance of GEM on the 68000, and I suspect the difference would be comparable on the 808x processors as well.

More coming in part 2

In part 2 of this, written whenever I get around to it, I’ll talk more about the VDI including more stuff about true color support, and outline font support — too little, too late?

May 4th, 2009 by Mike Fulton
Categories: Apple, iPhone, Macintosh, Tech

Once upon a time, Steve Jobs was the leader of a company called Apple.  Apple was known for being a technology leader, and their latest products were the envy of the industry.  Sadly, though, Apple’s sales figures didn’t seem to be able to keep pace with their reputation.  The board of directors of Apple, thinking that another style of management might be the way to go, decided that they’d had enough of Steve and handed him his walking papers.  The year was 1985.

Steve’s response to the situation was to start another computer company, called NeXT.  The Apple Macintosh was supposed to be the “computer for the rest of us,” but with NeXT, it seemed Jobs’ goal was to create the “computer for the best of us.”  Largely inspired by his experience with getting the Macintosh into the education market, the NeXT Computer was going to be a powerful workstation designed to meet the needs of the scientific and higher education community.  At the heart of this new computer was going to be NeXTStep, an object-oriented multitasking operating system that included tightly integrated development tools to aid users in quickly creating custom applications.

NeXTStep’s Language Of Choice

At the heart of NeXTStep was a fairly new programming language known as Objective C.  It was basically an extension of the C language to add Smalltalk-style messaging and other OOP features.  Conceptually it’s not too far off from where C++ was at the time, but the syntax is fairly different.  However, that simply didn’t matter at the time because most programmers hadn’t done much, if anything, with C++.

In 1985, any sort of object oriented programming was a relatively new thing to most programmers.  Modern languages like Java and C# were still years in the future, and C++ was still largely an experiment, with no standard in place and drastic differences from one implementation to the next.  In fact, most C++ solutions at the time were based on AT&T’s CFront program, which converted C++ code into standard C code that would then be compiled by a standard compiler.  It would be a few years yet before native C++ compilers became commonplace.

There were other OOP languages around, like Smalltalk or Lisp, but they were largely considered academic languages, not something you’d use to create shrink-wrapped products.

Since there simply wasn’t any better solution, the choice of Objective C for NeXTStep was completely reasonable at the time.

What Happened NeXT

The first version of NeXTStep was released in September 1989.  Over the next few years, the NeXT computer and NeXTStep made a number of headlines and gained a lot of respect in the industry, but failed to become a major player in terms of sales.  In late 1996, NeXT had teamed up with Sun Microsystems to create a cross-platform version called OpenStep, but before that really took off, something else happened.

In 1996, Apple was floundering.  Their stock price was down.  They’d had layoffs.  They had no clear plan for the future in place, and they were in serious danger of losing their place as the master of the graphical user interface.  Microsoft had just released Windows 95, which was a huge leap forward from Windows 3.1 in virtually every way, and PC video cards offering 24-bit and 32-bit color modes had become easily affordable.

Apple CEO Gil Amelio was fairly sure that updating the Mac to use some sort of object-oriented operating system was key to Apple’s future success, but Apple’s internal development had thus far failed to pay off.  Likewise, Apple’s investment in Taligent, a company formed in partnership with IBM for the sole purpose of developing an object-oriented operating system, had gone nowhere.  But then Amelio struck a bargain to purchase NeXT Computer and the NeXTStep operating system, bringing NeXT CEO Steve Jobs back into the fold, first as an advisor and then as CEO several months later when Amelio was shown the door.

It took Apple nearly 4 years to integrate their existing operating system with the NeXTStep tools and libraries, but ultimately NeXTStep formed the basis of the new Macintosh OS X operating system, released in March 2001.

Mac Development Tool History

When the Macintosh was first released in early 1984, you pretty much used either 68000 assembly language or Pascal to create programs.  Pascal had always been a popular language with the Apple crowd.  Apple had a set of development tools known as the Macintosh Programmer’s Workshop, which was essentially a GUI wrapper around a variety of command-line tools, including the 68000 assembler and the Pascal compiler.

It didn’t take long for the C language to become available for the Mac.  Apple released a version for MPW, but C really took off with the release of Lightspeed C (later renamed THINK C), which had a GUI IDE of the sort that would be completely recognizable as such even today, almost 25 years later.  Think’s compiler quickly became the de facto standard development environment for the Mac.  Support for C++ would be added in 1993 with version 6.0, after the product was acquired by Symantec.

Unfortunately, when Apple made the transition from the Motorola 680x0 processor family to the PowerPC processor in 1994 and 1995, Symantec C/C++ failed to keep pace.  It wasn’t until version 8, released in 1997, that their compiler was able to generate native PowerPC code.

Fortunately, a new player appeared to save the day.  When Symantec bought out Think, some members of the THINK C development team started a new company called Metrowerks.  While Symantec was struggling to bring out a PowerPC compiler, Metrowerks released their new CodeWarrior C/C++ environment.  In many ways, CodeWarrior was like an upgrade to the Symantec product, and it quickly supplanted Symantec among developers.  CodeWarrior would remain at the top of the heap until Apple released OS X.

The NeXT Development Tool

When Apple released Mac OS X in 2001, there were two big paradigm shifts for developers.  The first was that Apple now included their development tools with the operating system, at no additional charge.  After nearly two decades of charging premium prices for their tools, this was a big change.  Plus, the new Xcode environment (originally shipped as Project Builder) was an actual IDE, unlike the old Macintosh Programmer’s Workshop environment, with support for Objective C, C, C++, and Java.

The second paradigm shift was that everything you knew about programming the Mac was now old news.  You could continue to use an existing C/C++ codebase, with the new Carbon libraries providing a bridge to the new OS, but this did not allow you to use new tools such as Interface Builder.  If you wanted to take full advantage of Apple’s new tools and the Cocoa libraries, you needed to use Objective C instead of the familiar C or C++.

Objectionable C

I had been a Mac programmer since getting my first machine in 1986, and when Apple released Mac OS X in 2001, I was fully expecting to continue that tradition.  However, while I had no problems whatsoever with the idea of learning a new set of API calls, or learning new tools, I saw no good reason why it should be necessary to learn a new programming language.  Still, at one time in my younger days I had enjoyed experimenting with different programming languages, so I figured why not give Objective C a try?

Upon doing so, my first thought was, this was an UGLY language.  My second thought was, why did they change certain bits of syntax around for no good reason?  There were things where the old-style C syntax would have gotten the job done, but they changed it anyway.  The third thing that occurred to me was that this was a REALLY UGLY language.

After a few brief experiments, I pretty much stopped playing around with Cocoa and Objective C.  I started playing around with Carbon.  My first project was to rebuild an old project done in C++.  But the first thing I ran into was frustration that I couldn’t use the new tools like the Interface Builder.  It wasn’t too long before I decided I wasn’t getting paid enough to deal with all this BS.  Objective C had sucked all the fun out of Mac programming for me.

The shift to Objective C marked the end of Macintosh development for many other programmers I’ve talked to as well.  One can only conclude from their actions that Apple simply doesn’t care… if one programmer drops the platform, another will come around.  I’m sure there are plenty of other programmers around who either like Objective C just fine or who simply don’t care one way or the other.

As far as I’m concerned, Objective C is an ugly language, an ugly failed experiment that simply has no place in the world today.  It offers nothing substantial that we can’t get from other languages like C++, C#, or Java.  Nothing, that is, except for access to Apple’s tools and libraries.

Some Mac developers would tell you that the Cocoa libraries depend on some of Objective C’s capabilities like late-binding, delegates (as implemented in Cocoa), and the target-action pattern.  My response is that these people are confusing cause and effect.   The Cocoa libraries depend on those Objective C features because that was the best way to implement things with that language.  However, I have no doubt whatsoever that if Apple wanted to have a  C++ version of the Cocoa library, they could figure out a way to get things done without those Objective C features.

A Second Look

A few years later, when I got my first Intel-based Mac, I decided to revisit the development tools.  I wrote a few simple programs.  I’d heard a few people express the opinion that Objective C was sort of like the Ugly Duckling… as I used it more and became familiar with it, it would grow into a beautiful swan.  Nope.  Uh-uh.  Wrong.  No matter what I did, no matter what I do, Objective C remains just as frickin’ ugly as it was when I started.

I really wanted not to hate Objective C with a fiery vengeance that burned from the bottom of my soul, but what are ya gonna do?  Personally, I’m looking into alternatives like using C# with the Mono libraries.  No matter how non-standard these alternatives are, they can’t be any more icky than using Objective C.

Could It Be That Apple Doesn’t Care About Making Life Easier For Developers? 

The real question here is why the hell hasn’t Apple created a C++ version of the Cocoa library?  It’s been 12 years since Apple bought out NeXT.  Why hasn’t Apple made an effort in all that time to adapt the NeXTStep tools to use C++?  Or other modern languages like C#?  Microsoft may have invented the C# language, but even the Linux crowd has adopted it for gosh sakes!

Or why not annoy Sun and make a native-code version of Java with native Apple libraries?

Could it be they are trying to avoid the embarrassment that would occur when developers abandon Objective C en masse as soon as there is a reasonable replacement?

Does Apple think developers are happy with Objective C?  Personally, I’ve yet to find a single programmer who actually even likes the language.  The only argument I’ve ever heard anybody put forth for using it has always been that it was necessary because it was the only choice that Apple offered.  I know that’s the only reason I use it.

Why does Apple continue to insist on inflicting Objectionable C on us?  I can only come to the conclusion that Apple simply doesn’t care if developers would rather use some other language.  It’s their way, or the highway.

May 4th, 2009 by Mike Fulton
Categories: Apple, Macintosh, Tech

Anybody who’s followed Apple for any length of time probably has a laundry list of things they wish Apple would do, or of things they think Apple should do, or things that they can’t understand why Apple hasn’t done already.  Here’s a few things on my own such list.

Sell OS X Separately From The Mac Hardware

One of the long-time questions is why Apple doesn’t sell a version of Mac OS X that could be installed on standard PC hardware.  There are a lot of answers.

If Mac OS X could easily be installed on just any PC, it would be heavily pirated.  Right now, there’s not too much of that going on because you need to jump through a lot of hoops to install the OS onto non-Mac hardware.  In effect, the Macintosh hardware is a big copy-protection dongle for the Mac operating system.  Keep in mind that the Mac hardware is fairly profitable for Apple.  Selling the operating system by itself would certainly cut into Apple’s hardware sales, but with piracy to consider, there’s no certainty that it would generate enough profit to make up the difference.

Another factor is that currently Apple doesn’t have to worry about supporting twenty eight different motherboard chipsets and near-infinite numbers of combinations of chipset, video card, hard disk controller, network adapter, etc.  They have a relatively small number of hardware combinations to deal with, which makes it much easier to do testing and debugging and to create a stable crash-free system.  In theory, anyway.  Of course, the Mac does still crash from time to time, but it’s much less likely to do so because of some goofy bug in a hardware driver.

If Apple did sell their OS separately, it would have to include a much wider range of hardware drivers than it does now.  It would also need a lot more quality control testing to make sure that everything works properly.  This would be a significant expense, so Apple would need to be convinced they could sell enough copies to cover the additional costs.

One option Apple might consider is having some sort of “Macintosh Certified Hardware” program where only certain combinations of hardware will be supported.  If they limited official support to fairly recent hardware, it would make their task much easier.

Ironically, the current bad economy is probably a good thing for those wanting Apple to sell the OS separately from the hardware.  Apple has been doing fairly well for the past few years, but you have to think that the downturn in the economy is going to have some impact on Mac hardware sales.  People just aren’t going to have as much money to spend, and a lot of them will turn to less expensive computers instead of the Mac.  With that in mind, I suspect that there will come a day when Apple decides to sell the Mac OS separately.  But it might be a few more years down the road.

Make An Apple-Brand Netbook

In Apple’s own mind, I’m pretty sure they think they’ve already created an Apple-brand netbook in the MacBook Air.  Think about it for a minute… take a look at the MacBook Air and then at one of the popular PC netbooks like the Acer Aspire One or the Asus Eee PC.  How much of a difference is there, really, in the hardware?  It really comes down to four areas: the main processor, the screen size, the size of the keyboard, and the graphics processor.

The entry-level MacBook Air has a 1.6GHz Core 2 Duo processor, while most netbooks run a single-core Atom processor at 1.6 to 1.8GHz.  The dual-core processor offers a lot more performance but is more expensive and chews up more battery life.  Other than that, the main difference is size.  There might be a few other changes to the hardware to bring the cost down a bit, but mainly, if Apple made the display and keyboard smaller, they’d have the netbook everybody is asking for.

The MacBook Air also has an NVIDIA graphics processor instead of something cheaper.  I don’t think this should add tremendously to the cost of producing the machine, compared to alternatives, so my preference would be to keep the NVIDIA GPU.  However, if necessary, this is one area where costs could be reduced a bit.

Let’s say Apple were to make a new machine in the netbook category and price it at $599.  That’s a fair bit higher than your basic PC netbooks, but it’s far cheaper than any portable Mac has been to date.  Frankly, does anybody think Apple would come out with a netbook at the same price as the PC models?  The first thing that machine would do is completely kill the market for the MacBook Air.  Why would you pay $1800 for a MacBook Air when $600 would get you practically the same machine with just a slightly smaller screen?

One big problem with this scenario is that a $600 Apple netbook would make it very hard to argue against the idea that Apple’s hardware is overpriced.  Aside from the fact that other netbooks would still be $200+ cheaper, such a machine would also fuel the question of why Apple’s other laptops are that much more expensive.

Don’t get me wrong… I would absolutely love to see a little Apple netbook.  I’ve considered trying to install OS X onto an Acer Aspire One or something like that, but just haven’t gotten around to trying it quite yet.  But I think the only way Apple would release such a machine would be as part of a larger change in strategy.  Perhaps they could release it as a second-generation, much cheaper MacBook Air, for example.  Taking the first-generation model off the market would eliminate the price comparison.

One thing that I would like to see Apple do if they enter the netbook market is to include a 3G cellular phone/modem in the machine.  This would help to differentiate their offering from the other netbooks on the market.  One of the big mysteries of the MacBook Air has always been why Apple didn’t include a 3G modem.

App Store For Mac

Given the overwhelming success of the iTunes App Store for the iPhone, one can’t help but wonder why Apple hasn’t tried to set up something along the same lines for the Macintosh.  The idea has been tossed around enough times in the media that it’s inconceivable that it hasn’t been discussed a few times in Apple HQ conference rooms, but so far there’s been no hint that they plan to make a move in this area.

It’s become commonplace for software publishers to let you buy their product online from their website and then download it to your computer.  This works fine in most cases, provided you’re looking for a certain specific title.  But what about those times when your search isn’t that specific?  Doing searches on Google simply isn’t going to offer the same sort of optimized and streamlined shopping experience that iTunes and the iPhone App Store offer.  With that, and the success of the iPhone App Store in mind, one has to think that the time is ripe for a Mac-based App Store.

Of course, there are big important differences between the iPhone platform and the Mac platform that affect how things would work.  First and foremost, with the iPhone there is no other (official) way of getting software onto the machine other than via the App Store.  This gives Apple total control over what gets onto the phone, unless you’re one of those outlaws who have jailbroken your iPhone.  (“Jailbreaking” is the process of modifying your iPhone so that you can install software via other means than the App Store.  It’s against Apple’s license agreement.)

So, the first issue to be answered about a Mac App Store is whether or not the applications have to be approved by Apple before they’re offered for sale.  Even though the Mac App Store wouldn’t be the only means of distributing software like it is for the iPhone, it would very likely become one of the main channels.  Being rejected for it could have a big impact on an app’s sales.

Quite a lot of the iPhone apps that do not get approved for the App Store are rejected because they’re doing something that would cause conflict with Apple’s relationship with the cellular phone service providers like AT&T.  For example, there are apps that turn your iPhone into a wireless access point to which other devices can connect to get Internet access.  This is known as “tethering”.  Apple hasn’t allowed this with the iPhone so far because the amount of data bandwidth such a setup typically uses is far beyond what is normally consumed by non-tethered Internet access.

Other iPhone software gets rejected from the App Store because it could potentially provide a backdoor into the system.  For example, Apple has stated a few times that Flash isn’t available on the iPhone because they’re unhappy with the performance they’ve seen.  However, one of the fundamental abilities of Flash is to download and run other Flash movies and access the Internet.  Given the full spectrum of abilities normally available through Flash, it would be child’s play to create another means of installing applications on the system.  The same is true for Java.  Therefore, it’s not hard to imagine that those issues are a factor, regardless of what reasons Apple may cite publicly.

Neither of these situations applies to the Macintosh, however.  The Macintosh is an open system and always has been.  So I would imagine that developers would have far fewer worries about rejection from a Mac App Store than they do with the iPhone App Store.  Apple might want to impose restrictions on things like adult content, but that’s the main thing that comes to mind.

The iPhone App Store offers developers a 70/30 split of the sale price of an application.  That is, Apple keeps 30% and the developer keeps 70%.  Apple incurs all of the expenses involved in processing the purchases.  For the prices that a typical iPhone app goes for, that’s not a bad deal for either side.  However, given that computer software can be somewhat more expensive, I think Apple would have to offer a more favorable split to developers, at least for the higher-priced stuff.  Maybe it would be 70/30 at the low end for software under $30, and 90/10 for software that is $500 or more, with varying tiers in between.

For developers whose products are currently sold at retail this is probably a better deal than they get selling wholesale to dealers or distributors.  For developers whose products are currently sold mainly through their own websites, it’s a bit less, but it’s likely that increased sales volume would make up the difference.

There would be a lot of big advantages to having a Mac App Store.  For one thing, it would provide a much more effective channel for low priced software than anything we have now.  Looking for a $10 game for your Mac?  There are plenty of them out there… your job: find them.  Currently you’ve got to go browse and search the web somewhat haphazardly.  But with a Mac App Store, all you’d have to do is run iTunes and click the mouse a few times.

There are literally thousands of cute little apps for the iPhone that could just as easily be done for the Mac, but which don’t make sense for developers to do without something like the App Store in place to help market them.  Or which end up being priced at $19.99 when sold through the publisher’s own website simply because of the low sales volume.  A Mac-based App Store could create an entirely new niche market for small $5.00 and under applets.

The downside to the whole idea is that it could hurt Mac dealers.  They would likely have to cut prices on Mac software in order to give customers a reason to buy software in the store rather than online.  This might shift some sales from the Apple retail stores to the Mac App Store, but Apple would likely be getting a bigger profit off the latter so they wouldn’t be hurting themselves.  As for retail outlets other than the Apple Store, it’s hard to say if Apple would care one way or the other.

You know, the more I think about this one, the more I’m leaning towards the idea that it’s something that Apple very well might do sometime soon.  There doesn’t seem to be any big reason why they shouldn’t do it, and lots of reasons why they should.