When Apple recently announced their new iPhone 4 at their Worldwide Developers Conference, they said it would be available for pre-order or pickup reservation on June 15 in preparation for the June 24th launch date. And true to their word, sometime not quite an hour after midnight, Pacific time, the Apple website began allowing people to either pre-order phones for shipment or to make a reservation to pick up the new phone(s) at their favorite local Apple Store.

This worked pretty well, for perhaps a minute and a half or so.  Then the proverbial you-know-what hit the fan.

Almost as soon as the Apple website started processing orders and reservations, things started to go wrong.  A wide variety of problems started happening and it wasn’t long before your chances of winning the lottery were better than your chances of completing a transaction on their website.

I personally tried numerous times to make a pick-up reservation, starting at about 12:30 am, but the website simply could not complete the transaction.  I tried doing it with my iPad.  I tried with my PC and Internet Explorer, Chrome, and Firefox.  I tried with my iMac and Safari 5.  Nothing worked. 

I finally gave up and went to bed.  In the morning, I jumped out of bed and tried again.  No joy.   So I showered and got dressed and tried once again before leaving for work, but the only difference was that some of the error messages were different. 

Once I was at work, I started trying periodically to see if anything had changed, and finally at about 1:30pm I was able to complete the transaction for my reservation.

At least… I think I did.  Complete it, that is.  More on this uncertainty a bit later.

From what I’ve heard from others, it was late in the afternoon before the website started to work somewhat more properly.

There was something going around about a hacker attack on the AT&T website contributing to all the problems.  That may indeed have been a factor, but I don’t think it would have been such a big issue if the design of the system were more robust.  I haven’t heard any sort of official statement about the cause of the problem, but it’s not hard to spot several potential trouble spots with the design of the web pages and pre-order system.

How It Was Supposed To Work

First let’s go over the steps that were SUPPOSED to happen, then we can come back and look at what went wrong.

  • Step 1 — the user selects “Pre-Order” from the top right corner of the website’s “iPhone” landing page.
  • Step 2 — The user is presented with a screen where they specify more precisely which model they want.  For the moment, the choices are the 8GB 3GS, or the 16GB or 32GB black iPhone 4.  The promised white iPhone 4 is not yet available.  This step also shows you the ship date.  It originally said June 24, the launch date, but as of now it’s saying July 14, so apparently Apple has already exhausted their initial stock.
  • Step 3 — The next step is for the user to specify their AT&T account status.  You tell them if you’re a new customer or an existing one, and also if you’re upgrading an existing phone or adding a line.
  • Step 4 — If you’re upgrading an existing phone, this step requires you to enter your mobile phone number, zip code, and last 4 digits of your SSN.  This information is used to determine your current phone plan and upgrade eligibility.
  • Step 5 — Now you get to see the price being asked for the model you selected.  The price depends on your upgrade status and model.  Press CONTINUE again to get to the next step.
  • Step 6 — If you’re making a pick-up reservation, then you’ll see a list of your three closest Apple stores.  You can select one from the list, or enter a zip code to look up other choices.  Once you have the right store selected, you click “Continue” and…
  • Step 7 — This is the last page.  It tells you that you can go to the store any time after it opens on the 24th to pick up your new phone, and that you’ll need to bring your ID.

What Really Happened, And How It Might Have Been Avoided

So what really happened and why was it such a disaster?  Well, since I tried and had it fail on me something like 40 times before it finally worked, I have some ideas on that subject.

I’m going to discuss the process of making a reservation to pick up the phone in-store.  Undoubtedly making a pre-order to have a phone shipped to you is somewhat different, although there is probably some overlap in the earlier steps of the process.

Steps 1-3 are more or less simple web pages without any significant processing going on.  The first big weak point with the whole setup happens at step 4, where Apple’s website has to send the data they’ve collected to AT&T’s server in order to retrieve your account status.  The issue here is that if AT&T’s server is slow to respond or fails altogether, then Apple’s website is unable to proceed.

This failure to get a timely response from the AT&T server is the primary cause of the whole problem, but it wouldn’t have been so bad if Apple had made a better attempt to recover from the error, rather than simply throwing up an “Oops, we’re sorry” message.  As the site’s currently designed, the problem essentially feeds on itself and makes itself worse and worse, like a snowball rolling downhill.

Here’s the problem: as soon as errors start to occur, people go back to the beginning and start the whole thing over again, at least a few times.  This just makes the server load worse at a point where it’s already throwing errors.   And that means more failures, and more failures mean more and more people starting over and failing again, then starting over and failing again, etc. 

One preventative measure that could have helped to avoid these cascading failures would have been to cache the response from the AT&T server in step 4.  This could have been done very easily by placing a cookie on the user’s system, or by using server session variables.  Sure, there are some legal concerns here, and it may have required a paragraph or two of fine print, but I doubt most people wanting to pre-order the new iPhone would be too concerned about Apple and AT&T sharing information.  Most of those people probably expect that information is shared in the first place.

Also consider that better use of Ajax in these pages might have meant the information could be saved temporarily without the need for either cookies or server session variables, avoiding the legal concern altogether.

It’s less clear if Apple’s site had to contact AT&T’s servers again for later steps in the overall process, but if so, then the same ideas apply to those steps as well. Caching the responses from AT&T wouldn’t matter when everything is working fine, but in a failure situation it would help reduce the overall load on the server, and that could have made all the difference in the world.

Another thing is, Apple should have had some mechanism in place to monitor the server load and take appropriate action if it was getting too high.  For example, for in-store pickup reservations, instead of this interactive step-by-step process, it could have simply collected the user’s data and saved it (along with a timestamp) for later processing and confirmation via email.  This way, the server load for confirming the information with AT&T’s server could be independent of the number of people attempting to make reservations.

Apple’s webpages seem to have no idea how to deal with failure other than by giving the user an error message.  They should have had some built-in timeout and retry mechanism so that they could recover if the server did not respond the first time.  There should have been some sort of more detailed indication given to the user about what was going on.  Two or three minutes of staring at the little rotating animated icon gets a tad boring.
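
Just to illustrate the retry idea, here’s a rough sketch in plain C of a timeout-and-retry wrapper with a simple backoff.  The att_account_lookup() function is purely hypothetical; it just stands in for whatever call Apple’s front end actually makes to AT&T’s servers, and obviously the real thing wouldn’t be a little command-line program like this.

#include <stdio.h>
#include <unistd.h>

#define MAX_RETRIES   4
#define TIMEOUT_SECS  15

/* Hypothetical stand-in for the real AT&T account lookup.  In this sketch
   it always fails, so the retry path below gets exercised. */
static int att_account_lookup( const char *phone, const char *zip,
                               const char *ssn4, int timeout_secs )
{
    (void)phone; (void)zip; (void)ssn4; (void)timeout_secs;
    return -1;   /* -1 = no response or failure */
}

/* Retry the lookup a few times, backing off between attempts, instead of
   immediately throwing an error page at the user. */
static int lookup_with_retry( const char *phone, const char *zip, const char *ssn4 )
{
    int attempt, delay = 2;

    for( attempt = 0; attempt < MAX_RETRIES; attempt++ )
    {
        if( att_account_lookup( phone, zip, ssn4, TIMEOUT_SECS ) == 0 )
            return 0;                       /* success */

        printf( "Attempt %d failed, retrying in %d seconds...\n", attempt + 1, delay );
        sleep( delay );                     /* back off before trying again */
        delay *= 2;
    }
    return -1;   /* still no luck: save the request and confirm by email later */
}

int main( void )
{
    return lookup_with_retry( "5551234567", "90210", "1234" ) == 0 ? 0 : 1;
}

The point isn’t the language; it’s the behavior.  Back off instead of hammering a server that’s already struggling, and when AT&T never answers, fall back to the save-it-and-email-later approach described above rather than just throwing an error at the user.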

And speaking of error messages, I cannot even begin to tell you how disgusted I am regarding this message I saw on the majority of my failed attempts last night and this morning.  After I’d sit patiently for several minutes waiting for Apple’s site to get a response back from AT&T’s servers or from their own servers, they had the unadulterated audacity to tell me that my session was closed due to inactivity!  Whose inactivity are they talkin’ about here?!   And aside from the inherent stupidity of the message in general, it’s compounded by the fact that you’d get this message within 4-5 minutes of starting the pre-order process.  Who the heck decided that 5 minutes was the ideal lifetime of these sessions?

Now if you tried the system and got as far as step 5, congrats.  Most of my failed attempts died in step 4.  However, at the step 5 to step 6 transition, we’re back to pinging the server for information.  Not sure what, except maybe the list of Apple stores nearest you.  And once again, this is information that should have been cached so that we don’t have to bang the server if we end up going through the process a 2nd time.  Or a 3rd time.  Or a 33rd time.  And there’s really no excuse here, since this step didn’t involve sharing data with AT&T.

I got as far as step 6 on five occasions.  The first four times it failed.  The fifth time, it finally went on to the final page, telling me that my reservation was complete and that I needed to bring my ID with me when I came to pick up the phone.  However, when I tried to print the page, the browser (Apple’s Safari) crashed on me!

After the print attempt crashed, it occurred to me that there was no email confirmation of the reservation.  This seemed odd, especially since I’d just done a similar process a few months earlier when I reserved an iPad, and they DID send me a confirmation email that time. This lack of confirmation is why I had a degree of uncertainty about having successfully completed the reservation process.

This is an APPLE User Interface?

While one might make the argument that “web pages don’t count” or something like that, as a long-time Apple user I expect better user interface design than what we see in these pre-order pages, even when they’re working properly.

For starters, the overall process was broken up into too many steps, and some of them are unnecessary.  Why make the user go through any more steps than absolutely necessary?  Sometimes if you have a big, complex form, there’s a good argument that breaking it up into smaller chunks will make it easier for the user, but that’s just not applicable here.  There’s really just not that much information collected from the user.

In step #2, you’re shown a choice of phones and must select a different “Pre-Order” button for each in order to proceed.  And yet, we see our choice again in step #5 and have to press “continue” to keep going.  I understand that step #5 is also verifying the price after your account information has been confirmed, but we could have combined that into step #6 quite easily.

Another thing is, verifying your account information isn’t strictly necessary for all of the steps that follow.  It’s really only needed so we can see the right price.  So why is this a synchronous process?  That brings up the point that this whole process was actually using multiple separate pages instead of a single page where everything was refreshed as needed via Ajax.  Making better use of Ajax would have made the pages work more smoothly and also would have reduced the overall server load.  It also could have meant that the pages could remember the user’s input in step 3 for use later, without having to use cookies or session variables.

Last of all, I’m amazed that when I finally did get to the end of the process, there was no mention of any confirmation email being sent out (nor did I receive one anyway).  This is a big oversight.

How It Should Have Worked

Here’s what the process should have been:

Step 3 — Ask the user for their mobile phone number, zip code, and last 4 digits of SSN.  Also ask for their name and email address.  If this is not the user’s first time through this step, then these fields should be pre-populated with the information saved in a cookie or the server’s session information.

Step 4 — Send data off to AT&T for account validation, asynchronously.  While waiting for the response, show the user price information based on the input from step 3.  Include a big note that this information is pending account confirmation.  When the account information comes back, validated or not, update the display.  Show a selection of Apple stores within 50 miles of the user’s zip code so they can select one for in-store pickup.

If AT&T’s server fails to respond within a reasonable timeframe, then Apple should SAVE my information into its own database so that it can retry later and send me an EMAIL-based confirmation once it’s able to get a response.

Step 5 — User selects “continue” to confirm reservation.  Page posts transaction to the Apple server, and is updated with final instructions once the transaction completes.  We’re also told that an email confirmation will arrive soon.

We’ve eliminated two steps, made it easier for the user if this is a retry, reduced the server load, and given the user more peace of mind with a confirmation email.

Way back in 1985, I started my “professional” career as a software guy, developing for the brand new Atari ST computer.  After a few years as a 3rd party developer, I was hired by Atari to provide developer support to ST developers in the USA.

Part of what made me a good choice for that role was that I had a really good in-depth understanding of GEM.   For example, when I worked on the WordUp word processor for Neocept, I wrote more than a dozen GDOS printer drivers for various printers, including color, that Atari’s drivers didn’t support.  Quite a lot of that information is still burned deep into my brain, even though it’s been many years since I actually wrote any code for the Atari.

These days, when something reminds me of GEM, the main things that come to mind are the problems, glitches, and workarounds.  This article is going to be mainly about the various design flaws in GEM, their workarounds, and how they impacted development.

GEM – The Origins

In the mid 80’s, just as computers were starting to break out of their character-based screens into more graphically oriented environments, Digital Research came out with GEM, or the Graphics Environment Manager.  The idea was to offer a graphic-based environment for applications that could compete with the brand new Macintosh computer, and Microsoft’s new Windows product.

GEM started life in the late 70’s and early 80’s as the GSX graphics library.  This was a library that could run on different platforms and provide a common API for applications to use, regardless of the underlying graphics hardware.  This was a pretty big deal at the time, since the standard for graphics programming was to write directly to the video card’s registers.  And since every video card did things a little differently, it often meant that a given application would only support one or two specific video cards.  The GSX library would later become the basis of the VDI portion of GEM, responsible for graphics device management and rendering.

GEM was basically a marriage of two separate APIs.  The VDI (Virtual Device Interface) was responsible for all interaction with graphics devices of any sort, while the AES (Application Environment Services) was responsible for creating and managing windows, menu bars, dialog boxes, and all the other basic GUI components that an application might use.

GEM was first demoed running on an IBM PC with an 8086 processor, on top of MS-DOS.  However, various references in the documentation to the Motorola 68000 processor and to integration with Digital Research’s own CP/M-68K operating system as the host make it seem clear that DR intended GEM to be available for multiple processors at a relatively early stage of development.

Ironically, the PC version of GEM never really took off.  Other than being bundled as a runtime for Ventura Publisher, there were never any major applications written for the PC version.  Ultimately, it was on the Atari ST series that GEM found its real home.

Overview of GEM VDI

In case you never programmed anything for GEM VDI, let me give you a brief overview of how it worked.  The first thing you do in order to use a device is open a workstation.  This returns a variety of information about the device’s capabilities.  Another API call, available once the workstation has been opened, returns still more information about what the device can do.  Once you have an open workstation, you can execute the appropriate VDI calls to draw graphics onto the device’s raster area.
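
If you’ve never seen it, the classic sequence for opening a workstation and drawing on it looked roughly like the sketch below.  I’m using the printer (device 21) as the example, and the exact header names varied from one compiler to the next, so treat this as illustrative rather than gospel.

#include <vdibind.h>    /* VDI bindings; the header name varied by compiler */

void print_demo( void )
{
    int work_in[11], work_out[57], ext_out[57];
    int handle, i;

    /* Request device 21 (the printer), default attributes, raster coordinates */
    work_in[0] = 21;
    for( i = 1; i < 10; i++ )
        work_in[i] = 1;
    work_in[10] = 2;

    v_opnwk( work_in, &handle, work_out );   /* work_out describes the device's capabilities */
    if( handle != 0 )
    {
        vq_extnd( handle, 1, ext_out );      /* additional device capability info */
        v_circle( handle, 200, 200, 100 );   /* draw onto the device's raster area */
        v_updwk( handle );                   /* for a printer, this renders the page */
        v_clswk( handle );
    }
}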

Most devices aren’t meant to be shared, so you can only have one workstation open at a time.  However, in order to support multitasking with multiple GEM applications and desk accessories running together, you need to be able to share the display.  Therefore, the VDI supports the notion of opening a “virtual” workstation, which is basically a context for the underlying physical workstation.

GEM VDI Design Issues

The VDI has a number of huge design flaws that are easily recognized today.  I’m generally not talking about missing features, either.  I’m sure we could come up with a long list of things that might have been added to the VDI given enough time and resources.  I’m talking about flaws in the intended functionality.  Many of these issues were a common cause for complaint from day one.

Also, let me be clear about this: when I suggest some fix to one of these flaws, I’m not saying someone should find the sources and do it now.  I’m saying it should have been done back in 1983 or 1984 when Digital Research was creating GEM in the first place.  Any of these flaws should have been noticeable at the time…  most of them are simply a matter of short-sightedness.

No Device Enumeration

Until the release of FSMGDOS in 1991, 6 years after the ST’s initial release, there was no mechanism for an application to find out what GEM devices were available, other than going through the process of attempting to open each possible device number and seeing what happened.  This was slow and inefficient, but the real problem underneath it all is a bit more subtle.  Even once FSMGDOS hit the scene, the new vqt_devinfo() function still required you to test every possible device ID.

The fix here would have been simple.  There should have been a VDI call that enumerated available devices.  Something like this:

typedef struct
{
/* defined in VDI.H - various bits of device info */
} VDIDeviceInfo;

VDIDeviceInfo deviceinfo[100];
int numdevices = 0;
int dev_id = 0;

while( (dev_id = vq_device( dev_id, &deviceinfo[numdevices] )) != 0 )
    numdevices++;

The idea here is that the vq_device() function would return information about the next available device with an ID number higher than the dev_id parameter passed into it.  So if you pass in zero, it gives you info on the first available device and returns that device’s ID.  When it returns zero, you’ve reached the end of the list.

Device ID Assignments

Related to the basic problem of device enumeration is the way in which device IDs were handled.  GEM graphics devices were managed via a configuration text file named assign.sys that lived in the root directory of your boot volume.  This file would look something like this:

PATH=C:\SYS\GDOS
01 screen.sys
scrfont1.fnt
21 slm.sys
font1.fnt
font2.fnt
font3.fnt

The first line specifies the path where device driver files and device-specific bitmapped fonts were located.  The rest of the file specifies the available devices and the fonts that go with them.  For example, device 21 is the “slm.sys” driver, and “font1.fnt”, “font2.fnt” and “font3.fnt” are bitmapped font files for that device.

The device ID number is not completely arbitrary.  There are different ranges of values for different device types.  For example, devices 1-10 were considered to be screen devices, 11-20 were considered to be pen plotter devices, 21-30 were printer devices, and so forth.  Oddly complicating things in a few places is Digital Research’s decision to mix input devices like touch tablets together with output devices like screens and printers.

The way device IDs worked was mainly a contributing factor in other situations, rather than a problem in its own right.  For example, because there was no easy way to enumerate available devices, many applications simply made the assumption that the printer was always going to be device 21 and that the metafile driver was device 31.  And in most cases, that’s all they would support.

The bigger problem, however, was that while the device ID assignments were mostly arbitrary, they were anything but arbitrary for the display screen.

Getting The Screen Device ID

Remember earlier when I explained how applications would open a “virtual” workstation for the screen?  Well, in order to do that, you have to know the handle of the physical workstation.  That’s something you get from the GEM AES function graf_handle().  One would think, since the physical workstation is already open, that you shouldn’t need to tell VDI the device ID, right?  Wrong.  Even though the physical workstation for the screen device is already opened by the GEM AES, you still need to pass the device ID number as one of the parameters when you open a virtual workstation.  So how do you get the device ID for the screen device that’s already open?  Well, there really isn’t a good answer to that question, and therein lies the chocolaty center of this gooey mess. 

On the Atari, the recommended method was to call the BIOS function GetRez() and add 2 to the returned value.  The first problem with this idea is there is no direct correlation between that value and anything like the screen resolution or number of colors available.   And even if there was some correlation, there are far more different screen modes than you can fit in the device ID range of 1-10.
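
For anyone who never had the pleasure, the recommended sequence looked roughly like this (again, the exact header names varied by compiler):

#include <osbind.h>     /* Getrez() */
#include <aesbind.h>    /* graf_handle() */
#include <vdibind.h>    /* v_opnvwk() */

int open_screen_vwk( void )
{
    int work_in[11], work_out[57];
    int handle, wchar, hchar, wbox, hbox, i;

    /* Handle of the physical screen workstation already opened by the AES */
    handle = graf_handle( &wchar, &hchar, &wbox, &hbox );

    /* ...and yet we still have to supply a device ID.  The recommended
       kludge: take the video mode from Getrez() and add 2. */
    work_in[0] = Getrez() + 2;
    for( i = 1; i < 10; i++ )
        work_in[i] = 1;
    work_in[10] = 2;                         /* use raster coordinates */

    v_opnvwk( work_in, &handle, work_out );  /* handle becomes the virtual workstation */
    return handle;
}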

Furthermore, this method only really worked for those video modes supported by the built-in hardware.  Add-on cards needed to not only have a driver, they also needed to install a patch to make GetRez() return the desired value when other video modes were used.

This pissed me off then, in large part because developers didn’t universally follow the recommended method and their code broke when Atari or third parties introduced new hardware.  In fact, the very first article that I wrote for the ATARI.RSC Developer newsletter after I started at Atari was about this very subject.

Looking back, the thing that pisses me off the most about this is the fact that I can think of at least three really easy fixes.  Any one of them would have avoided the situation, but all three are things that probably should have been part of GEM from day one.

The first, and most obvious, is that opening a virtual workstation shouldn’t require a device ID as part of the input.  The VDI should be able to figure it out from the physical workstation handle.  Seriously… what’s the point?  The device is already open!

Another option would have been adding a single line of code to the GEM AES function graf_handle() to make it also return the device ID number, rather than just the handle of the physical workstation.  If you’re going to insist on passing it as a parameter to open a virtual workstation, this is what makes sense.  After all, this function’s whole purpose is to provide you with information about the physical workstation!

Lastly, and independent of the other two ideas, there probably should have been a VDI function that would accept a workstation handle as a parameter and return information about the corresponding physical workstation, including the device ID.  This arguably comes under the heading of “new” features, but I prefer to think that it’s an essential yet “missing” feature.

Palette-Based Graphics

Perhaps the biggest flaws in GEM VDI stem from the fact that the VDI is wrapped around the idea of a palette-based raster area.  This is where each “pixel” of the raster is an index into a table containing the actual color values that are shown.  Moreover, it’s not even a generic bit-packed raster.  The native bitmap format understood by GEM VDI is actually the same multiple-bitplane format that most VGA video cards used.

Considering that the goal of the VDI was to create an abstract, virtual graphics device that could be mirrored onto an arbitrary actual piece of hardware, this is hard to forgive.

At the very least, the VDI should have acknowledged the idea of raster formats where the pixel value directly represents the color being displayed.  I’ve often wondered if this failure represents short-sightedness or a lack of development resources.

One might make the argument that “true color” video cards were still a few years away from common usage, and that’s undoubtedly part of the original thinking, but the problem is that this affects more than just the display screen.  Many other devices don’t use palette-based graphics.  For example, most color printers that were available back then had a selection of fixed, unchangeable colors.

Inefficient Device Attribute Management

Quite a lot of the VDI library consists of functions to set attributes like line thickness, line color, pattern, fill style, fill color, etc.  There’s an equally impressive list of functions whose purpose is to retrieve the current state of these attributes.

For the most part, these attributes are set one at a time.  That is, to set up the attributes for drawing a red box with a green hatched fill pattern, you have to do the following:

vsl_type( screenhandle, 1 );    // line type 1 = solid
vsl_width( screenhandle, 3 );  // set line thickness of 3 pixels
vsl_color( screenhandle, linecolor );  // line color index (red, for this example)
vsf_color( screenhandle, fillcolor );  // fill color index (green, for this example)
vsf_interior( screenhandle, 3 );  // fill interior style 3 = hatch
vsf_style( screenhandle, 3 );  // hatch pattern index

By the way, we’re making the assumption here that the linecolor and fillcolor variables have already been set to values that represent red and green colors in the current palette.  That’s not necessarily a trivial assumption but let’s keep this example modest.

At first glance you might say, “Well, six lines of code… I see how it could be improved, but that’s really not that terrible.”

It really is… if you know how GEM VDI calls work, you’ll recognize how it’s horribly, horribly bad in a way that makes you want to kill small animals if you think about it too much.  Each one of those functions is ultimately doing nothing more than storing a single 16-bit value into a table, but there’s so much overhead involved in making even a simple VDI function call that it takes a few hundred cycles of processor time for each of these calls.

First, the C compiler has to push the parameters onto the stack and call the function binding.  The function binding reads the parameters off the stack and then saves them into the GEM VDI parameter arrays.  Then it loads up the address of the parameter arrays table and executes the 68000 processor’s trap #2 function.  This involves a context switch from user mode to supervisor mode, meaning that all of the processor’s registers and flags have to be saved on entry and restored on exit.  From there, GEM picks up the parameters and grabs the appropriate function pointer out of a table, and then passes control to that function.  At that point, the very, very special 16-bit value we cared about in the first place is lovingly deposited into the appropriate location within the table that the VDI has allocated for that particular workstation handle.  Then the function exits and starts making its way back up to your code.  Along the way, there is much saving and restoring of 32-bit registers.  Those are uncached reads and writes on most ST systems, by the way.
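
To make that concrete, here’s roughly what a binding like vsl_width() boils down to.  This is a sketch from memory, so don’t hold me to the exact opcode number, but the shape of it is right:

/* The global VDI parameter arrays, as declared by the bindings library */
extern int contrl[12], intin[128], ptsin[128], intout[128], ptsout[128];
extern void vdi( void );        /* loads d0/d1 and executes trap #2 */

/* All of this machinery... to store one 16-bit value in a table. */
int vsl_width( int handle, int width )
{
    contrl[0] = 16;             /* opcode for vsl_width (from memory) */
    contrl[1] = 1;              /* one coordinate pair in ptsin */
    contrl[3] = 0;              /* nothing in intin */
    contrl[6] = handle;         /* workstation handle */
    ptsin[0]  = width;
    ptsin[1]  = 0;

    vdi();                      /* trap #2: user mode to supervisor mode and back */

    return ptsout[0];           /* the line width the device actually selected */
}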

The bottom line is that for things like this, GEM was simply horribly inefficient.  And the really bizarre part is that this could have been quite easily avoided.

The way that 68000-based programs make GEM VDI calls is to load a magic code into the 68000’s d0 register, and the address of the VDI parameter block in the 68000’s d1 register, and then make a trap #2 call.  The parameter block is simply a list of pointers to the 5 arrays that GEM VDI uses to pass information back and forth with the application.  My idea is simply to add another pointer to the VDI parameter block, pointing to a structure that maintains all of the current drawing attributes of the workstation, including the handle and the device ID.
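
For reference, the existing parameter block amounts to nothing more than five pointers, something like this (the struct name here is mine; the traditional bindings just use a five-element array of pointers):

/* The VDI parameter block as it exists today: five pointers to the global
   arrays that the bindings fill in before executing trap #2. */
typedef struct {
    int *contrl;    /* opcodes, parameter counts, workstation handle */
    int *intin;     /* integer inputs */
    int *ptsin;     /* coordinate (point) inputs */
    int *intout;    /* integer outputs */
    int *ptsout;    /* coordinate (point) outputs */
} VDIParamBlock;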

Suppose that opening a physical workstation (for device #21 in this example) looked something like this:

int v_opnwk( int devID, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation printerDevice;
int handle = v_opnwk( 21, &printerDevice, v_getcontext(0L) );

Opening a virtual workstation is similar, except that we specify the handle for the open physical workstation instead of the device ID:

int v_opnvwk( int physHandle, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation screenDevice;
int handle = v_opnvwk( physHandle, &screenDevice, v_getcontext(0L) );  // physHandle = handle of the already-open physical workstation

Thereafter, VDI calls look much the same, except that instead of passing the handle of your workstation as a parameter, you pass a pointer to the desired VDIWorkstation structure:

v_ellipse( &screenDevice, x, y, xrad, yrad );

instead of:

v_ellipse( handle, x, y, xrad, yrad );

The VDIWorkstation structure would look something like this:

typedef struct {
         VDIContext *ws;
         int *control;
         int *intin;
         int *ptsin;
         int *intout;
         int *ptsout;
} VDIWorkstation;

typedef struct {
         int contextSize;
         int handle;
         int deviceID;
         int lineType;
         int lineWidth;
         int lineColor;
     /* other various attribute fields listed here */
} VDIContext;

The heavy lifting is really done by the addition of the VDIContext structure.  The first member is a size field so the structure can be extended as needed.  And a new function called v_getcontext() would be used to allocate and initialize a context structure that resides in the application’s memory space.

With this setup, you would be able to change simple things like drawing attributes by direct manipulation of that context structure.  Let’s return to the example of setting up the attributes to draw a red rectangle with green hatch fill pattern.  Instead of the lines of code we saw earlier, we could instead have something like this:

screenDevice.ws->lineType = 1;  // line type 1 = solid
screenDevice.ws->lineWidth = 3;  // set line thickness of 3 pixels
screenDevice.ws->lineColor = linecolor;  // line color index (red, for this example)
screenDevice.ws->fillColor = fillcolor;  // fill color index (green, for this example)
screenDevice.ws->fillInterior = 3;  // fill interior style 3 = hatch
screenDevice.ws->fillStyle = 3;  // hatch pattern index

This requires no function calls, no 68000 trap #2 call, no pushing or popping a ton of registers onto and off of the stack.  This entire block of code would take fewer cycles than just one line of code from the first example, by a pretty big margin.

The one thing that this does impact is the creation of metafiles, since attribute setting would no longer generate entries in the output file.  But that is easily solved by creating a new function, let’s call it vm_updatecontext(), which would simply take all the parameters from the context structure and output them to the metafile all at once.

These are relatively simple changes from an implementation standpoint, but they would have had a significant impact on the performance of GEM on the 68000, and I suspect the difference would be comparable on the 808x processors as well.

More coming in part 2

In part 2 of this, written whenever I get around to it, I’ll talk more about the VDI, including more about true color support, and outline font support — too little, too late?


Once upon a time, Steve Jobs was the leader of a company called Apple.  Apple was known for being a technology leader, and their latest products were the envy of the industry.  Sadly, though, Apple’s sales figures didn’t seem to be able to keep pace with their reputation.  The board of directors of Apple, thinking that another style of management might be the way to go, decided that they’d had enough of Steve and handed him his walking papers.  The year was 1985.

Steve’s response to the situation was to start another computer company, called NeXT.  The Apple Macintosh was supposed to be the “computer for the rest of us” but with NeXT, it seemed Jobs’ goal was to create the “computer for the best of us”.  Largely inspired by his experience with getting the Macintosh into the education market, the NeXT Computer was going to be a powerful workstation designed to meet the needs of the scientific and higher education community.  At the heart of this new computer was going to be NeXTStep, an object-oriented multi-tasking operating system that included tightly integrated development tools to aid users in quickly creating custom applications.

NeXTStep’s Language Of Choice

At the heart of NeXTStep was a fairly new programming language known as Objective C.  It was basically an extension of the C language to add Smalltalk-style messaging and other OOP features.  Conceptually it’s not too far off from where C++ was at the time, but the syntax is fairly different.  However, that simply didn’t matter back then, because most programmers hadn’t done much, if anything, with C++.

In 1985, any sort of object oriented programming was a relatively new thing to most programmers.  Modern languages like Java and C# were still years in the future, and C++ was still largely an experiment, with no standard in place and drastic differences from one implementation to the next.  In fact, most C++ solutions at the time were based on AT&T’s CFront program, which converted C++ code into standard C code that would then be compiled by a standard compiler.  It would be a few years yet before native C++ compilers became commonplace.

There were other OOP languages around, like Smalltalk or Lisp, but they were largely considered academic languages, not something you’d use to create shrink-wrapped products.

Since there simply wasn’t any better solution, the choice of Objective C for NeXTStep was completely reasonable at the time.

What Happened NeXT

The first version of NeXTStep was released in September 1989.  Over the next few years, the NeXT computer and NeXTStep made a number of headlines and gained a lot of respect in the industry, but failed to become a major player in terms of sales.  In late 1996, NeXT had just teamed up with Sun Microsystems to create a cross-platform version called OpenStep, but before that really took off, something else happened.

In 1996, Apple was floundering.  Their stock price was down.  They’d had layoffs.  They had no clear plan for the future in place, and they were in serious danger of losing their place as the master of the graphical user interface.  Microsoft had just released Windows 95, which was a huge leap forward from Windows 3.1 in virtually every way, and PC video cards offering 24-bit and 32-bit color modes had become easily affordable.

Apple CEO Gil Amelio was fairly sure that updating the Mac to use some sort of object-oriented operating system was key to Apple’s future success, but Apple’s internal development had thus far failed to pay off.  Likewise for Apple’s investment in Taligent, a company formed in partnership with IBM for the sole purpose of developing an object-oriented operating system.  But then Amelio struck a bargain to purchase NeXT Computer and the NeXTStep operating system, bringing NeXT CEO Steve Jobs back into the fold, first as an advisor and then as CEO several months later when Amelio was shown the door.

It took Apple nearly 4 years to integrate their existing operating system with the NeXTStep tools and libraries, but ultimately NeXTStep formed the basis of the new Macintosh OS X operating system, released in March 2001.

Mac Development Tool History

When the Macintosh was first released in early 1984, you pretty much used either 68000 assembly language or Pascal to create programs.  Pascal had always been a popular language with the Apple crowd.  Apple had a set of development tools known as the Macintosh Programmer’s Workshop, which was essentially a GUI wrapper for a variety of command-line-oriented tools, including the 68000 assembler and the Pascal compiler.

It didn’t take long for the C language to become available for the Mac.  Apple released a version for MPW, but C on the Mac really took off with the release of Lightspeed C (later renamed THINK C), which had a GUI IDE of the sort that would be completely recognizable as such even today, almost 25 years later.  Think’s compiler quickly became the de facto standard development environment for the Mac.  Support for C++ would be added in 1993 with version 6.0, after the product was acquired by Symantec.

Unfortunately, when Apple made the transition from the Motorola 680x0 processor family to the PowerPC processor in 1994 & 1995, Symantec C/C++ failed to keep pace.  It wasn’t until version 8, released in 1997, that their compiler was able to generate native PowerPC code.

Fortunately, a new player in the game appeared to save the day.  When Symantec bought out Think, some members of the Think C development team started a new company called Metrowerks.  While Symantec was struggling to bring out a PowerPC compiler, Metrowerks released their new CodeWarrior C/C++ environment.  In many ways, CodeWarrior was like an upgrade to the Symantec product, and it quickly supplanted Symantec among developers.  CodeWarrior would remain at the top of the heap until Apple released OS X.

The NeXT Development Tool

When Apple released Mac OS X in 2001, there were two big paradigm shifts for developers.  The first was that Apple now included their development tools with the operating system, at no additional charge.  After nearly two decades of Apple charging premium prices for its tools, this was a big change.  Plus, the new development environment (Project Builder, which later evolved into Xcode) was an actual IDE, unlike the old Macintosh Programmer’s Workshop environment, with support for Objective C, C, C++, and Java.

The second paradigm shift was that everything you knew about programming the Mac was now old news.  You could continue to use an existing C/C++ codebase with the new Carbon libraries providing a bridge to the new OS, but this did not allow you to use the new tools such as Interface Builder.  If you wanted to take full advantage of Apple’s new tools and the Cocoa libraries, you needed to use Objective C instead of the familiar C or C++.

Objectionable C

I had been a Mac programmer since getting my first machine in 1986, and when Apple released Mac OS X in 2001, I was fully expecting to continue that tradition.  However, while I had no problems whatsoever with the idea of learning a new set of API calls, or learning new tools, I saw no good reason why it should be necessary to learn a new programming language.  Still, at one time in my younger days I had enjoyed experimenting with different programming languages, so I figured why not give Objective C a try?

Upon doing so, my first thought was, this was an UGLY language.  My second thought was, why did they change certain bits of syntax around for no good reason?  There were things where the old-style C syntax would have gotten the job done, but they changed it anyway.  The third thing that occurred to me was that this was a REALLY UGLY language.

After a few brief experiments, I pretty much stopped playing around with Cocoa and Objective C.  I started playing around with Carbon.  My first project was to rebuild an old project done in C++.  But the first thing I ran into was frustration that I couldn’t use the new tools like the Interface Builder.  It wasn’t too long before I decided I wasn’t getting paid enough to deal with all this BS.  Objective C had sucked all the fun out of Mac programming for me.

The shift to Objective C marked the end of Macintosh development for many other programmers I’ve talked to as well.  One can only conclude from their actions that Apple simply doesn’t care… if one programmer drops the platform, another will come around.  I’m sure there are plenty of other programmers around who either like Objective C just fine or who simply don’t care one way or the other.

As far as I’m concerned, Objective C is an ugly language, an ugly failed experiment that simply has no place in the world today.  It offers nothing substantial that we can’t get from other languages like C++, C#, or Java.  Nothing, that is, except for access to Apple’s tools and libraries.

Some Mac developers would tell you that the Cocoa libraries depend on some of Objective C’s capabilities like late-binding, delegates (as implemented in Cocoa), and the target-action pattern.  My response is that these people are confusing cause and effect.   The Cocoa libraries depend on those Objective C features because that was the best way to implement things with that language.  However, I have no doubt whatsoever that if Apple wanted to have a  C++ version of the Cocoa library, they could figure out a way to get things done without those Objective C features.

A Second Look

A few years later, when I got my first Intel-based Mac, I decided to revisit the development tools.  I wrote a few simple programs.  I’d heard a few people express the opinion that Objective C was sort of like the Ugly Duckling… as I used it more and became familiar with it, it would grow into a beautiful swan.  Nope.  Uh-uh.  Wrong.  No matter what I did then, and no matter what I do now, Objective C remains just as frickin’ ugly as it was when I started.

I really wanted not to hate Objective C with a fiery vengeance that burned from the bottom of my soul, but what are ya gonna do?  Personally, I’m looking into alternatives like using C# with the Mono libraries.  No matter how non-standard these alternatives are, they can’t be any more icky than using Objective C.

Could It Be That Apple Doesn’t Care About Making Life Easier For Developers? 

The real question here is why the hell hasn’t Apple created a C++ version of the Cocoa library?  It’s been 12 years since Apple bought out NeXT.  Why hasn’t Apple made an effort in all that time to adapt the NeXTStep tools to use C++?  Or other modern languages like C#?  Microsoft may have invented the C# language, but even the Linux crowd has adopted it for gosh sakes!

Or why not annoy Sun and make a native-code version of Java with native Apple libraries?

Could it be they are trying to avoid the embarrassment that would occur when developers abandon Objective C en masse as soon as there is a reasonable replacement?

Does Apple think developers are happy with Objective C?  Personally, I’ve yet to find a single programmer who actually even likes the language.  The only argument I’ve ever heard anybody put forth for using it has always been that it was necessary because it was the only choice that Apple offered.  I know that’s the only reason I use it.

Why does Apple continue to insist on inflicting Objectionable C on us?  I can only come to the conclusion that Apple simply doesn’t care if developers would rather use some other language.  It’s their way, or the highway.
