September 11th, 2019 by Mike Fulton
Categories: Adobe, Application Design, Creative Cloud, Uncategorized

Adobe Lightroom is an application designed to catalog and organize your collection of image files: digital photos, scanned images, or even just all that porn you keep downloading. There are two versions of the program these days. The first is Adobe Lightroom Classic, which is essentially the modern version of the original Photoshop Lightroom v1.0 from 2007. The other is Adobe Lightroom (originally known as “Lightroom CC”), and there are some hugely important differences between the two. In fact, I personally think the naming is a big mistake on Adobe’s part. The newer version should be called Adobe PhotoCloud or something like that. It would more accurately describe what’s going on and reduce confusion.

The Original Lineage

When it first came out, Lightroom was closely associated with Adobe’s flagship desktop application, Photoshop, but Adobe couldn’t quite decide what the branding was supposed to be. There were some very distinct differences: Photoshop was very clearly oriented around editing individual images and getting into them at the pixel level, while Lightroom was clearly descended from the mass-consumer product Photoshop Album and was oriented around managing collections of images, with editing functions designed to work on entire images, not pixel by pixel.

Photoshop got its start in the days when high-quality image editing began with a high-resolution scanner. In fact, my first copy of Photoshop came bundled with my first really good flatbed scanner.

Lightroom was born when digital cameras started to become something you didn’t need to take out a second mortgage to buy. The move to digital meant everything about photography was changing. Wedding photographers used to shoot film and come home with a bag of exposed rolls, which they’d have to drop off at the lab and then wait a couple of days to get dozens or even hundreds of small proof prints. Now, wedding photographers might come home from a shoot with a bag of memory cards and a few hundred digital images that needed some sort of post-production work, like tweaking the exposure or the color balance. The scripting and batch automation functions in Photoshop could do the job in a pinch, but it was not an optimal process.

At first, photographers would discuss and debate which was the right product for them, but it slowly became clear that Photoshop and Lightroom were complementary products, more than competitors.

One of the really cool things about Lightroom back when it first came out was that it was designed around the idea of non-destructive editing. You could import a bunch of images from your digital camera, select one, and then apply your basic color and exposure adjustments as desired. The program was well integrated with the RAW image formats used by better digital cameras, so you could dial in your adjustments and save a development preset with that information. For example, you might have a group of pictures which were taken on an overcast day, and you want to adjust the color balance and contrast. Once you’ve done that with one image and created a preset, you can easily select another 50 images, or 200, or even 2000, and then select that preset to apply your color balance and contrast settings to the whole batch. Instead of the old-fashioned method of actually going into each image file and adjusting the pixel values of each one, which could be impossible to undo later, Lightroom simply makes an annotation in the catalog of the adjustments that have been made. From that point on, whenever the image is displayed, printed, exported for web output, etc., those adjustments will be applied. You can make further adjustments at any time, or even reset everything back to the original version.

Classic also had a bunch of other cool features like the ability to create web galleries from the images in your catalog. It would generate HTML files, thumbnails, resized images, and then either save the whole thing out to a specified location or upload it directly to a website.

It could print contact sheets for you. Select a batch of photos, and it could print sheets of images ranging from full page down to the size of a slide. You could also connect to photo sharing sites like Flickr and upload images.

Common questions in those early days included: how is Lightroom different from Photoshop? Which one is right for me? The answer is that Photoshop is for advanced editing of individual images, while Lightroom is designed to manage collections of images and provide some basic workflow-oriented editing. In addition to the things we’ve mentioned, Lightroom has functions for removing red-eye and doing basic touch-up of skin blemishes, but it doesn’t let you get down and dirty and edit things pixel by pixel. It doesn’t give you access to all of the filters and special-effects add-ons.

One major problem with this dichotomy between the two versions of Lightroom is that if you started with the original v1.0 and then upgraded until you got to Classic, you’ve trained your brain to think in certain ways that don’t apply if you decide to start using CC, and which you can easily trip over when you do.

Classic keeps track of your images using a database file known as a “catalog.” This database stores all of the information about what images have been added, what adjustments have been made to each one, and metadata like keywords or image ratings. You can have one catalog file for all your images, or you can have multiple catalog files.

Your image files are not stored as part of the catalog. When you add images, Classic adds a link to the original file location. You can also optionally have it copy new files to a designated location but even if you do this, the images are only pointed to by the catalog, not contained in the catalog.

Earlier versions kind of encouraged the use of multiple catalog files because the program had a tendency to become sluggish as more images were added to a catalog, but it’s been many years and many program revisions since this was an issue. With the modern version, I’ve got catalogs with upwards of 60,000 images, and you don’t see slowdowns unless you do something like edit the keywords for all the images at once, or force it to rebuild image previews, but those slowdowns are predictable and expected.

One really goofy thing is that Adobe doesn’t allow you to create or use catalog files located on a network volume. The images, yes; the catalog files, no. The reason usually cited is that Lightroom uses SQLite as its database engine. SQLite is an SQL database library which you can embed directly into your program, thereby avoiding the need to have a separate database server installed, like SQL Server or MySQL. However, SQLite isn’t designed to be multi-user.

The problem is that Adobe is erroneously equating “network-based file” with “multiuser,” which are two entirely different things. It’s reasonable to decide that you don’t want to support multiuser functionality, where different users on different stations on the network would all be accessing the file at the same time. However, if you’re worried about that, it’s very easy to restrict a network-based file to being used by one user at a time. It’s a mystery why Adobe has a blind spot on this. But I do know it’s a pain in the butt, because it means I can’t keep my catalogs on my NAS box.

With the new “Lightroom,” it’s all designed around “the cloud,” and it leaves behind the notion of having multiple catalogs or having your files located wherever they happened to start out. There is a single catalog file, and when you add images, they’re copied to your local image cache and then synced to your Adobe Lightroom cloud account so they’re shared with your other devices using Lightroom CC.

Lightroom is superficially similar to Classic if you cross your eyes and don’t look too hard. A lot of old features are gone, like printing contact sheets, creating web galleries, or accessing photo sharing sites. What’s left over is the ability to browse through your catalog and do basic editing tasks.

If you have images you don’t want synced to the cloud, do not add them into your Lightroom CC catalog.

Conversely, if you’re primarily interested in syncing photos to the cloud across multiple devices, and won’t miss the various features of Classic, Lightroom may be what you want. But before you start trying to switch, I strongly urge you to research everything to make sure it’s what you want.

Lightroom CC has the ability to import Lightroom Classic catalogs. On the Mac, it also has the ability to import images from the Apple Photos app library (aka iPhoto). However, if you’re already into the Apple ecosystem and are using iCloud for photo sharing, I don’t really see any strong reason why you’d want to switch.

In the last few episodes, we’ve talked about how GEM’s event processing model could have been a bit better, and how it could have better facilitated more cooperation in the cooperative multitasking environment. Then we discussed how the event handling changed a bit under MultiTOS when there was preemptive multitasking.

This time, we’re going to talk about how GEM AES defined and managed GUI elements like windows, buttons, text boxes, and so forth. As we have been doing, we’ll continue to compare GEM to how Microsoft Windows does things.

And once again, to be clear, I’ve chosen Windows to compare against not because I think it’s the standard by which everything else should be judged, but rather because it first came out about the same time as GEM, and because it’s familiar to the greatest number of people.

If you aren’t reasonably familiar with programming for Microsoft Windows, and you haven’t read the previous entry in this series, you might want to do it now. In particular, make sure you’ve read the “What is a Window Class” sidebar.

GEM AES Lacks Consistency

Consistency is an important foundation of how Microsoft Windows works, going all the way back to v1.0. Every UI element is defined by a window class, and they all follow the same basic strategy for how they’re created, how they process events, and how they’re used as components of a greater whole. The really important thing, ultimately, is that everything in Windows works this way. Every UI element, from menu bars or menu items to buttons, combo boxes, or whatever else, is either an object defined by a window class, or is managed by such an object. This means everything works in a consistent manner. You don’t have to learn one set of rules for one part of the user interface and a different set of rules for something else.

By comparison, perhaps the biggest design flaw of GEM AES is how it lacks consistency in the way its UI (user interface) elements are defined, how they work, and how they’re put together to create a complete user interface for an application. GEM doesn’t have anything like window classes or a single, unified approach to everything. There are basically three different ways to do things.

  • Windows
  • Dialog Boxes
  • Menu Bars

Well, maybe it’s really more accurate to say two and a half. There’s some overlap in the way dialog boxes and menu bars are defined, but also some very fundamental differences in how they’re used.

Overall, the GUI features of GEM break down into two categories, which we’ll call The Elements and The Windows.

The Elements

First let’s talk about the category we called The Elements.  We’re talking about User Interface (UI) elements like buttons, check boxes, list boxes, editable text fields, and so forth.  These simple UI elements are defined via a simple data structure known as an OBJECT.  That’s an unfortunate choice of name by modern standards, but it was applied a few years before object-oriented programming really started to become much of a thing outside of computer science labs.

These elements are normally used in groups, not individually.  Such a group might be used as a dialog box, or a menu bar.

We won’t get into the minute details here, but let’s go over some of the basics of the OBJECT structure.  It was fairly small, just 24 bytes, as you can see below. You can probably guess the function of most of the fields from the names.

typedef struct
{
   int16_t    ob_next;   /* The next object            */
   int16_t    ob_head;   /* First child                */
   int16_t    ob_tail;   /* Last child                 */
   uint16_t   ob_type;   /* Object type                */
   uint16_t   ob_flags;  /* Manipulation flags         */
   uint16_t   ob_state;  /* Object status              */
   int8_t     *ob_spec;  /* Type specific data pointer */
   int16_t    ob_x;      /* X-coordinate of the object */
   int16_t    ob_y;      /* Y-coordinate of the object */
   int16_t    ob_width;  /* Width of the object        */
   int16_t    ob_height; /* Height of the object       */
} OBJECT;

To combine multiple UI elements into a larger, more complex UI structure like a dialog box, you used an array of OBJECT structures, also known as an OBJECT tree.

The first three fields of an OBJECT were used to create a hierarchy for items within the tree, such that certain objects could contain other objects.

The ob_type field specified what sort of UI element was represented. There were about 15 or so standard types which included simple UI elements like “button” or “editable text field”. This field not only indicated what the element was supposed to look like, but also how user interaction should be managed. Other fields contained flags that would indicate differences in appearance or behavior, like if the element is selectable, or if it was the default button, and so forth. There were other fields to hold the current object state, and of course, basic details like the object’s location and size.

Some object types required extra data like text strings or a bitmap. Extra data like that was stored elsewhere and pointed to via the ob_spec field.

Note that the OBJECT structure contains no pointers to code of any kind, like a message handler.

Such an array of OBJECT structures, along with the text, bitmap, or other data that goes with it, is known as an Object Tree, and more generally as a Resource. An individual resource might be part of a larger collection of resources loaded from a Resource File by the program at startup time.

Windows Also Has Resources

In Windows, “resource” is a much broader concept than with GEM, but one similar aspect is that a Windows resource file can contain definitions of UI structures like a dialog box, made up of a list of the individual UI elements required.

In GEM, the resource contains the actual data structures for the UI elements, but in Windows, it contains just a list of the parameters required to create each element. And although Windows UI elements have code associated with them, the resource does not contain that code.

In order to distinguish one type of UI element from another, the resource uses the name of the element’s window class. If it’s not a standard type, it’s presumed the application will load the appropriate library or otherwise initialize the window class before the resource is referenced.

This means that Windows can benefit from a relatively compact and simple description of the UI elements required, yet also allow the code for managing those elements be as simple or as complex as they need to be.

GEM AES Objects Are Just Data

The OBJECT structure defines what an individual element is supposed to look like, sort of. That is, it tells GEM, “I’m a button. Draw whatever you think a button should look like.”

The OBJECT structure also defines what an individual element is supposed to do, sort of. That is, it tells GEM, “This is a button. When the user interacts with me, do whatever sort of actions you think a button should do.”

Ultimately, in either case, because the OBJECT is just data, it really has no control over the final result. There has to be some code to interpret the OBJECT and make sense of it all. In GEM, this is done by the AES forms library and object library. The forms library is responsible for managing complete structures like dialog boxes, while the object library is responsible for manipulating or drawing UI elements either individually or as a group.

Under Windows, there is nothing that closely corresponds to the GEM AES forms or object libraries. The necessary code for UI elements to do their thing is specified when the window class for that each type of element is registered with Windows. So, each UI element is ultimately a reference to a block of code that knows how to create and display the element, and how to deal with any user interaction. And all of the basic “built-in” UI elements like buttons, checkboxes, etc., are defined in their own library, separate from the rest of Windows, so that even Windows ends up using them in the same way as regular applications.

Showing A Dialog Box

To do a dialog box in GEM, you call the AES form library’s form_do function, in effect saying, “Here’s a list of UI elements. Draw it, monitor the user’s interaction, and tell me what happened after it’s all over.”

The form_do function calls the object library function objc_draw to draw the UI elements specified in the resource tree passed to it, then it monitors the user’s interaction with those elements until the user hits an item with the mouse that is marked as an exit or touchexit item. At that point, control returns to the application.

But that doesn’t mean the dialog box is finished. Now the application has a chance to find out what the user did, by accessing the OBJECT structures and checking the various bits of status information. Depending on what it finds, the application has the option of updating the object tree in some fashion.  It might disable a button, clear a checkbox, or maybe update a list of selectable items.  Then once all that’s done, it can call form_do again for another round of interaction. Eventually, it can call other functions that signify the end of the dialog box, which will release the screen, send redraw messages to whatever was underneath, etc.

It should be clear that for anything other than very simple dialogs, you end up writing a lot of custom code that is unique to that specific dialog box. And all that still assumes you’re using only standard, vanilla UI elements. If you need any customization at all, you probably need to avoid calling the AES form_do function and instead, create your own block of code that does more or less the same thing, plus whatever custom functionality you require.

With Windows, creating a dynamic, interactive dialog is a much more simple process. You simply identify which events will require special attention, and you write handlers for those specific events. For example, let’s say that clicking an item in a list box should make certain buttons elsewhere in the dialog become enabled, disabled, or selected. All you have to do is attach a piece of code to the “item selected” event, and have that code configure the buttons as needed.

This is much simpler, yes?

Dialog Boxes Aren’t Windows, They’re Object Trees

In Windows, a dialog box is just another kind of window. It uses the same exact event processing model as anything else. In most cases the only significant difference for a dialog box is that the window is marked as being modal, meaning that you have to dismiss it before things like mouse events or keyboard events will be given to other windows. And even that is optional.

In GEM, a dialog box isn’t a regular window. Or any other kind of window, for that matter. It’s a completely different animal. Instead of being a window, a dialog box is essentially a list of objects arranged in a hierarchical fashion, an object tree as we discussed way back towards the start of this article.

A dialog box object tree will probably start with a G_BOX rectangle object used as an overall container.  Walking the tree from there, you’ll find text label objects, button objects, more G_BOX objects, editable text field objects, and other such UI elements.

A dialog box is typically defined by a resource tree within the program’s resource file. It could also be generated at runtime programmatically, although this would mostly be an exercise in masochism unless your program’s main function was being a resource editor.

To manage the user’s interaction with a dialog box, the AES provides the form_do function. This function uses a specialized event handler loop that knows how to do things like navigate the linked list of OBJECT structures in the resource tree to figure out which button was clicked, or which editable text field, etc.

When the user performs some action that indicates the dialog box is finished, the form_do function exits. For most dialog boxes, that’s the end of the process, but more sophisticated ones might update something and jump back into form_do again.

Menu Bars

The next part of the GEM AES trifecta of different ways to do things is the menu bar.  Menu bars are object trees, like a dialog box, but they’re managed by the system fairly automatically.  Once you’ve told GEM, “Here’s my menu bar!” the AES will display it at the top of the screen and allow the user to interact with it.

Under MultiTOS, the menu bar shown at any given moment is that which belongs to whatever application owns the top-most window on screen.

Once the menu bar is in place, things are fairly automatic as far as your program is concerned.  You don’t have to do anything except wait for the user to select a menu item. When that happens, the AES sends your application an MN_SELECTED message which indicates which item was selected.

Your program can dynamically change certain things about the current menu, like individual items being enabled or disabled, or you can update the item text, as long as the object tree for the menu bar doesn’t change when the user could be interacting with it.

Menu Bars Aren’t Modal, Except When They Are

Normally, one thinks of interacting with a menu bar as being a non-modal operation, and in the overall broad sense that’s true. But there are parts of the process that are modal. For example, before drawing a menu, GEM AES saves the appropriate portion of the screen to an offscreen buffer.  When the menu goes away, it restores the original screen contents.  This is done to eliminate the need to send redraw messages to whatever was underneath the menu.

But it’s also a modal operation.  That is, the AES locks down the screen while the user interacts with the menu bar.  This includes blocking any application that is currently waiting for an event library call to return.  This normally has little impact, but it can affect programs which are attempting to maintain some sort of live, animated display, as the animation will probably freeze when the user interacts with the menu bar. At least, if they happen to be refreshing the window for the animation at that moment.

Customizing Menus

Although a menu bar is a standard object tree, you can’t get away with placing any sort of OBJECT into a menu. While you’d probably not expect things like editable text fields to make much sense, certain more basic things like icons don’t really work either.  At least not as you’d expect.

When I was working on the 2nd revision of my FONTZ! font editor application, I wanted to be able to have hierarchical submenus in my menu bar.  The first problem I had was that the resource editor programs didn’t understand that idea.  But I managed to put it together.

I managed to get it to draw and interact with the mouse properly.  It didn’t happen automatically, but I did it using only standard AES & VDI functions.  I had to save the screen area underneath the submenu myself, and restore it afterwards.

But even after I got it to draw and track with the mouse, the submenu didn’t generate a message when the user selected an item.  Eventually I ended up doing it by tracking it myself and sending myself a MN_SELECTED message, instead of expecting GEM to do it.

Later revisions of GEM would have support for such submenus built-in, but as far as I know I was the first to do it using 100% legal AES functions before that.

Menus & Event Processing

In our last installment, we talked about how GEM’s event processing could sometimes, at least theoretically, mean that your program received and/or processed messages in a different order from which they occurred.

Menu item selection is a good example of how this can happen.  Suppose a program has a toolbar at the top of the window, and it contains a “Quit” button.  What happens if a user goes into the menu bar, selects the “Save” item, but when the menu goes away the mouse is right on top of the button and it gets clicked too? These might get separated, but it’s possible for both events to be returned by evnt_multi at the same time.

So now the program returns from evnt_multi with a message event for MN_SELECTED and a mouse event for the button click.  The program has no idea which event happened first, so it could SAVE then QUIT, or it could just QUIT and never process the SAVE request.

That’s probably a worst-case scenario, but it’s not hard to imagine other situations where things would be done out of order.

The Windows

The last point on the GEM GUI triangle is the basic application window.

Windows In GEM Aren’t Made Of Objects

Remember earlier when we talked about telling GEM, “Here’s a list of UI elements. Draw it, monitor the user’s interaction, and tell me what happens after it’s all over.”

Well that only applies to menu bars and dialog boxes. Windows aren’t a type of OBJECT, nor are they a resource tree of multiple OBJECTs. Windows are just… windows. They are essentially monolithic entities unto themselves.

You create and open a window by specifying a collection of flags that indicate whether individual window elements like scrollbars, close buttons, etc., should be present.  You would think such elements would be part of the standard collection of OBJECTs used for dialog boxes, but no. You also specify things like the position and size the window should have on screen.

When the window is created, you get back an integer window handle that is used thereafter to refer to that window. GEM keeps track of which window handles belong to each application.

But GEM doesn’t really manage the whole window. It tracks the user’s interactions with the outer perimeter, the frame, but not what happens in the window’s client area.

GEM AES Windows and Events

Most window-related events are pretty easy to deal with, but some require a lot of code to handle properly.  There are two reasons for this. First, GEM puts most of the burden for dealing with things like scrollbars onto the application. Second, the AES handles, or rather doesn’t handle, screen coordinates within a window: you always deal with global screen coordinates.  This connects with the VDI’s lack of ability to do any sort of coordinate system translation, as we discussed in an earlier episode.

You get mouse events for things that happen in a window’s client area, but the information you get from the event won’t directly reference the window at all.  It’ll be up to the program to determine which window was at the mouse position by calling the wind_find function. The possibilities include the desktop as well as any open windows belonging to the application.

Once you determine that a mouse event happened inside one of your application’s windows, then you’ll probably have to translate the mouse coordinates from global screen space into something relative to the window’s client area.  This is done using the wind_get function.

Then you’ll have to factor in any offsets represented by the window’s current scrollbar positions. That last part is further complicated by the fact that scrollbar positions and sizes in GEM are always expressed in a range of zero to 1000, regardless of whether you have a 400-pixel window showing a 410-pixel document or a 10000-pixel document.

And if your application has a “zoom factor” it can apply to what it displays, well, then you’ll have to factor that in at some point.

After all that, you’ll have a set of coordinates relative to the “document” being displayed and you can take whatever action is indicated by the mouse event.

Other than mouse events, the main thing that gets complicated is a redraw message.  When your program gets a redraw message, it will indicate the overall rectangle of the “dirty” area that needs to be redrawn.  In screen coordinates, of course, so you’ll have to jump through the same hoops we mentioned a paragraph or two ago to get an offset for your window’s client area.

And then you can’t just redraw the rectangle in the message.  Turns out, that is the overall bounding rectangle of a list of smaller “dirty” rectangles, which may or may not be contiguous.  You’ve got to use wind_get to get the first such rectangle, set the clipping and redraw it, then repeat the process until wind_get tells you that you’ve reached the end of the rectangle list.

And of course, you’ll have to be translating the coordinates back and forth between global screen space and window client space as needed.

By comparison, when a Windows UI element gets a WM_PAINT message, telling it to redraw something, the (0,0) position of the coordinate system is, by default, set to the top left corner of the element’s client area, with the scrollbar position already factored in. Plus, the graphics library’s clipping is already set to the dirty area being redrawn.  All your paint function has to do is a straightforward redraw of the window contents. If there are multiple “dirty” areas, it’s no big deal because you get a separate WM_PAINT message for each.

Mixing Objects & Windows

The AES manages the process of drawing menu bars and tracking user interaction, once you give it the address of a menu bar resource. It does the same for a resource arranged as a dialog box when you call the form_do function. But if you want to use OBJECTs and resource trees in a regular window, your application is going to have to watch over them and make it all work. You can’t call form_do, because that would block off access to anything other than the object tree. Likewise if you want a dialog box to have additional functionality beyond what GEM AES normally provides. In either case, your program has to supply the code to capture events, traverse the tree of OBJECT structures, and figure out how to apply the events to the OBJECTs.

Mostly, you’ll be replicating what GEM AES does, just so you have the ability to change one or two things somewhere. Essentially, your program has to implement the functionality of the form_do function and integrate that with whatever other event processing your window may require. Once developers got sufficiently ambitious that they were trying to do this regularly, Atari released a cleaned-up version of the source to the form_do function to make life easier.

Unlike Windows or other systems, there is no way in GEM for a program to create new types of UI element and drop them into a dialog box or menu bar alongside the predefined ones, mainly because GEM wouldn’t know what to do with an unknown ob_type value. It wouldn’t know how to draw it, or how to handle events for it. If you wanted to manage those details for yourself, then you could provide your own code to do it. Along with the code required to handle all the regular pre-defined object types that might be mixed in there too. Basically your code is all or nothing when it comes to UI elements.

Next Time Around

Our next AES-related article will talk about the scrap library, aka the clipboard. See you then!

