I keep seeing posts on Facebook and stories on websites about how Microsoft Windows 10 is getting installed on people’s system without their permission.

Nope. Wrong. Ain’t happening.

You may have heard that Microsoft lost a lawsuit about this, or that they were “fined” $10,000 over it.

Nope. Wrong. Didn’t happen.

Yes, there was a lawsuit. Microsoft was sued over this by a California real estate agent and settled out of court for $10,000. They didn’t lose the suit in court, and there was no fine.

The bottom line is, if Windows 10 is getting installed on your system, it’s absolutely because you gave permission. The problem is that people don’t pay attention, and that the permission may have been granted long before the upgrade actually took place.

The Windows 10 “Update”

A lot of the controversy here is because before the upgrade actually takes place, Windows shows the user a message, like this:

[Screenshot: Windows 10 upgrade prompt showing the scheduled upgrade date]

If you actually bother to read it, the message is very clear. The Windows 10 upgrade is already scheduled to occur on the date shown. This update is scheduled automatically because of the Windows Update settings on that machine. Settings which you, at some point, had to say “yes” to.

But I Closed The Window Without Clicking Upgrade!

Some people are making the argument that clicking the red “X” at the top right corner should close the window without taking any action.

The problem with that argument is that’s EXACTLY what it does. The upgrade is ALREADY scheduled and simply closing the window doesn’t UNSCHEDULE it. In fact, the window clearly gives you an option to CANCEL the upgrade.

It’s not really Microsoft’s fault if you can’t be bothered to read the message and click the link to cancel if you don’t want to upgrade. The problem really boils down to the fact that some people are just too lazy to read a few lines of text.

The real estate agent who brought the lawsuit claimed she “had never heard of Windows 10,” so it seems pretty obvious we’re not really dealing with a power user here. In fact, it’s rather hard to take such a claim seriously given the amount of media coverage each new version of Windows sees.

Her complaint was that the upgrade “rendered her computer unusable”. However, that’s as much detail as we get about her computer. There’s been nothing in any version of this story to indicate that there was actually any genuine problem after the upgrade. My guess is that the upgrade went perfectly fine, and that she was simply confused by the different colors and menu choices.

Not Microsoft’s Fault, But They Could Have Done Better

Arguably, Microsoft should have realized that people are lazy and that many wouldn’t read the message. And clearly, this real estate agent isn’t a particularly advanced user. When you combine these things, it’s practically guaranteed that there will be problems.

With that in mind, it’s probably a mistake for Microsoft to install the upgrade without any last-minute “Yes, please do it now” button, and in fact they’ve changed things so that the upgrade is not scheduled to happen automatically. Now the window comes up and gives you the option to upgrade, rather than giving you the chance to cancel an already-scheduled upgrade.

[Screenshot: Windows 10 upgrade notification]

In an earlier article (Visual Studio Source Control With Git & Visual Studio Online) we discussed how to add an existing Visual Studio project to source control using Git, including syncing with a remote Git repository on Microsoft’s Visual Studio Online website.

While perhaps not as common a choice as Dreamweaver or some other tools, one can use Visual Studio to manage non-ASP.NET website projects. One reason you might want to do this is to take advantage of Visual Studio’s built-in source control features. Dreamweaver has some basic source control features as well but relies more on external tools to manage everything. There are advantages and trade-offs either way, so it really kind of comes down to personal preference.

I recently decided to create VS projects to manage some WordPress plugins and themes, and I discovered that the process for getting everything going with Git and Visual Studio Online is a bit different and not as straightforward as one might like.

Install The Git Command Line Tools

Once again, since we may have some new people in the audience, before we do anything else, let’s make sure the Git command line tools are installed. Go to the View menu and select the Team Explorer item near the top. This will open the Team Explorer view if it’s not already open. Find that window and click the Home icon from the icons at the top, then click Settings in the next screen.

Once you’re in the Settings screen, near the bottom there should be a section labeled Git, and one of the links underneath it should be Install 3rd Party Tools. Click that and follow the prompts.

How To Git ‘er Done

There are many possible variations on the whole process that will work, but there are also some things that won’t work as you might expect if you have been using source control with ASP.NET web projects or non-web projects. Through trial and error, the method presented here seems to be the easiest and quickest I’ve found.

You’ll want to start with an empty workspace in Visual Studio, no open project or solution.

This method starts by creating a local repository. Go to the Team Explorer window (select it from the View menu if it’s not already open), then to the Connect section by clicking on the electrical plug icon, or by clicking on the section title underneath the icon bar, which will bring up a menu with choices for the different pages of the source control management.

[Screenshot: Team Explorer navigation menu]

Once you’re on the Connect page, then under Local Git Repositories, click “New” and then select the path where your project will be located. At this point that should be an empty directory.

If you want to use an existing directory with files already in it, move them to a temporary location for now, because Visual Studio won’t let you create a new repository in a directory that already has files in it. (The git command line itself has no such restriction, so this appears to be a limitation of Visual Studio’s tooling.)

Once you have the path selected, click the Create button.

[Screenshot: creating the new local Git repository]

Now that the Git repository has been created, make sure it’s selected by double-clicking it in the list.

If you had moved the files from the specified directory out to a temporary location, now is the time to move them back. If it’s an empty project so far, then I recommend creating a simple text file named something like {project}.txt (with your project name) so that we can add it to the repository and kickstart things.

Now go to the File menu and choose Open->Website. Specify the same folder where you created the new Git repository. You should get a new solution with a single project/website listed, like this:

[Screenshot: the new solution with a single website project]

At this point, you should save your project. Select “Close Solution” from the “File” menu. You’ll get a message asking if you want to save. Save the solution file to the same folder where you created the new Git repository. You’ll probably want to also set the solution filename to “{project}.sln” or something else that makes more sense than “localhost_42341” or whatever other random name was created by Visual Studio.

Saving the solution file to the repository folder is important. If you save it to a different folder, the solution file itself can’t be added to source control. That would be bad.

Once the solution has been saved and closed, reopen it. In the Solution Explorer, note the little plus signs next to the filenames. This indicates that the file will be added to source control with the next commit operation. However, since we just added an entire project, there may be files in the list that we didn’t want to include, so we’ll want to review everything and remove anything we don’t want to include.

Go to the Team Explorer window again, and click where it says “Connect”. A popup menu will appear. Select “Changes”.

[Screenshot: the Changes page in Team Explorer]

At this point you’ll see that the “Included Changes” section includes all of the files in the project directory. However, there may be files that you do not want included in source control for one reason or another. Review the list of files and if you see anything that should not be included, click on it and drag it down to the “Untracked Files” section at the bottom. You can use control-click or shift-click to select multiple items at once before dragging them.

Now we’re ready to make our first commit to the repository. Scroll back to the top of the window and click in the yellow edit box where it says “Enter a commit message”. Enter something relevant like “Initial check-in of project”.

[Screenshot: entering a commit message]

Once you’ve entered your message, click the “Commit” button. If all goes properly, you’ll get a message like “Commit eeaa0e65 created locally. Sync to share your changes with the server.”

Before we can sync, we need to specify the remote repository with which we’ll be syncing. If you haven’t already created the project on Visual Studio Online, now is the time. Refer to the earlier article (Visual Studio Source Control With Git & Visual Studio Online) if you need information on doing that.

Once you’ve created the remote repository you’ll need the URL. You can get this from the “Code” section of the project on the website:

[Screenshot: the repository URL on the project’s Code page]

Go back to Visual Studio, and click on where it says “Unsynced Commits” at the top of the Team Explorer window. Then enter the URL in the yellow box under “Publish To Remote Repository”.

[Screenshot: Publish To Remote Repository]

Click the “Publish” button and it will start uploading your files from the local repository to the remote server. This may take a while, depending on how many files there are and your connection speed. Eventually you should get a message telling you that the publish is completed.

Now, when you commit changed files to the local repository, you can sync to the remote server by hitting the “Sync” button after the commit operation finishes.

That’s pretty much it as far as getting everything working with source control is concerned. Have fun!

In part 1, we talked about the basics of how GEM VDI works and how that applies to the concept of VDI functions being passed from an application to a device driver.

This time around, we’ll talk about the printer driver kit that Atari sent out to selected developers. Printers were by far the most commonly supported device, and also had perhaps the greatest variety in technology.

Before we talk about the printer driver kit, let’s take a look at the state of technology for printers back in the mid-80’s.

The Printer Market At The Dawn Of Time (the mid 80’s)

Today, you can walk into a store and for $200 or so, maybe less, you can buy a fairly decent color laser printer that prints 10 pages a minute or more at resolutions upwards of 1200 dpi. To those of us who survived using computers in the mid-’80s, that is simply insane. You get so much more bang for the buck from printers these days that some people buy new printers instead of replacing toner cartridges.

In the mid-80’s, the printer market was much different than it is now. Aside from the differences in technology, printers were relatively much more expensive.  A basic 9-pin printer in 1985 would cost you $250 or $300. That’d be like $500-$600 today. You could literally buy a half-dozen cheap laser printers today for what it cost for a good 9-pin dot matrix printer back then. A good 24-pin printer would set you back $500 or more.

Laser printers, in 1985, were about $2500 for a basic Hewlett Packard LaserJet or similar model. Apple introduced the LaserWriter in March with a price tag of almost $7000. Fortunately, more and more manufacturers were entering the market, and prices were starting to drop. I paid about $1300 for my first laser printer in late ’86, and that was as cheap as they came back then. It was compatible with PCL 2 (printer control language version 2) which meant that most drivers for the HP LaserJet would work with it.

Today, the typical printer found in most people’s homes is an inkjet dot-matrix printer. That kind of printer wasn’t really a mainstream thing yet in 1985. The first truly popular model would be the HP DeskJet in 1988.

Graphics Printing Was SLOW!

Today, most printer output, other than specialty devices like receipt printers, is done using bitmapped graphics. The printer driver on your computer builds an image in the computer’s memory, and then when the page is complete, sends it to the printer. This gives the application and printer driver nearly complete control over every pixel that is printed.

However, in 1985, sending everything to the printer as one or more large bitmaps didn’t work so well, for a couple of reasons. First was the fact that sending data from the computer to the printer was fairly slow. Most printers connected to the computer via a Centronics-style parallel data port, which typically used the system’s CPU to handshake the transfer of data. Typical transfer speeds were rarely more than a couple of dozen kilobytes per second, even though the hardware was theoretically capable of much faster speeds.

Even though the data connection was fairly slow, the main bottleneck in most cases was the printer’s ability to receive the data and output it. Most printers had no more than a couple of kilobytes of buffer space to receive data, generally no more than about one pass of the print head when doing graphics. It was the speed of the print head moving back-and-forth across the page that was the ultimate bottleneck.

A popular add-on in those days was a print buffer, basically a little box filled with RAM that connected in-between the printer and the computer. This device would accept data from the computer as fast as the computer could send it, and store it in its internal RAM buffer. Then it would feed the data out the other end as fast as the printer could accept it. The print buffer could accept data from the computer more quickly than the printer could, and assuming it had enough RAM to hold the entire print job, it would free up the computer to do other things.

But even with a print buffer, if you had an impact dot-matrix printer and wanted to produce graphics output, you simply had to get used to it taking a while to print. For those with bigger budgets, there were other options. Laser printer manufacturers started to make smarter printers that were capable of generating graphics in their own local memory buffers. This was generally done using what we call a Page Description Language, or PDL.

Page Description Languages

With a PDL, instead of sending a bitmap of a circle, you would send a series of commands that would tell the printer where on the page to draw it, what line thickness to use, how big it should be, the fill pattern for the interior, etc. This might only take a couple dozen or perhaps a few hundred bytes, rather than several hundred kilobytes.

One of the most capable and popular PDLs was PostScript, which was introduced to the world with the release of the Apple LaserWriter printer. PostScript was actually a programming language, so you could define a fairly complex bit of output and then use it as a subroutine over and over, varying things like the scale factor, rotation, and so forth. PostScript also popularized the concept of using outline scalable fonts.

The downside to PostScript or other PDLs was that the printer needed a beefy processor and lots of RAM, making the printer fairly expensive, often more expensive than the computer you used to generate the page being printed. The Apple LaserWriter actually had a faster Motorola 68000 processor and more memory than early models of the Mac computer.

The other downside was that even if you’re printing a couple of dozen pages every day, the printer is actually sitting idle most of the time, meaning that extra processing power and RAM isn’t really fully utilized.

Graphics Output On A Budget

Back in the 8-bit days and early PC days, most people didn’t have thousands of dollars to drop on a laser printer. If you had a basic 9-pin dot matrix printer, it had relatively primitive graphics and it was fairly slow to output a page using graphics mode. Most of the time you made a printout of something text-oriented, it used the printer’s built-in text capabilities. Basic printing modes were fast but low-quality; more and more printers introduced a “letter quality” mode that was somewhat slower, though still much faster than doing graphics output.

However, the whole situation with printers was on the cusp of a paradigm shift. RAM was getting cheaper by the day. Computers were getting faster. The quality of graphics printing was improving. And, perhaps more than anything, the release of the Apple Macintosh computer in 1984 had whetted the market’s interest in the flexibility of bitmapped graphics output, and the subsequent release of Microsoft Windows and GEM with similar capabilities had added fuel to the fire.

Being able to combine text and graphics side by side was the new target, even for people with basic 9-pin dot matrix printers, and even though it was often orders of magnitude slower than basic text output, people were willing to wait. And for higher-quality output, they were willing to wait a bit longer.

Printer Drivers In The Wild West

Today, when you buy a printer, you get a driver for Windows, maybe one for Mac OS X. I would imagine Linux users recompile the kernel or something to get things going there.  (Kidding!)  And once you install that driver on your computer, that’s pretty much all you need to worry about. You tell an application to print, and it does.

By comparison, back when the ST first came out, printing was the wild wild west, and getting your printer to produce output could make you feel like you were in an old-fashioned gunfight. Before GUI-based operating systems became popular, every single program required its own printer driver.

And then we have the fact that there were about fourteen billion different ways of outputting graphics to a printer. Even within the product line of a single manufacturer, you’d find compatibility issues between devices that had more or less the same functionality as far as graphics output went. Even with the same printer, two different programs might have different ways of producing what appeared to be the same exact result.

Back in those days, most dot-matrix printer manufacturers followed the standards set by Epson. For example, when Star Micronics came out with their Gemini 10x 9-pin dot matrix printer, it used most of the same printer codes as the Epson FX and MX printers. Likewise with many other manufacturers. Overall, there was often roughly 95% compatibility between one device and another.

The problem was, most of the efforts towards compatibility were oriented around text output, not graphics. That is, the same code would engage bold printing on most printers, but the code for “Advance the paper 1/144th inch” used for graphics printing might be different from one printer to the next.  This was further complicated by the fact that printers sometimes differed somewhat in capability. One printer might be able to advance the paper 1/144″ at a time, while another could do 1/216″.

The one good thing was that in most cases it was possible for users to create their own driver, or more accurately, a printer definition file. For most programs, this was nothing more than a text file containing a list of the printer command codes required by the program. In some cases it was a small binary file created by a separate utility program that let you enter the codes into a form on screen.
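To make that concrete, a printer definition file of that sort might have looked something like this. The format and key names here are invented for illustration; the hex values are standard Epson ESC/P sequences:

```
; hypothetical printer definition file -- codes are hex bytes sent to the printer
BOLD_ON   = 1B 45      ; ESC E: select bold
BOLD_OFF  = 1B 46      ; ESC F: cancel bold
GFX_120   = 1B 4C      ; ESC L: enter 120-dpi graphics mode
PAPER_ADV = 1B 33 18   ; ESC 3 24: set line spacing to 24/216 inch
FORMFEED  = 0C         ; advance to the next page
```

A program would read a file like this at startup and substitute the codes wherever the corresponding function was needed, which is why swapping in a different printer often meant changing only a line or two.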

The Transition To OS-Based Printing

The main reason every DOS application (or Atari 8-bit program, or Commodore 64 program, etc.) had its own proprietary printing solution was, of course, the fact that the operating system did not offer any alternative. It facilitated the output of raw data to the printer, but otherwise provided no management of the printing process.

That started to change for desktop computer users in 1984, when Apple introduced the Macintosh. The Mac’s OS provided developers with the means to create printer output using the same QuickDraw library calls that they used to create screen output. And it could manage print jobs and take care of all the nitty-gritty details like what printer codes were required for specific printer functions. Furthermore, using that OS-based printing wasn’t simply an option. If you wanted to print, you had to go through the system. Sending data directly to a printer was a big no-no.

One significant issue with the whole transition to OS-based printing was the fact that printer drivers were significantly more complex. It generally wasn’t possible, or at least not practical, for users to create their own.

Apple addressed the potentially murky driver situation by simply not supporting third party printers. They had two output devices in those early years, the ImageWriter 9-pin dot-matrix printer, and then the LaserWriter. It would be a couple of years before third party printing solutions got any traction on Macintosh.

When Microsoft Windows came out a short time later, it addressed the question of printing in largely the same way as the Macintosh, except that it supported a variety of third-party printer devices. 

When the Atari ST came out, the situation regarding printing with GEM should have been theoretically similar to the Mac and Windows, except for two little things.

First was the minor tripping point that the part of GEM responsible for printing (GDOS) wasn’t included with the machine at first. What was included were BIOS and GEMDOS functions for outputting raw data to the printer. As a result, application programmers ended up using their own proprietary solutions.

Second was the fact that even after GDOS was released, there were only a few printer drivers included. And Atari didn’t seem to be in any big rush to get more out the door. As a result, application developers were slow to embrace GEM-based printing.

GDOS Printing On The Atari

As far as I know, the first commercial product to ship with GDOS support included was Easy Draw from Migraph at the start of 1986, about six months after the ST was released, and about two months after Atari started shipping machines with the TOS operating system in ROM rather than loading it from disk.

Migraph included pretty much exactly what Atari had given them as a redistributable setup: the GDOS.PRG file which installed the GEM VDI functionality missing from the ROM, the OUTPUT program for printing GEM metafiles, and a set of GEM device drivers and matching bitmapped fonts. The device drivers included a GEM Metafile driver and printer drivers for Epson FX 9-pin dot-matrix printers and Epson LQ 24-pin dot-matrix printers.

Compared to most other programs, this situation had a significant drawback. This was not Migraph’s fault in any way. It was a GEM issue, not an Easy-Draw issue. So what was the problem? Well, basically it comes down to device support. The GDOS printer drivers supplied by Atari simply didn’t work with a lot of printers. They targeted the most popular brand and models, but if you had something else, you had to take your chances regarding compatibility. This was a major problem for users, not to mention something of a surprise.

If there’s any aspect of GEM’s design or implementation where the blame for something wrong can be pointed at Atari rather than Digital Research, it’s got to be the poor selection of printer drivers.

With a word processor like First Word, if your printer wasn’t supported by a driver out of the box, chances were pretty good you’d be able to take your printer manual and figure out how to modify one of the existing drivers to work. Or, maybe you’d pass the ball to a more tech-savvy friend and they’d figure it out for you, but one way or the other, you probably weren’t stuck without a way to print. Not so with Easy-Draw, or any other program that relied on GDOS for output. GDOS printer drivers weren’t simply a collection of printer codes required for specific functions. If there was no driver for your printer, and chances of that were pretty good, you couldn’t print. Period.

The GDOS Printer Driver Kit

When I was at Neocept (aka “Neotron Engineering”) and our WordUp! v1.0 word processor shipped, we included basically the same GDOS redistributable files that Migraph had included with Easy-Draw, except for the OUTPUT program which we didn’t need because WordUp! did its own output directly to the printer device. It wasn’t long before we started getting a lot of requests from users who had printers that weren’t supported, or which were capable of better results with a more customized driver.

We asked Atari repeatedly for the information necessary to create our own drivers. I dunno if they simply eventually got tired of our incessant begging, or if they thought it was a way to get someone else to do the work of creating more drivers, but eventually we got a floppy disk in the mail with a hand-printed label that read “GDOS Printer Driver Kit” that had the source code and library files we needed.

There weren’t really a lot of files on that floppy disk, so I’ll go ahead and list some of them here:

  • FX80DEP.S
  • FX80DATA.S
  • LQ800DAT.S
  • LQ800DEP.S
  • STYLES.C
  • INDEP.LIB
  • DO.BAT

That might not be 100% accurate as I’m going from memory, but it’s close enough. I think there might have been “DEP” and “DATA” files for the Atari SMM804 printer as well, but it’s possible those were added later.

The “*DEP” files were the device-dependent code for a specific device. Basically there was a version for 9-pin printers and one for 24-pin printers. There were also some constants unique to individual printers that arguably belonged in the “*DATA” files instead.

The “*DATA” files were the related data, things like printer codes and resolution-based constants.

“INDEP.LIB” was the linkable library for what amounted to a GEM VDI bitmap driver.

The STYLES.C file contained definitions for the basic pre-defined VDI line styles and fill styles.

The DO.BAT file was a batch file that did the build.

Figuring It Out

There were no instructions or documentation of any kind. That may have been why Atari was originally reluctant to send anything out. It took a little experimenting but eventually I figured out what was what. The idea here was that the bulk of the code, the routines that actually created a page from the VDI commands sent to the driver, was in the INDEP.LIB library. The actual output routine that would take the resulting bitmap and send it to the printer was in the *DEP file. By altering that routine and placing the other information specific to an individual printer into the DEP and DATA files, you customized the library’s operation as needed for a specific printer.

The *DATA file would contain things like the device resolution, the printer codes required to output graphics data, and so forth. This included the various bits of information returned by the VDI’s Open Workstation or Extended Inquire functions.
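As a sketch of the kind of information involved, you can imagine the device data as a C struct along these lines. The struct layout and field names are my own invention, not Atari’s actual format; the two escape codes shown are standard Epson ESC/P sequences:

```c
/* Hypothetical sketch of the kind of data a "*DATA" file carried.
   Layout and names are illustrative, not Atari's actual format. */
typedef struct {
    int xres_dpi, yres_dpi;       /* device resolution reported via v_opnwk */
    int page_width, page_height;  /* printable area in pixels */
    unsigned char gfx_mode[2];    /* code to enter graphics mode */
    unsigned char line_adv[3];    /* code to advance the paper one graphics pass */
} printer_data;

/* Example values for an FX-80-style 9-pin printer (illustrative). */
static const printer_data fx80_like = {
    120, 144, 960, 1440,
    { 0x1B, 0x4C },        /* ESC L: 120-dpi graphics mode */
    { 0x1B, 0x33, 0x18 },  /* ESC 3 24: advance paper 24/216 inch */
};
```

Swapping in a different printer mostly meant filling in a different set of values here, which is why so many near-compatible printers could share the bulk of the driver code.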

The first drivers I created were relatively simple variations on the existing drivers, but fortunately that’s mainly what was needed. There were a ton of 9-pin dot-matrix printers in those days, and while many of them worked fine with the FX80 driver, some were ever so slightly different. Like literally changing one or two printer codes would make it work. The situation was a little better with the 24-pin printers but again there were a few that needed some changes.

The first significant change we made was probably when I created a 360 DPI driver for the NEC P-series 24-pin printers. These were compatible with the Epson printers at 180 DPI, but offered a higher-resolution mode that the Epson did not. I’ll admit I had a personal stake here, as I’d bought a nice wide-carriage NEC P7 printer that I wanted to use with the Atari. That thing was slower than crap but oh, gosh was the output good looking. At the time, for a dot-matrix impact printer, that is.

One thing that was confusing at first was that the startup code for the drivers was actually contained in the library. The code in the *DEP.S files was called as subroutines from the v_opnwk and v_updwk functions.

Anatomy Of A GDOS Printer Driver, Circa 1986

The INDEP.LIB library (or COLOR.LIB for color devices) contained the vast bulk of the driver code. It contained all of the functions necessary to handle all of the VDI functions supported by the device. It would spool VDI commands until the v_updwk function was called. That was the call which triggered the actual output. At that point, it would create a GEM standard raster format bitmap and render all of the VDI commands which had been spooled up since the open workstation, or previous update workstation.

In order to conserve memory, the printer drivers were designed to output the page in slices. A “slice” was basically a subsection of the overall page that extended the entire width, but only a fraction of the height. The minimum slice size was typically set to whatever number of lines of graphics data you could send to the printer at once. For example, with a 9-pin printer, the minimum “slice height” would be 8 scanlines tall. If the horizontal width of the page was 960 pixels (120 dots per inch), then the minimum slice size would be 960 pixels across by 8 pixels tall. The maximum slice height could be the entire page height, if enough memory was available to the driver.
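The slice-sizing arithmetic can be sketched like so. This is my own illustration of the idea, not actual GDOS driver code; it assumes a 1-bit-per-pixel monochrome page bitmap:

```c
/* How many scanlines fit in one slice, given a memory budget.
   Illustrative sketch, not real GDOS code. Assumes 1 bit per pixel. */
int slice_height(long budget_bytes, int width_px, int min_rows, int page_rows) {
    long bytes_per_row = width_px / 8;        /* one bit per dot */
    long rows = budget_bytes / bytes_per_row;
    rows -= rows % min_rows;                  /* whole print-head passes only */
    if (rows > page_rows) rows = page_rows;   /* never bigger than the page */
    return (int)rows;
}
```

With a 960-pixel-wide page (120 bytes per row) and a 32 KB buffer, for instance, you get 272 rows per slice after rounding down to a multiple of the 8-row head height; with enough memory, the slice grows to cover the whole page.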

The driver would allocate a buffer for a slice, then render all of the VDI commands with the clipping set to the rectangle represented by that slice. Then it would call the PRT_OUT function. This was a bit of code in the *DEP.S file that would output whatever was in the slice buffer to the printer, using whatever printer codes and other information were defined by the *DATA.S file. After a slice was output to the printer, the library would clear the buffer and repeat the whole process for the next slice down the page. For example, the first slice might output scanlines 0-95, then the next slice would do scanlines 96-191, and so forth until it had worked its way all the way down to the bottom of the page.

Once it got to the bottom of the last slice, the code in DEP.S would send a form feed code to the printer to advance the paper to the start of the next page.

This all may sound inefficient, since it had to render all of the VDI commands for the page over and over again, but the bottleneck here was sending the data to the printer so that didn’t really matter.

A Semi-Universal Printer Driver

Something I always kind of wanted to do, but never got around to, was creating a reasonably universal GDOS printer driver that stored printer codes and other parameters in an external configuration file that could be edited by the user. Or, perhaps, stored within the driver but with a utility program that could edit the data.

You see, the main part of the library didn’t have any clue if the printer was 9-pin, 24-pin, or whatever. So there’s no reason it shouldn’t have been possible to create an output routine that would output to any kind of printer.

In hindsight, that probably should have been the goal as soon as I had a good handle on how the driver code worked.

Next Time

Next time we’ll jump right into creating our driver shell.
