September 10th, 2019 by Mike Fulton
Categories: Apple, iOS, iPhone

I remember when Steve Jobs and Apple first announced Siri, the voice-activated… assistant. Yeah, let’s call her an assistant… try to keep a straight face. It seemed pretty cool.

In the nine years since, Siri has come to be known as a somewhat less than precise and reliable means of telling your phone to play some music.

There have been few disappointments tied to the rise of smartphones on a par with the spectacular failure of Siri to live up to the promise of that original announcement. Let’s have a look at why.

Speech recognition

The first big thing about Siri is that it does continuous voice recognition, listening for your command. In practice, this isn’t always as reliable as one would like. However, the real problem is that Siri doesn’t really do much besides initiate media playback or do web searches. We were promised more.

It Ain’t Privacy

Sure, there are probably a few neat tricks Siri could do if it didn’t mean someone would scream about “privacy.” But that’s not the biggest part of this problem.

The problem is that Siri has no real ability to parse text more complicated than a simple verb-object command. So you can say “Play Beatles” and Siri can figure out that “Beatles” means a particular artist and/or song, and then it can perform the “play” action on it. But it has no ability to decipher more complex commands, and it has no ability to track a conversation and keep a context of what’s being said. When you get right down to it, Siri’s text parsing isn’t as advanced as a 1980s 8-bit computer text adventure game like Zork.

Imagine a more interactive scenario.

You: “Hey Siri, what time is that movie playing?”

Siri recalls that you asked about the new Star Wars movie earlier in the day and looks up local showtimes.

Siri: “Star Wars: The Rise of Skywalker is playing at the AMC Marina Pacifica at 8:45pm and 9:40pm, and at Regal Lakewood at 7:55pm. Do you want to purchase tickets?”

You: “Is that in 3D?”

Siri: “Only the 7:55pm Regal Lakewood showtime is in 3D.”

You: “OK, get two adult tickets for the 3D show.”

That would be about a thousand times more useful than anything Siri has done since day one, and all it requires is the ability to keep track of the conversation to even a minimal degree.
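The missing piece in that exchange is nothing exotic. Here’s a toy sketch (purely hypothetical, nothing like Apple’s actual implementation) of the kind of conversation context that would let an assistant resolve “that movie” from an earlier mention:

```python
class DialogueContext:
    """Remembers the most recent entity mentioned for each category."""

    def __init__(self):
        # e.g. {"movie": "Star Wars: The Rise of Skywalker"}
        self.recent = {}

    def mention(self, category, entity):
        """Record that an entity of this category came up in conversation."""
        self.recent[category] = entity

    def resolve(self, category):
        """'that movie' -> whatever movie came up most recently, if any."""
        return self.recent.get(category)


ctx = DialogueContext()
ctx.mention("movie", "Star Wars: The Rise of Skywalker")  # earlier in the day

# Later: "what time is that movie playing?"
movie = ctx.resolve("movie")
print(movie)  # Star Wars: The Rise of Skywalker
```

A real assistant would need entity extraction, expiry of stale context, and disambiguation when several candidates exist, but even this dictionary-level memory is more conversational state than the verb-object commands described above.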

And there was no personal information in this exchange. So tell me, Apple, why wasn’t Siri upgraded to do this like 8 versions ago? Do we need to call the old Infocom Zork programmers and get them on the team?

Not Just Apple

By the way, while I’m calling out Apple and Siri specifically here, most of this applies equally to Amazon/Alexa, Google Assistant, and Microsoft/Cortana. Some of them do better than others at certain things, but none of them is as good as Zork. At least you can kill a grue in Zork.

September 20th, 2013 by Mike Fulton
Categories: Apple, iOS, iPhone, Mobile

The new version of iOS is out this week and it’s quite an update. There are a lot of changes in the way it looks as well as the way it works. Reaction to the purely aesthetic changes will of course vary from person to person, but there are a variety of changes to basic operations that I don’t think anybody will care for. Apple seems to have forgotten what “user friendly” means in a few cases.

These are just some initial, gut reactions to iOS7 after having installed it on my iPad 3.

Background Wallpaper
One change that is mostly aesthetic is the way your background wallpaper works.  Previously, the background was always a static, non-moving image.  But in iOS 7, Apple has introduced a new parallax scrolling effect that is designed to give everything more of a 3D look.  Basically, as you move the device around, the background shifts position to create the illusion that it’s on a separate plane from the icons and text in the foreground.

Visually, this can look quite nice.  However, the idea has a very significant flaw.  When you go to select an image to be used as the background, you’re taken to a screen to “Move and Scale” the image.  So far this is just like it was before.  However, you’ll notice right away that the image is already zoomed in about 35-40% by default.  Next you’ll notice that if you try to zoom out to see the whole image, it won’t let you.

Clearly, the reason for the default zoom level is so that there’s room for iOS to bounce the image around for the background parallax effect. However, as a practical matter it’s just plain frustrating for the average user who simply wants to use their favorite picture as a background.
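The arithmetic behind that forced zoom is simple: if the parallax effect can shift the image by some number of points in any direction, the image has to overhang the screen by that much on every edge. A back-of-the-envelope sketch (the 100-point travel figure is my illustrative guess, not a documented Apple value):

```python
def min_zoom(screen_w, screen_h, travel):
    """Smallest zoom factor that leaves `travel` points of image
    overhanging every edge of a screen_w x screen_h display."""
    needed_w = screen_w + 2 * travel
    needed_h = screen_h + 2 * travel
    return max(needed_w / screen_w, needed_h / screen_h)


# iPad 3 screen is 768x1024 points; assume 100 points of parallax travel
zoom = min_zoom(768, 1024, 100)
print(f"minimum zoom: {zoom:.0%}")  # minimum zoom: 126%
```

Under that guessed travel you’d need the image about 26% larger than the screen; the 35–40% default zoom I actually saw suggests either more travel or extra safety margin on top of it.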

There is a way to turn off the parallax feature. Or maybe it just cranks down the intensity, I’m not sure, but you have to go a few levels down into the Settings screens to do it. Once you do that, the default zoom level is somewhat less, but I was still unable to set the picture at the size I wanted. After futzing around with it for a few minutes, I finally gave up, loaded Photoshop Touch, and made a larger version of my wallpaper image with big white borders on all four sides so that I could zoom into the portion I wanted.

Then I discovered that pressing the buttons for “SET AS LOCK SCREEN”, “SET AS HOME SCREEN”, or “SET AS BOTH” didn’t appear to do anything.  There was no change in text highlighting or any other indication on screen that any selection had been made, and even clicking where it said “CANCEL” didn’t do anything.  I eventually got out of the screen by pressing the hardware button on the top of my iPad.

The Mail App
Before iOS 7, the Email app would load messages extending a certain period into the past, depending on your settings.  I had mine set for 1 week, meaning that when I loaded the app it would retrieve message headers dated within the previous week.  You could also specify the maximum number of messages it would load by default.  You could always load more/older messages by pressing the “Get More Messages” button.

All those options seem to have disappeared.  There’s no longer anything in settings to specify the maximum number of messages to retrieve.  There’s no setting for how many days/weeks/months to go back when retrieving messages.  Now it just goes out and grabs everything it can find on the server.

I do not care for this behavior. I tend to leave a lot of old messages on my mail server, periodically doing a big purge. The old Mail app accommodated this habit perfectly; the new one, not so much.

Folders
Apple has changed the way folders work on your home screen. Before, clicking a folder would cause it to expand to show the contents. Each folder could contain a number of app icons equivalent to two rows less than the overall top-level screen. So if you had an iPhone 4, for example, with 4 rows of icons per page plus the additional row at the bottom that was common to all pages, you’d get 3 rows of 4 icons, for a total of 12 items per folder. The taller screen of the iPhone 5 would give you 1 more row, for a total of 16 items.
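That old capacity rule reduces to a one-line formula, sketched here just to make the numbers above concrete (the two-row deduction is as described in this post, not an official spec):

```python
def folder_capacity(page_rows, dock_rows, columns):
    """Pre-iOS-7 folder capacity: the screen's total rows (icon pages
    plus the shared dock row) minus two, times icons per row."""
    return (page_rows + dock_rows - 2) * columns


print(folder_capacity(4, 1, 4))  # iPhone 4: 12
print(folder_capacity(5, 1, 4))  # iPhone 5: 16
```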

With iOS 7, when you open a folder, it creates a little window that is centered on the screen, showing the folder’s contents.  Only, now it’s a fixed 3×3 array with multiple pages you can switch between by swiping a finger across, just like the main pages.

I like the ability to have more items in a folder, but the small 3×3 array means that I have to scroll through folders where I previously saw all the icons at once.  That sucks and it’s completely unnecessary. For starters, the windowed area where the folder is shown could be larger.  Within the folder, you could also tighten up the spacing of the icons a bit to make room.  I’m resigned to it staying this way, however… we’re on version 7 now and still don’t have that kind of control over icons on the main page, because Apple doesn’t think users need to worry about things like that.

Sluggish Response

I’m also noticing my iPad seems kind of slow to respond to input now that it’s running iOS 7. Maybe that’s because I was downloading updates in the background; I don’t recall having seen this before. Hopefully it will go away once the updates are done.

Well that’s all for now… more to come once I’ve used it a bit more.

September 19th, 2012 by Mike Fulton
Categories: Apple, iPhone, Mobile

Apple’s announcement of the iPhone 5 last week has prompted a lot of people to take note of the fact that it’s very much an incremental, evolutionary step forward, not so much a revolutionary one. Most of the new hardware features are either relatively small improvements or else just catching up to what other phones have already had for the last year or so.

A good example is the “4G” support. Although it appears at first glance that Apple’s done a pretty good job at implementing support for the newer, faster data transfer protocol, other phones have had this for a while now. In fact, most people were surprised that last year’s iPhone 4S model didn’t support it.

A larger screen? This could arguably be a bad thing if it makes the phone overly bulky, but fortunately the screen really isn’t THAT much bigger, and the difference is offset by the phone’s reduced thickness and weight. Still, it’s hardly a revolutionary step forward, and realistically Apple is just responding to the competition, which has been using a variety of larger screen sizes over the past year.

The camera is a little better, offering 1080p video recording instead of just 720p. The processor is faster and a bit better regarding power consumption. What else is new about the hardware… the docking connector? This is really a few years overdue, and even with adapters available it’s going to play havoc with the accessories market. For now I have to say that this one is as much bug as it is feature.

Let’s turn our attention to the software. iOS 6 is a decent enough upgrade, I suppose, but it doesn’t break any new ground either. The 3D mode of the new Maps app is kinda cool, but really nothing we haven’t seen before on other platforms.

What’s The Next Big Thing?

The evolutionary nature of the iPhone 5 has led many to speculate on what the next big revolution will be, and if it will come from Apple or someone else. I’ve seen people suggesting all sorts of ideas, quite a lot of which seem pretty far fetched to me. So much so, in fact, that it’s prompted me to make this post in order to respond to some of them.

One idea that I’ve seen thrown out a few times is that the ubiquitous touch screen we’ve all come to know and love, or in some cases hate, is an evolutionary dead-end.  The cellphone equivalent of a Neanderthal. Some are suggesting that the next big paradigm shift will abandon the touch screen in favor of another sort of interface.  Alternatives suggested include a holographic display that uses hand gestures in mid-air, like the computer setup in the movie Minority Report.  Or how about a completely voice-operated phone?

People have already demonstrated computer user interfaces like those in Minority Report using off-the-shelf devices like Microsoft’s Kinect so we know such things are possible, but I have strong doubts about how practical they are in the real world, and even stronger doubts about how applicable the idea is to being the primary means of operating a mobile device.

The first problem is that anybody using the gesture system shown in the movie as a primary UI had better be in awesome shape if they intend to use it for more than a few minutes at a time.  All of those gestures are essentially a big aerobics workout.  That might be OK in small doses, but for those of us who spend several hours a day on the computer, I don’t think it’s going to work quite so well. You thought carpal tunnel syndrome was annoying, how about worrying about getting a coronary from doing a file backup to the network?

The second problem lies in using such an interface for a mobile device. If we forget about the aerobics workout aspect for a moment, it’s not hard to imagine using such an interface for a stationary workstation in an office or someone’s home, but for a mobile device? Seriously?

Assuming you can build the necessary sensors into the device in the first place, is using the Minority Report interface with a mobile device even slightly practical? Do the people suggesting this nonsense really think that people are going to take their phones out of their pockets, set them down on the floor or a table top, and start waving their arms around?  In public?

Not to mention the fact that a holographic display puts all your information in mid-air for everybody to see. That leads us to the next idea, which is a device that uses something like Google Glasses for the display.

I could see the glasses idea being combined with the Minority Report gesture UI. You wouldn’t have to take your phone out and set it down somewhere, presuming the sensors are built into the glasses. So it’s somewhat more practical. However, you’re still faced with the idea of people waving their arms around to control the device, and between the aerobics workout aspect of things and the pure embarrassment involved, I think this might not catch on with everybody.

A completely voice-operated phone is perhaps a bit more practical. At least in some respects.  Maybe.  For some specific apps, anyway.  But certainly not everything is going to work right with this setup.  For example, what about the first time you have to enter a password on a voice-operated phone while you’re in the middle of a bunch of people? Or what if it’s simply too dang noisy and the phone can’t hear you? I think voice activation is absolutely going to be an even bigger thing down the road than it is already, but I don’t think it’s practical as the only means of operating your device.

And answer this… how does one play Fruit Ninja with a voice-operated phone?

The Real Danger To Touch Screens

To me, the biggest danger facing the future of the touch screen isn’t some better idea waiting in the wings, but rather it’s all the patent nonsense going on right now between the iOS and Android camps. Frankly, I think there’s been a huge mistake made at the patent office in granting a lot of these patents in the first place.

Patents aren’t supposed to be granted to mechanisms and methods that are obvious given the context in which they’re found. Not necessarily obvious to the general public, but obvious to someone working with such technology. Likewise, patents aren’t supposed to be granted for things which have been demonstrated through prior art, even if it was a fictional representation rather than real technology.

I think a lot of the touch screen-related patents fall into those categories. Especially the prior-art thing. Patenting the electrical hardware mechanism for a capacitive touch screen that can recognize multi-touch? Perfectly reasonable. But given the context of having a multi-touch screen, patenting a particular multi-touch gesture is just ridiculous.

There’s plenty of prior art demonstrating touch screen gestures. Early demos of the Microsoft Surface table-top touchscreen device (not a capacitive screen, but one that DID recognize multi-touch) show many of the same gestures that Apple managed to patent later on. And you could probably put together at least 10 or 15 minutes of footage from old Star Trek episodes showing touch screen interfaces that use these same gestures.

Following the recent Samsung-Apple patent lawsuit, where Apple won, the jury foreman admitted in an interview that the jury skipped over the prior art question because it was bogging them down. Seriously? The validity of a patent is key in an infringement lawsuit, and they just skipped over it? Wow.

I think Apple is a very innovative company and many of the patents surrounding the iPhone and iPad are quite reasonable. But there’s a good number that just make no sense at all. If these patents are ultimately upheld, I think it’s going to effectively give Apple a monopoly on the touch screen, and I don’t at all think that’s a good thing. It’s going to make other companies spend a lot of time and effort trying to come up with alternatives, most of which probably won’t work as well.
