Apple’s Siri is getting better!
Over the past couple of years it’s become a thing, in the nerd community, to complain incessantly about how inadequate Siri is. To which I incessantly roll my eyes. I’ve written many times about Siri, mostly positively, because my experience has been mostly positive. Siri’s not perfect, but for me it’s usually pretty great. A month ago HomePod came into my house and I’ve been integrating it into my daily flow. I’d actually started a “Month with HomePod” sort of post but decided to fold it into this one, because something shifted in my thinking over the past day, and it has to do with Siri and iOS as an ecosystem.
It began with Jim Dalrymple’s post over at The Loop: Siri and our expectations. I like the way he discusses Siri there. Rather than just complaining, as so many do, he breaks it down in terms of expectations per device and whether each device meets them. To summarize, he’s happy with Siri on HomePod and CarPlay but not on iPhone or Watch. His expectations on the phone and watch are higher, and they are not met, which leads him to conclude: “It seems like such a waste, but I can’t figure out a way to make it work better.”
As I read through the comments I came to one by Richard in which he states, in part:
“I’ve improved my interactions with Siri on both my iPhone 8 and iPad Pro by simply avoiding “hey Siri” and instead, holding down the home button to activate it. Not sure how that’s done on an iPhone X but no doubt there’s a way….
A lot of folks gave up on Siri when it really sucked in the beginning and like you, I narrowed my use to timers and such. But lately I’m expanding my use now that I’ve mostly dumped “hey Siri” and am getting much better results. Obviously “hey Siri” is essential with CarPlay but it works well there for some odd reason.”
Since getting the HomePod I’ve reserved “Hey Siri” for that device and the watch. My iPads and iPhone are now activated via button, and yes, it seems better because it’s more controlled, more deliberate and usually in the context of my iPad workflow. In particular I like the feel of activating Siri from the iPad with the Brydge keyboard, which has a dedicated Siri key on the bottom left. The interesting thing about this keyboard access to Siri is that it feels more instantaneous.
Siri is also much faster at getting certain tasks done on my screen than tapping or typing ever could be. An example: searching my own images. With a tap and a voice command I’ve got images presented in Photos matching whatever search criteria I’ve spoken. Images of my dad from 2018? Done. Pictures of dogs from last month? Done. It’s much faster than first opening the Photos app and then tapping into a search. Want to find YouTube videos of Stephen Colbert? I could open a browser window and start a search that loads results in Bing, or type in YouTube, wait for that page to load, then type in Stephen Colbert, hit return and wait again. Or I can activate Siri and say “Search YouTube for Stephen Colbert,” which loads much faster than a web page, then tap the link in the bottom right corner to be taken to YouTube for the results.
One thing I find myself wishing for on the big screen of the iPad is that the activated Siri screen take up just a portion of the display rather than a complete take-over of the iPad. Maybe a slide-over? I’d like to be able to make a request of Siri and keep working rather than wait. And along those lines, I wish Siri were treated like an app, allowing me to go back through my Siri request history. The point here is that Siri isn’t just a digital assistant but is, in fact, an application. Give it a persistent form with its own window that I can keep around and I think Siri would be even more useful. Add to that the ability to drag and drop (which would come with its status as an app) and it’s even better.
Which brings me to voice and visual computing. Specifically, the idea of voice first computing as it relates to Siri, HomePod and others such as Alexa, Google, etc. After a month with HomePod (and months with AirPods) I can safely say that while voice computing is a nice supplement to visual for certain circumstances, I don’t see it being much more than that for me anytime soon, if ever. As someone with decent eyesight and who makes a living using screens, I will likely continue spending most of my days with a screen in front of me. Even HomePod, designed to be voice first, is not going to be that for me.
I recently posted that with HomePod as a music player I was having issues choosing music. With an Apple Music subscription there is so much, and I’m terrible at remembering artist names and even worse at album names. It works great to just ask for music or a genre or a recent playlist. That covers about 30% of my music playing. But I often want to browse, and the only way to do that is visually. So, from the iPad or iPhone I’m usually using the Music app for streaming or the Remote app for accessing the music in my iTunes library on my Mac Mini. I do use voice for some playback control and make the usual requests to control HomeKit stuff. But I’m using AirPlay far more than I expected.
Using the Music app and Control Center from iPad or iPhone is yet another way to control playback.
Apple has made efforts to connect our devices together with things such as AirDrop and Handoff. I can answer a call on my watch or iPad. At this point everything almost always remains in constant sync. Moving from one device to another is almost without any friction at all. What I realize now is just how well this ecosystem works when I embrace it as an interconnected system of companions that form a whole. It works as a mesh which, thanks to HomeKit, also includes lights, a heater, coffee maker with more devices to come in the future. An example of this mesh: I came in from a walk 10 minutes ago and I was streaming Apple Music on my phone, listening via AirPods. When I came inside I tapped the AirPlay icon to switch the audio output to HomePod. But I’m working on my iPad and can control the phone’s playback via Apple Music or Control Center on the iPad or, if I prefer, I can speak to the air to control that playback. A nice convenience because I left the phone on the shelf by the door whereas the iPad is on my lap.
At any given moment, within this ecosystem, all of my devices are interconnected. They are not one device, but they function as one. They allow me to interact, visually or with voice, with different iOS devices in my lap or across the room, as well as with non-computer devices in HomeKit. That means I can turn a light off across the room or, if I’m staying late after a dinner at a friend’s house, turn on a light for my dogs from across town.
So, for the nerds that insist that having multiple timers is very important, I’m glad that they have Alexa for that. I’m truly happy that they are getting what it is they need from Google Assistant. As for myself, well, I’ll just be over here suffering through all the limitations of Siri and iOS.
Last spring I finally purchased my first smart plug, a HomeKit-compatible plug from KooGeek. It worked. I bought a second. A few weeks later the local Walmart had the iSP6 HomeKit-compatible plugs from iHome on sale. Only $15. I bought three. My plan was to use these with lights and to have one for my A/C in the summer, to be swapped out to the heater in my well-house in the winter. I’m pretty stingy in my use of energy, so in the winter I make it a point to keep that heater off and only turn it on when I must, which requires a good bit of effort on my part. I don’t mind the walking out to the well house as I can always use the steps, but it’s the mental tracking of it and the occasional forgetting that is bothersome. Having a smart plug makes it convenient to power it on and off, but I’m still having to remember to keep tabs.
Enter automations. The Home app gets better with each new version. Using automations it is now possible to trigger a scene, a device or multiple devices at specific times, at sunset/sunrise, or a set time before or after either. Very handy for a morning light but not too helpful for my well-house heater. But wait: I can also set up an automation for a plug based on a HomeKit sensor such as the iHome 5-in-1 Smart Monitor. I put the monitor in the well-house and create an automation to turn on the heater if the temperature dips to 32. I’ve turned my not-so-smart heater into a smarter one which will keep my water from freezing with no effort from me. Even better, it will reduce my electricity use because of its accuracy.
I have a similar dumb heater in my tiny house as well as a window A/C. I might use the same monitor to more accurately control heating and cooling in here. Currently I do that with constant futzing with controls and looking at a simple analog thermometer. It would be an improvement to just have a set temperature to trigger devices.
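As an aside, what a sensor-driven automation effectively gives you here is a thermostat with a dead band (hysteresis): turn on at or below freezing, but don’t turn off until it has warmed a few degrees, so the plug isn’t cycling every time the reading wobbles around 32. A minimal sketch of that logic, in Python for brevity; the 36-degree off threshold is my own choice, not anything the Home app exposes:

```python
# Hypothetical sketch of a thermostat with a dead band (hysteresis).
# The thresholds are illustrative, not taken from any HomeKit setting.
def heater_should_run(temp_f, currently_on,
                      on_at_or_below=32.0, off_at_or_above=36.0):
    if temp_f <= on_at_or_below:
        return True           # cold enough: turn (or keep) the heater on
    if temp_f >= off_at_or_above:
        return False          # warmed past the dead band: shut it off
    return currently_on       # in between: leave the heater as it is
```

The Home app hides all of this behind a couple of taps, which is the whole point.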
I’d been avoiding purchasing HomeKit-compatible lights because most, such as those from Philips, also require the purchase of a hub. Also, the cost was a bit much. My reasoning was that if I just pick up smart plugs as they go on sale I can use those for lights or anything else. Cheaper and more versatile. That said, one benefit of the lights is that they can be dimmed, which is appealing. So, two weeks ago I picked up one of Sylvania’s smart bulbs. It works perfectly. I’ll likely get another, but in my tiny house I don’t need that many lights, so two dimmable bulbs will likely be enough. It’s very nice to be able to ask Siri to set the lights at 40% or 20% or whatever. I have an automation that kicks on the light to 15% at my wake-up time. Very nice to wake up to a very low, soft light. With a simple request I can then ask Siri to raise the brightness when I’m actually ready to get out of bed.
An hour after sunrise I’ve got another set of LEDs that kick on for all of my houseplants that sit on two shelves by the windows. An hour after sunset those lights go off and at the same time the dimmable light comes on at 50%.
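Sketched out, that lighting routine is just a handful of offsets from sunrise and sunset. A toy model in Python; the event labels and times are mine, and real sunrise/sunset times come from the Home app rather than being hard-coded:

```python
from datetime import timedelta

# Toy model of the routine above: each entry is (time, action),
# where times are offsets from the day's sunrise and sunset.
def day_schedule(sunrise, sunset):
    hour = timedelta(hours=1)
    events = [
        (sunrise + hour, "plant LEDs on"),
        (sunset + hour, "plant LEDs off"),
        (sunset + hour, "dimmable bulb to 50%"),
    ]
    return sorted(events, key=lambda e: e[0])
```

In the Home app each of these is a separate automation; modeling them together just makes the daily rhythm easier to see.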
All of the set-up happens via the built-in iOS Home app. It’s a fairly easy-to-use app that gets better with each new version of iOS. My set-up is pretty simple, but Home is designed to scale up to larger homes with more rooms and devices. In my case, I’ve got the Home app in split screen with Apple Music on my iPad Air 2. It’s on a shelf within easy reach of my usual sitting spot on my futon/bed. While I do most interaction via Siri or automation, it’s nice to have easy visual access. Especially handy for monitoring the well house heater and temperature. Having Music open and ready to play to a speaker via AirPlay is very nice.
AppleTV as Hub
Of course, to really make this work a hub is required. A recent iPad running iOS 10 or one of the newer AppleTVs will work. I’m using the AppleTV because I’ve always got one on. Set-up was easy and I’ve never had to futz with it. The nice thing about this arrangement is that I can access my HomeKit devices from anywhere. Whether I’m in town, visiting family or out for a walk, checking or changing devices is just a couple of taps or a request to Siri.
Last is the device that has not arrived yet. My HomePod is set to arrive February 9. I don’t need it for any of this to work but I suspect it will be a nice addition. Controlling things with Hey Siri has always worked pretty well for me though I suspect it will be even better with HomePod. Will find out soon.
As I listened to this episode I found myself nodding along for much of it. Roemmele is very passionate about voice first computing and certainly seems to know what he’s talking about. In regards to Siri, his primary argument seems to be that Apple made a mistake in holding Siri back after purchasing the company. At the time, Siri was an app and had many capabilities that it no longer has. Rather than taking Siri further and developing it into a full-fledged platform, Apple reined it in and took a more conservative approach. In the past couple of years it has been adding capabilities back in, via SiriKit, in what it calls domains.
Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn’t running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.
So, they scaled it back and are rebuilding it. I’m not a developer, but my understanding is that they’ve done this, in part, to allow for a more varied and natural use of language. But as with all things internet and human, people often don’t want to be bothered with the details. They want what they want and they want it yesterday. In contrast to Apple’s handling of Siri we have Amazon, which has its pedal to the floor.
Roemmele goes on to discuss the rapid emergence of Amazon’s Echo ecosystem and the growth of Alexa. Within the context of this podcast and what I’ve seen of his writing, much of his interest and background seems centered on commerce and payment as they relate to voice. That said, I’m just not that interested in what he calls “voice commerce”. I order from Amazon maybe six times a year. Now and in the foreseeable future I get most of what I need from local merchants. And even when I do order online, I do so visually. I would never order via voice because I have to look at details. Perhaps I would use voice to reorder certain items that need to be replaced, such as toilet paper or toothpaste, but that’s the extent of it.
What I’m interested in is how voice can be a part of the computing experience. There are those of us that use our computers for work. For the foreseeable future I see myself interacting with my iPad visually because I can’t update a website with my voice. I can’t design a brochure with my voice. I can’t update a spreadsheet with my voice. I can’t even write with my voice because my brain has been trained to write as I read on the screen what it is I’m writing.
But this isn’t the computing Roemmele is discussing. His focus is “voice first devices”, those that don’t even have screens, devices such as the Echo and the upcoming HomePod¹. And the tasks he’s suggesting will be done by voice first computing are different. And this is where it gets a bit murky.
Right now my use of Siri is via the iPhone, iPad, AppleWatch and AirPods. In the near future I’ll have Siri in the HomePod. How do I make the most of voice first computing? What are these tasks that Siri will be able to do for me, and why is Roemmele so excited about voice first computing? The obvious stuff would be the sorts of things assistants such as Siri have been touted as being great for: asking about the weather, adding things to reminders, setting alarms, getting the scores for our favorite sports ball teams and so on. I and many others have written about these sorts of things that Siri has been doing for several years now. But what about the less obvious capabilities?
At one point in the podcast the two discuss using voice for such things as sending texts. I often use dictation when I’m walking to dictate a text into my phone via Messages, and I see the benefit of that. But dictation, whether it is dictating to Siri or directly to the Messages app or any other app, at least for me, requires an almost different kind of thinking. It may be that I am alone in this. But it is easier for me to write with my fingers on the keyboard than it is to “write” with my mouth through dictation. It might also be that this is just a matter of retraining my brain. I can see myself dictating basic notes and ideas. But I don’t see myself “writing” via dictation.
At another point Roemmele suggests that apps and devices will eventually disappear as they are replaced by voice. At this point I really have to draw a line. I think this is someone passionate about voice first going off the rails. I think he’s let his excitement cloud his thinking. Holding devices, looking, touching, swiping, typing and reading, these are not going away. He seems to want it both ways though at various points he acknowledges that voice first doesn’t replace apps so much as it is a shift in which voice becomes more important. That I can agree with. I think we’re already there.
Two last points. First, about the tech pundits. Too often people let their own agenda and preferences color their predictions and analysis. The lines blur between their hopes and preferences and what is. No one knows the future, but too often pundits act as if they do. It’s kinda silly.
Second, what seems to be happening with voice computing is simply that a new interface has suddenly become useful, and it absolutely seems like magic. For those of us who are science fiction fans it’s a sweet taste of the future in the here and now. But, realistically, its usefulness is currently limited to the fairly trivial daily tasks mentioned above. Useful, convenient and delightful? Yes, absolutely. Two years ago I had to go through all the trouble of putting my finger on a switch, pushing a button or pulling a little chain; now I can simply issue a verbal command. No more trudging through the effort of tapping the weather app icon on my screen, not for me. Think of all the calories I’ll save. I kid, I kid.
But really, as nice an addition as voice is, the vast majority of my time computing will continue to be with a screen. I don’t doubt that voice interactions will become more useful as the underlying foundation improves and I look forward to the improvements. As I’ve written many times, I love Siri and use it every day. I’m just suggesting that in the real world, adoption of the voice interface will be slower and less far reaching than many would like.
- Actually, the HomePod technically has a screen, but it’s not a screen in the sense that an iPhone has a screen. ↩︎
A couple of years ago a good friend got one of the first Alexa devices available. I was super excited for them, but I held off because I already had Siri. I figured Apple would eventually introduce its own stationary speaker and I’d be fine til then. But as a big fan of Star Trek and sci-fi generally, I love the idea of always-present voice-based assistants that seem to live in the air around us.
I think he and his wife still use their Echo every day in the ways I’ve seen mentioned elsewhere: playing music, getting the news, setting timers or alarms, checking the weather, controlling lights, checking the time, and shopping from Amazon. From what I gather that is pretty typical usage for Echo and Google Home owners. That list also fits very well with how I and many people are using Siri, with the exception of getting a news briefing, which is not yet a Siri feature. As a Siri user I do all of those things except shop at Amazon.
The tech media has recently gone crazy over the pervasiveness of Alexa at the 2018 CES and the notable absence of Siri and Apple. Ah yes, Apple missed the boat. Siri is practically dead in the water or at least trying to catch-up. It’s a theme that’s been repeated for the past couple years. And really, it’s just silly.
One in six US adults (or around 39 million people) now own a voice-activated smart speaker, according to research from NPR and Edison Research. The Smart Audio Report claims that uptake of these devices over the last three years is “outpacing the adoption rates of smartphones and tablets.” Users spent time using speakers to find restaurants and businesses, playing games, setting timers and alarms, controlling smart home devices, sending messages, ordering food, and listening to music and books.
Apple iOS devices with Siri are all over the planet rather than just the three or four countries the Echo is available in. Look, I think it’s great that the Echo exists for people that want to use it. But the tech press needs to pull its collective head out of Alexa’s ass and find the larger context and a balance in how it discusses digital assistants.
Here’s another bit from the above article and research:
The survey of just under 2,000 individuals found that the time people spend using their smart speaker replaces time spent with other devices including the radio, smart phone, TV, tablet, computer, and publications like magazines. Over half of respondents also said they use smart speakers even more after the first month of owning one. Around 66 percent of users said they use their speaker to entertain friends and family, mostly to play music but also to ask general questions and check the weather.
I can certainly see how a smart speaker is replacing radio as 39% reported in the survey. But to put the rest in context, it seems highly doubtful that people are replacing the other listed sources with a smart speaker. Imagine a scenario where people have their Echo playing music or a news briefing. Are we to believe that they are sitting on a couch staring at a wall while doing so? Doing nothing else? No. The question in the survey: “Is the time you spend using your Smart Speaker replacing any time you used to spend with…?”
So, realistically, the smart speaker replaces other audio devices such as the radio, but that’s it. People aren’t using it to replace anything else in that list. An Echo, by its very nature, can’t replace things which are primarily visual. As fantastic as Alexa is for those that have access to it, for most users it still largely comes down to that handful of uses listed above. In fact, in another recent article on smart speakers, The New York Times throws a bit of cold water on the frenzied excitement: Alexa, We’re Still Trying to Figure Out What to Do With You
The challenge isn’t finding these digitized helpers, it is finding people who use them to do much more than they could with the old clock/radio in the bedroom.
A management consulting firm recently looked at heavy users of virtual assistants, defined as people who use one more than three times a day. The firm, called Activate, found that the majority of these users turned to virtual assistants to play music, get the weather, set a timer or ask questions.
Activate also found that the majority of Alexa users had never used more than the basic apps that come with the device, although Amazon said its data suggested that four out of five registered Alexa customers have used at least one of the more than 30,000 “skills” — third-party apps that tap into Alexa’s voice controls to accomplish tasks — it makes available.
Now, back to all the CES-related news of Alexa being embedded in or compatible with new devices. I’ve not followed it too closely, but I’m curious about how this will actually play out. First, of course, there’s the question of which of these products actually make it to market. CES announcements are notorious for touting products that never ship, or don’t ship for years. But regardless, assuming many of them do, I’m just not sure how it all plays out.
I’m imagining a house full of devices, many of which have microphones and Alexa embedded in them. How will that actually work? Is the idea to have Alexa as an agent that listens and responds as she currently does in a speaker, but also in all of these devices, be they toilets, mirrors, refrigerators? If so, that seems like overkill and unnecessary cost. Why not just the smart speaker hub that then intelligently connects to devices? Why pay extra for a fridge with a microphone if I have another listening device 10 feet away? This begins to seem a bit comical.
Don’t get me wrong, I do see the value of increasing the capabilities of our devices. I live in rural Missouri and have a well house heater 150 feet away from my tiny house. I now have it attached to a smart plug and it’s a great convenience to be able to ask Siri to turn it off and on when the weather is constantly popping above freezing only to drop below freezing 8 hours later. It’s also very nice to be able to control lights and other appliances with my voice, all through a common voice interface.
But back to CES, the tech press and the popular narrative that Alexa has it all and that Siri is missing out, I just don’t see it. A smart assistant, regardless of the device it lives in, exists to allow us to issue a command or request, and have something done for us. I don’t yet have Apple’s HomePod because it’s not available. But as it is now, I have a watch, an iPhone and two iPads which can be activated via “Hey Siri”. I do this in my home many times a day. I also do it when I’m out walking my dogs. Or when I’m driving or visiting friends or family. I can do it from a store or anywhere I have internet. If we’re going to argue about who is missing out, the Echo and Alexa are stuck at home while Siri continues to work anywhere I go.
So, to summarize: yes, stationary speakers are great in that their far-field microphones work very well to perform a currently limited series of tasks, tasks which are also possible with the near-field mics found in iPhones, iPads, AirPods and the AppleWatch. The benefit of the stationary devices is accurate responses when spoken to from anywhere in a room. A whole family can address an Echo, whereas only individuals can address Siri on their personal devices and have to be near their phone to do so; in the case of wearables such as AirPods or the AppleWatch, they have to be on one’s person. By contrast, these stationary devices are useless when we are away from home, while our mobile devices still work.
My thought is simply this: contrary to the chorus of the bandwagon, all of these devices are useful in various ways and in various contexts. We don’t have to pick a winner. We don’t have to have a loser. Use the ecosystem(s) that works best for you. If it’s Apple and Amazon, enjoy them both and use the devices in the scenarios where they work best. If it’s Amazon and Google, do the same. Maybe it’s all three. Again, these are all tools, many of which complement each other. Enough with the narrow, limiting thinking that we have to rush to the pronouncement of a winner.
Personally, I’m already deeply invested in the Apple ecosystem and I’m not a frequent Amazon customer so I’ve never had a Prime membership. I’m on a limited budget so I’ve been content to stick with Siri on my various mobile devices and wait for the HomePod. But if I were a Prime member I would have purchased an Echo because it would have made sense for me. When the HomePod ships I’ll be first in line. I see the value of a great sounding speaker with more accurate microphones that will give me an even better Siri experience. I won’t be able to order Amazon products with the HomePod but I will have a speaker with fantastic audio playback and Siri which is a trade off I’m willing to make.
It was a year ago that Apple began selling the AirPods, and they sold out instantly. In fact, it was difficult to get them for months as Apple struggled to keep up with demand. Production finally caught up in mid-summer, only to fall behind in recent weeks as holiday demand surged. I ordered mine within minutes of them going on sale, so I was lucky enough to get in on the first shipment. I’ve worn them many times a day every day since they arrived.
It’s been said by many over the past year that the AirPods were their favorite Apple product in recent memory. There’s no doubt, they are a delight to use. For anyone that enjoys music or podcasts on the go, especially those with an iPhone or Apple Watch, these are well worth the cost.
A few highlights:
- They stay in my ears very well and many report the same thing. Even if the fit is not perfect, because there is no wire tugging, they tend to stay put.
- The batteries last 3-4 hours and recharge very quickly in the case, which itself holds enough charge for 3-4 days of use.
- Siri works fantastically.
- Phone calls are great. The mic does a great job of cancelling out background noise providing clean audio for the person I’m talking to.
- With the occasional oddball exception, they pair up quickly with whatever device I’m trying to use. Usually iPhone or AppleWatch, sometimes an iPad.
- I’m often streaming music from my iPhone to the AppleTV. When I head out for a walk I pop the AirPods in and the music switches to them with no action from me. That’s the kind of magic that makes me smile.
- They rarely drop the connection and have a pretty fantastic range. I often step outside my tiny house, forgetting the phone inside (sometimes leaving it deliberately) and can take care of little tasks such as refilling bird feeders, watering plants on the deck, etc. A 30 foot range is pretty typical. At about 40 feet they start to drop a bit.
- I use them a lot with Siri to control audio especially in the winter when my phone is in a pocket, I’m wearing gloves and the watch is under layers of clothing. I often tap through hats and hoods to activate Siri and it works great to change artist, repeat a song, skip forward, etc. Same to answer or initiate a call.
- I have not lost them. They are in my ears or in the case. The case is on a shelf (they have their spot) or in my pocket. Basically I treat them the same way I treat other little things such as my keys.
The Siri team has a great post about the evolution of Siri’s speech synthesis on the Apple Machine Learning Journal:
Siri is a personal assistant that communicates using speech synthesis. Starting in iOS 10 and continuing with new features in iOS 11, we base Siri voices on deep learning. The resulting voices are more natural, smoother, and allow Siri’s personality to shine through. This article presents more details about the deep learning based technology behind Siri’s voice.
Just scroll down to the bottom and listen to the progression between iOS 9, 10, and 11. It’s really impressive.
I’m surprised more beta users have not said more about this over the duration of the public betas. Until this post by Apple I’d not seen it mentioned even once. Personally, I consider it a fantastic improvement and thought it was one of the highlights of the WWDC Keynote. When I installed the public beta on my iPad, the first thing I did was invoke Siri so I could hear her new voice. So much better!
So techie and web publisher Joshua Topolsky recently went on a very emotional, not too rational, Twitter tirade regarding the iPad Pro. Just a tiny example:
Couple of tweets about the new iPad and iOS 11. It is inferior to a laptop in almost every way, unless you like to draw.

If you think you can replace your laptop with this setup: you cannot. Imagine a computer, but everything works worse than you expect. […]

But this doesn’t COME CLOSE to replacing your laptop, even for simple things you do, like email. AND one other thing. Apple’s keyboard cover is a fucking atrocity. A terrible piece of hardware. Awkward to use, poor as a cover. Okay in a pinch if you need something LIKE a keyboard.
This whole “can an iPad replace your laptop” discussion is really silly. We live in a world of many devices that come in many forms. They are complementary. Back in 1993 I bought my first computer, a Mac Color Classic. That was my only computer until 1997. It was a desktop. I used it for school and for email. In 1998 I wanted a computer that would run Netscape. That’s right, my $2,500 desktop would not even run a web browser. So I purchased a Mac Performa 6400! That’s the machine I used to build my first website. And then another and another. It’s also the machine I used to begin dabbling in “desktop publishing”. Then a Lime iMac a couple of years later. Then a 1st-gen blue iBook. And so on. But at any given time I owned and used one computer. Then the iPod came in 2001, and now I had another computer, though I didn’t think of it as a computer. At some point around 2005 I found myself with both a laptop (PowerBook 12″) and a desktop (iMac G5), and I wasn’t very clear at the time which one I wanted to use on any given day. I could share files between them but it was an awkward sort of back and forth. I also used a video camera and a still camera and a cheap mobile phone. Lots of wires for charging and transferring data.
Skip forward to 2010 and I was using a Mac Mini for a media player, a 2009 MacBook Pro for my work, and a 1st gen iPad for email and web browsing. No iPhone yet, just a cheap mobile. Also, separate still and video cameras. Transfer between devices still awkward. Each device with a pretty well defined purpose.
It’s now 2017 and my workflow has completely changed. I am surrounded by devices that communicate with one another flawlessly. Sometimes locally, other times via iCloud or Dropbox. The iPhone replaced the iPod, the mobile phone, and the still and video cameras. A newer Mac Mini serves primarily as a media server but also now does duty as an occasional work machine for InDesign projects. I watch movies and listen to music via an AppleTV. I also watch movies and listen to music via the iPad and iPhone. I have wireless AirPods that switch between all of my devices with just a single tap or click. I have Smart plugs that I control via Siri and the network to turn devices on or off. By this time next year I expect to have a HomePod, which will be yet another computer in this ecosystem.
Another aspect of this is the fundamental truth that most of what we do on a daily basis relies on the internet, on countless computers around the globe. The music I’m streaming through my iPhone to my AirPods comes from an Apple server I don’t really think about. Same for my email. Same for the web page I’m browsing. The screen in front of me might be the most intimate, the most directly interacted with, but it is just one of countless computers I rely on in the interconnected reality of 2017.
In 1993 I used my “desktop” Mac to do a very tiny number of jobs. But in form factor it was indeed a desktop computer. With each new iteration my computer changed in form factor, flexibility, power, and, as a result, the number of jobs I could do with it expanded. My first Mac did not include a modem, the second had both a modem and Ethernet. The third was the first to include wireless network access. But none of them could be an everyday still or video camera, that wouldn’t come till later.
By comparison, my iPad today seems limitless in power. It is a lightweight, impossibly thin computer that can be used in too many ways for me to count. I can input data with my finger, a keyboard, a stylus, or my voice. I can hold it with a keyboard or without. I can lie flat on my back and use it in bed. I can use it while walking. I can speak to it to request a weather forecast or to control devices in my home. In the near future I’ll be able to point it at a window or object in my environment to use the camera to get a precise measurement of the dimensions of the object. The same might be said of the iPhone.
We’ve reached a point where it’s probably best to just acknowledge that incredibly powerful computers now come in a variety of forms, that they perform a nearly limitless list of jobs for us, and that which tool we use at any given moment is likely to become a less interesting topic. Just use what works best for you in any given situation. There’s really no reason to draw lines in the sand, no reason to argue. Such arguments will become less interesting as time goes on.
A few others have been making similar points. My favorite was by Matt Gemmell. If you’re interested in this sort of thing, his whole post is worth a read.
There’s no such thing as a laptop replacement, and if there were, the iPad isn’t meant to be one.
The term usually crops up in the context of the iPad not being whatever it is the author is looking for… and no wonder. The phrase itself is strange, like you’re consciously considering replacing your laptop (implicitly with something else, otherwise you’d just upgrade to a newer laptop, surely), are assessing the iPad as a candidate, and you find that it is indeed an entirely different thing… but that’s somehow a deal breaker. So you want to potentially not use a laptop anymore, but you also want a computer that does all the same things as a laptop, in pretty much the same way. In which case, I think the computer you’re looking for is a laptop.
But people like me and Topolsky — and millions of others — are the reason why Apple continues to work on MacOS and make new MacBook hardware. I can say without hesitation that the iPad Pro is not the work device for me. I can also say without hesitation that the iPad Pro with a Smart Keyboard is the work device for millions of other people.
A MacBook is better in some ways; an iPad is better in others. For some of us, our personal preferences fall strongly in one direction or the other. “Imagine a computer, but everything works worse than you expect” is no more fair as criticism of the iPad than a statement like “Imagine an iPad but everything is more complicated and there’s always a jumble of dozens of overlapping windows cluttering the screen” would be as criticism of the Mac.
Rene Ritchie, writing for iMore, Giving iPad fire to mere mortals: On myopia and elitism in computing:
For a long time computing only addressed the needs of a very few. Now, thanks to iPad and products that have followed its lead, computing is open to almost everyone with almost any need. It’s nothing short of a revolution.
People who were, for their whole lives, made to feel stupid and excluded by older computing technology and some of its advocates now have something that’s approachable, accessible, and empowering. From toddlers to nonagenarians to every age in between, and for every profession imaginable.
What Apple and iPad have done to bring computing to the mainstream is not only laudable, it’s critical. And it’s nothing short of amazing.
And, while not a response, a great post by Fraser Speirs from nearly two years ago is worth a read, as it turns the whole argument about the iPad being a laptop replacement on its head:
There has been a lot of talk in recent weeks about the MacBook Pro and, in particular, whether it can replace an iPad Pro for getting real work done.
Firstly, consider the hardware. The huge issue with the MacBook Pro is its form factor. The fact that the keyboard and screen are limited to being held in an L-shaped configuration seriously limits its flexibility. It is basically impossible to use a MacBook Pro while standing up and downright dangerous to use when walking around. Your computing is limited to times when you are able to find somewhere to sit down.
Wow. So much going on in the run-up to WWDC. As most have said, it looks to be a big one with likely hardware announcements. Apple seems to be releasing bits of news this week that would normally have been in the keynote prompting many to suggest that they are making way for a jam-packed presentation.
I’m not an educator but if I were I’d be very excited about what Apple is doing with Swift Playgrounds. The next update, due Monday, expands coding education to robots, drones and musical instruments:
Apple is working with leading device makers to make it easy to connect to Bluetooth-enabled robots within the Swift Playgrounds app, allowing kids to program and control popular devices, including LEGO MINDSTORMS Education EV3, the Sphero SPRK+, Parrot drones and more.
That’s going to be a lot of fun. On the topic of Swift, Fraser Speirs has an excellent post about teaching Swift over the past year.
I’m looking forward to new iPads being announced and hopefully the long-rumored and hoped-for “Siri Speaker”. And of course all of us iPad nerds are hoping for big iPad features with iOS 11. We never know until Apple announces it but I have a feeling (as do many others) that we’re going to see some great stuff Monday!
I've been wanting to try out a HomeKit device for quite a while now. A friend who uses Alexa first set up a couple of lights well over a year ago and ever since his first demonstration I've been eager to try it out in my tiny house. But I'm stubborn and so I was waiting for a light or plug to drop down to a price I was willing to pay. A few months ago I'd taken note of the Koogeek plugs at Amazon. At about $35 per plug they were about the least expensive HomeKit plugs, but still I decided to hold out for a sale. Last week I noticed an Amazon deal via 9to5Mac that, with a code, dropped the price down to just under $24 per plug, so I bought two of them.
Setting up the lights
They arrived today and I had them set up in just a few minutes thanks to a very simple process. I installed the Koogeek app and was prompted to set up an account, which I did. Next I was prompted to use the iPad's camera to scan a unique number code that comes with each plug. Upon detection the plug went through an auto setup and then I was prompted to name it. Done. Each plug took less than a minute. I opened Control Center and sure enough I now had a third panel to the far right where each plug now resided as a button I could select. I touched one and the light popped on. I'm pretty sure I giggled. I touched the other and it lit up. I felt like a wizard.

But when I tried to use Siri on my phone it didn't work as it found no devices. Doh. My fault. I was not on my wifi network. I rarely put the iPhone on the wifi as I have limited satellite bandwidth. How to use Siri via my LTE connection? A second later I remembered that I also needed to set up my AppleTV to serve as a HomeKit hub. This would allow me to access the plugs via the internet from home or anywhere else. The next question: how to set up the AppleTV? This was a little less obvious.
Setting up the AppleTV as a hub
I opened the Home app on the iPad and saw no indicator of how to do this. I hopped over to the AppleTV and poked around settings. Didn't see any mention of using the AppleTV as a Home Hub. Did I need an app? Hmm. I asked Siri, knowing she'd likely send me to a web search, which she did. Two clicks later and I had my answer. I needed to sign into my primary iCloud account on the AppleTV in the accounts section of the Settings app. Duh. Of course it would all go through iCloud. I did that and that was it. Finished. I called to Siri from across the room and requested that one of the lights be turned off. Poof. Neat. I can now control the plugs from anywhere I have internet, assuming my cabin internet is connected, which it usually is. Sometimes I really do feel like I'm living in an episode of Star Trek.