Category Archives: Siri

A Siri Anecdote

A couple of days ago I posted an update to what appears to be a long-running, though not necessarily intentional, thread on Siri.

Yesterday, while driving to the store, I got a text reminder from my dentist about an appointment next week. When I parked I read the text and called them to reschedule. I ended the call and asked Siri via AirPods to “cancel next week’s dentist appointment”. She confirmed the date and appointment to cancel and then deleted it. I probably could have asked to reschedule rather than delete. Afterwards I asked Siri to create a new appointment for the dentist in January. I gave her the date and time and of course the appointment was created.

It really does feel like living in the future.

Siri, I Trust You. Mostly.

Just as I keep track of the status of Pages on the iPad (as compared to the Mac) I also like to check in on the experience of using Siri. I recently browsed through a short thread on the Mac Power Users Forum and was reminded that I’d not written about the Siri experience in a while. That thread was quite negative about Siri and in fact, most of what I seem to come across on the internet regarding Siri is usually negative. Siri is like Lucy holding that football for Charlie Brown. In the early years many people learned that trusting Siri was just a set-up for failure and frustration.

Well, it’s been a few years now. Can we trust Siri?

I’ve been using Siri fairly consistently over the past three years. I continue to use Siri many times a day from a variety of devices and generally find the experience to be helpful, usually successful, and increasingly pleasant as the voice of Siri has been improved to be less robotic. But it’s been a process getting here.

How I use Siri

I make Siri requests from the full ecosystem of devices, ranging from an iPhone X to 3 iPads to the Apple Watch and HomePods. Also, occasionally, to the Apple TV via remote.

At home most of the requests are handled by the HomePod as it generally takes precedence over other devices in the room. When I’m out it’s the iPhone via AirPods. On occasion I’ll also use a button push on an iPad I’m using to ensure that my interaction is with that iPad though that’s not all that common. I think I’d likely use Siri directly on the iPad if there were a dedicated Siri key or keyboard shortcut.

The Apple Watch and Apple TV are probably the least-used Siri devices I have. One feature that may not be immediately obvious to some is that when using HomePods as the audio for the Apple TV, it’s possible to control playback via voice, no remote needed. Just issue commands such as pause, play, or rewind 20 seconds and the HomePod will control the video. Very nice!

As for the Watch, I’ve tried a few times and it does not work nearly as well as the HomePod or AirPods. More often than not I just get a long delay followed by “I’ll tap you when I’m ready”. Mostly, I’ve stopped trying but it’s no real loss because I’m always within earshot of the HomePod and if I’m not I’ve probably got the AirPods in my ears.

My common uses span the full range of what is possible with Siri. Early on I got in the habit of occasionally reading through the possible actions, and I check every so often to see what’s been added. Being aware of what’s available has helped me take better advantage of it: from timers to adding calendar events to tasks to audio and video playback to smart home devices such as heaters and lights. Before I list out more I’ll contrast this with a recent poll I conducted via a persistent group iMessage with my extended family. Here’s what I asked them:

  1. Do you currently use Siri regularly? If yes, how many times per day?
  2. If you do not, have you ever tried it in the past? If yes, why did you stop using it?
  3. If you do use Siri regularly what device(s) do you use to do so?
  4. What are your most common uses/requests?
  5. If you are a regular user, are you generally happy with the experience?
  6. If you are not a user do you think you might at some point try it again? Why or why not?

The results varied. An elderly uncle reported that he uses Siri two times a day from his phone and is happy with it. My aunt reports using it 4 times a day on her phone and she likes it. My dad uses it 10 to 15 times a day on his phone. He uses it to open apps, play music, make phone calls, ask sports questions and set reminders. He thinks it’s great.

My mid-20s nephew doesn’t use Siri much, only once a day or so. He stopped because she often “can’t immediately answer some of my questions and sends me to Safari.” When he does use it, it’s to activate maps and directions or to call people on the phone. My brother uses it in his car to play music. He also reports being turned off by the fact that he’s often sent to Safari after a query.

The last response I got was from my niece, also in her 20s, who reports using Siri 10 or so times a day via her phone. She uses it to play music and control playback. She uses it to make calls, ask about sports information, send texts, set timers and check the weather and the time. She uses it while driving, hands free. She concludes by saying that for the most part Siri works well for her and notes improvement in that it picks up her voice better, possibly due to a newer phone.

So, a mix of negative and positive. The negative seems to center on being kicked out to Safari results after a Siri request. What isn’t clear from the responses is what questions are asked that lead to that result. I took note that the two most positive responses, my dad and niece, both specifically indicated a broader range of Siri requests, and I think that touches on something important with regard to voice-based computer usage, in this case Siri. Both of these users have made it a point to use voice requests over a broader range of activity. Put another way, it seems that they are being more deliberate and, as a result, are getting better results. My guess is that an interest in using Siri results in more persistence and more practice and, not surprisingly, better results over time.

Of course, it’s just a tiny pool of responses from one family, but it seems an accurate reflection of much that I’ve read on the internet.

In my own experience I’ve found that over the past 3 to 5 years my usage has certainly increased, both as Siri improved and as I learned more about getting better results with the service. This seems obvious if we view Siri as a tool, as a form of interaction that users can improve at over time, but I think because of the personal nature of the technology and the sense of possible embarrassment or frustration with failure, we don’t quite view it the same way we view the development of other skills.

By design, Siri and other voice assistants are presented as just that, assistants. They take on a kind of personal role, a sense of relationship. Apple and others have made it a point to make voice assistants sound increasingly human and natural in their interactions and I think one result is possible frustration and embarrassment when we encounter failure. It reminds me of Charlie Brown trusting Lucy to hold that football. Of course, she pulls it away at the last second and he flies through the air. When we trust Siri and she fails us there’s an element of frustration that we went out on a limb to trust that she could help. I think there’s also an almost “out of body” observation we make of ourselves. Oh, how silly, there’s me talking to my phone again and there she is making me look even sillier with her failed response. I may be getting too far out in the weeds here but there may be something to it.

I’ll wrap up with a list of my most useful Siri interactions. And to reiterate, I think this list is getting longer all the time and that the success rate is, in my use, almost always improving.

  • Reminders: I constantly add items to various lists. Both via HomePod and AirPods. This is 100%.
  • Calendar events: This is also 100%. Almost everything I add to my calendar is via Siri.
  • Timers: All the time and it works perfectly.
  • Weather: All the time and again, it works perfectly.
  • Phone calls: I don’t use my phone as a phone much, but when I do make a call it’s via AirPods to phone and it’s 100%.
  • Sending and replying to texts. This one has gotten much better and I use it all the time when walking, again via AirPods to phone.
  • Audio playback via AirPods when walking is excellent. Pause, play, skip, fast forward, initiating playback of an artist, playlist or album. The hardest part here is my ability to remember the names of things. Great with podcasts too.
  • Control of HomeKit devices. All day, every day. About 95% success here. One of my favorite things relates to the fact that I live in a rural area and have an outside well-house that has to stay heated in the winter. In the past I’d go out and visually check on things to confirm proper heating in the cold weeks of late December through February. Now I can simply ask Siri: “What’s the temperature in the well-house?” It’s the perfect complement to the Home app.

My Siri Wish List

Right now, at the time of this writing, I’ve got just two big things that I’d love to see and they both are iPad related:

  1. The new iPads Pro with Face ID have no Home button, which on older iPads is a pretty convenient way to access Siri. Even better, when using an external keyboard with older iPads, a long press of the keyboard’s home button activates Siri. Very conducive to using Siri on those devices. For some reason this does not work on the new iPads Pro and so I find I don’t use Siri as much on device as I used to.
  2. When activating Siri on iPads, is there a reason that Siri should take control of the whole screen? Might it be better to do something similar to the Mac and have a smaller Siri window pop up? Maybe the size of a slide-over window? Or, at most, a half-screen split-view.

I’m sure there’s more to be done to improve Siri but those are the two I’m hoping to see.

Siri and voice first

In a recent episode of his Vector podcast, Rene Ritchie had “voice first” advocate Brian Roemmele on as a guest. Rene is probably my current favorite Apple blogger and podcaster and Vector is excellent.

As I listened to this episode I found myself nodding along for much of it. Roemmele is very passionate about voice first computing and certainly seems to know what he’s talking about. In regards to Siri, his primary argument seems to be that Apple made a mistake in holding Siri back after purchasing the company. At the time, Siri was an app and had many capabilities that it no longer has. Rather than taking Siri further and developing it into a full-fledged platform, Apple reined it in and took a more conservative approach. In the past couple of years it has been adding capabilities back in, via SiriKit, in what it calls domains.

Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn’t running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.
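For the developers among you, here’s a rough sketch of what that domain-and-intent model looks like in code. This is a hypothetical illustration (the class name and comments are my own; only the Intents framework types are Apple’s), showing a handler for the Messages domain’s send-message intent:

```swift
import Intents

// Hypothetical example: a messaging app's Intents extension handler
// for the Messages domain. Siri performs the voice and natural
// language work, then hands the parsed intent to a handler like this,
// even when the app itself isn't running.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Optional step: confirm the request can be carried out before
    // Siri commits to it on the user's behalf.
    func confirm(intent: INSendMessageIntent,
                 completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .ready, userActivity: nil))
    }

    // Required step: perform the request. A real app would pass
    // intent.content and intent.recipients to its messaging backend.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

The extension also declares which intents it supports in its Info.plist, which is the “registering with specific domains and intents” the quote above describes.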

So, they scaled it back and are rebuilding it. I’m not a developer but my understanding of why they’ve done this is, in part, to allow for a more varied and natural use of language. But as with all things internet and human, people often don’t want to be bothered with the details. They want what they want and they want it yesterday. In contrast to Apple’s handling of Siri we have Amazon, which has its pedal to the floor.

Roemmele goes on to discuss the rapid emergence of Amazon’s Echo ecosystem and the growth of Alexa. Within the context of this podcast and what I’ve seen of his writing, much of his interest and background seems centered on commerce and payment as they relate to voice. That said, I’m just not that interested in what he calls “voice commerce”. I order from Amazon maybe 6 times a year. Now and in the foreseeable future I get most of what I need from local merchants. Even when I do order online I do so visually. I would never order via voice because I have to look at details. Perhaps I would use voice to reorder certain items that need to be replaced, such as toilet paper or toothpaste, but that’s the extent of it.

What I’m interested in is how voice can be a part of the computing experience. There are those of us that use our computers for work. For the foreseeable future I see myself interacting with my iPad visually because I can’t update a website with my voice. I can’t design a brochure with my voice. I can’t update a spreadsheet with my voice. I can’t even write with my voice because my brain has been trained to write as I read on the screen what it is I’m writing.

But this isn’t the computing Roemmele is discussing. His focus is “voice first devices”, those that don’t even have screens, devices such as the Echo and the upcoming HomePod1. And the tasks he’s suggesting will be done by voice first computing are different. And this is where it gets a bit murky.

Right now my use of Siri is via the iPhone, iPad, Apple Watch and AirPods. In the near future I’ll have Siri in the HomePod. How do I make the most of voice first computing? What are these tasks that Siri will be able to do for me, and why is Roemmele so excited about voice first computing? The obvious stuff would be the sorts of things assistants such as Siri have been touted as being great for: asking about the weather, adding things to reminders, setting alarms, getting the scores for our favorite sports ball teams and so on. I and many others have written about these sorts of things that Siri has been doing for several years now. But what about the less obvious capabilities?

At one point in the podcast the two discuss using voice for such things as sending texts. I often use dictation to send a text from my phone while I’m walking, and I see the benefit of that. But dictation, whether it is dictating to Siri or directly to the Messages app or any other app, at least for me, requires an almost different kind of thinking. It may be that I am alone in this. But it is easier for me to write with my fingers on the keyboard than it is to “write” with my mouth through dictation. It might also be that this is just a matter of retraining my brain. I can see myself dictating basic notes and ideas. But I don’t see myself “writing” via dictation.

At another point Roemmele suggests that apps and devices will eventually disappear as they are replaced by voice. At this point I really have to draw a line. I think this is someone passionate about voice first going off the rails. I think he’s let his excitement cloud his thinking. Holding devices, looking, touching, swiping, typing and reading, these are not going away. He seems to want it both ways though at various points he acknowledges that voice first doesn’t replace apps so much as it is a shift in which voice becomes more important. That I can agree with. I think we’re already there.

Two last points. First, about the tech pundits: too often people let their own agendas and preferences color their predictions and analysis. The lines blur between their hopes and preferences and what is. No one knows the future, but too often pundits act as if they do. It’s kinda silly.

Second, what seems to be happening with voice computing is simply that a new interface has suddenly become useful and it absolutely seems like magic. For those of us who are science fiction fans it’s a sweet taste of the future in the here and now. But, realistically, its usefulness is currently limited to the fairly trivial daily tasks mentioned above. Useful, convenient and delightful? Yes, absolutely. Two years ago I had to go through all the trouble of flipping a switch, pushing a button or pulling a little chain; now I can simply issue a verbal command. No more trudging through the effort of tapping the weather app icon on my screen, not for me. Think of all the calories I’ll save. I kid, I kid.

But really, as nice an addition as voice is, the vast majority of my time computing will continue to be with a screen. I don’t doubt that voice interactions will become more useful as the underlying foundation improves and I look forward to the improvements. As I’ve written many times, I love Siri and use it every day. I’m just suggesting that in the real world, adoption of the voice interface will be slower and less far reaching than many would like.


  1. Actually, the HomePod technically has a screen, but it’s not a screen in the sense that an iPhone has a screen. ↩︎

Hey Siri, give me the news

Ah, it’s just a little thing but it’s a little thing I’ve really wanted since learning of a similar feature on Alexa. In fact, I just mentioned it in yesterday’s post. We knew this was coming with HomePod and now it’s here for the iPad and iPhone too. Just ask Siri to give you the news and she’ll respond by playing a very brief NPR news podcast. It’s perfect, exactly what I was hoping for. I’ve already made it a habit in the morning, then around lunch and again in the evening.

Alexa Hype

A couple of years ago a good friend got one of the first Alexa devices available. I was super excited for them but I held off because I already had Siri. I figured Apple would eventually introduce its own stationary speaker and I’d be fine until then. But as a big fan of Star Trek and sci-fi generally, I love the idea of always-present voice-based assistants that seem to live in the air around us.

I think he and his wife still use their Echo every day in the ways I’ve seen mentioned elsewhere: playing music, getting the news, setting timers or alarms, checking the weather, controlling lights, checking the time, and shopping from Amazon. From what I gather that is a pretty typical usage for Echo and Google Home owners. That list also fits very well with how I and many people are using Siri, with the exception of getting a news briefing, which is not yet a Siri feature. As a Siri user I do all of those things except shop at Amazon.

The tech media has recently gone crazy over the pervasiveness of Alexa at the 2018 CES and the notable absence of Siri and Apple. Ah yes, Apple missed the boat. Siri is practically dead in the water or at least trying to catch up. It’s a theme that’s been repeated for the past couple years. And really, it’s just silly.

Take this recent story from The Verge reporting on research from NPR and Edison Research:

One in six US adults (or around 39 million people) now own a voice-activated smart speaker, according to research from NPR and Edison Research. The Smart Audio Report claims that uptake of these devices over the last three years is “outpacing the adoption rates of smartphones and tablets.” Users spent time using speakers to find restaurants and businesses, playing games, setting timers and alarms, controlling smart home devices, sending messages, ordering food, and listening to music and books.

Apple iOS devices with Siri are all over the planet rather than just the three or four countries the Echo is available in. Look, I think it’s great that the Echo exists for people that want to use it. But the tech press needs to pull its collective head out of Alexa’s ass and find the larger context and a balance in how it discusses digital assistants.

Here’s another bit from the above article and research:

The survey of just under 2,000 individuals found that the time people spend using their smart speaker replaces time spent with other devices including the radio, smart phone, TV, tablet, computer, and publications like magazines. Over half of respondents also said they use smart speakers even more after the first month of owning one. Around 66 percent of users said they use their speaker to entertain friends and family, mostly to play music but also to ask general questions and check the weather.

I can certainly see how a smart speaker is replacing radio as 39% reported in the survey. But to put the rest in context, it seems highly doubtful that people are replacing the other listed sources with a smart speaker. Imagine a scenario where people have their Echo playing music or a news briefing. Are we to believe that they are sitting on a couch staring at a wall while doing so? Doing nothing else? No. The question in the survey: “Is the time you spend using your Smart Speaker replacing any time you used to spend with…?”

So, realistically, the smart speaker replaces other audio devices such as radio, but that’s it. People aren’t using it to replace anything else in that list. An Echo, by its very nature, can’t replace things which are primarily visual. As fantastic as Alexa is for those that have access to it, for most users it still largely comes down to that handful of uses listed above. In fact, in another recent article on smart speakers, The New York Times throws a bit of cold water on the frenzied excitement: Alexa, We’re Still Trying to Figure Out What to Do With You

The challenge isn’t finding these digitized helpers, it is finding people who use them to do much more than they could with the old clock/radio in the bedroom.

A management consulting firm recently looked at heavy users of virtual assistants, defined as people who use one more than three times a day. The firm, called Activate, found that the majority of these users turned to virtual assistants to play music, get the weather, set a timer or ask questions.

Activate also found that the majority of Alexa users had never used more than the basic apps that come with the device, although Amazon said its data suggested that four out of five registered Alexa customers have used at least one of the more than 30,000 “skills” — third-party apps that tap into Alexa’s voice controls to accomplish tasks — it makes available.

Now, back to all the CES-related news of Alexa being embedded in, and/or compatible with, new devices. I’ve not followed it too closely but I’m curious about how this will actually play out. First, of course, there’s the question of which of these products will eventually make it to market. CES announcements are notorious for being just announcements, for products that never ship or don’t ship until years into the future. But regardless, assuming many of them do, I’m just not sure how it all plays out.

I’m imagining a house full of devices, many of which have microphones and Alexa embedded in them. How will that actually work? Is the idea to have Alexa listening and responding as she currently does in a speaker, but also in all of these devices, be they toilets, mirrors, refrigerators… If so, that seems like overkill and unnecessary cost. Why not just the smart speaker hub that then intelligently connects to devices? Why pay extra for a fridge with a microphone if I have another listening device 10 feet away? This begins to seem a bit comical.

Don’t get me wrong, I do see the value of increasing the capabilities of our devices. I live in rural Missouri and have a well house heater 150 feet away from my tiny house. I now have it attached to a smart plug and it’s a great convenience to be able to ask Siri to turn it off and on when the weather is constantly popping above freezing only to drop below freezing 8 hours later. It’s also very nice to be able to control lights and other appliances with my voice, all through a common voice interface.

But back to CES, the tech press and the popular narrative that Alexa has it all and that Siri is missing out, I just don’t see it. A smart assistant, regardless of the device it lives in, exists to allow us to issue a command or request, and have something done for us. I don’t yet have Apple’s HomePod because it’s not available. But as it is now, I have a watch, an iPhone and two iPads which can be activated via “Hey Siri”. I do this in my home many times a day. I also do it when I’m out walking my dogs. Or when I’m driving or visiting friends or family. I can do it from a store or anywhere I have internet. If we’re going to argue about who is missing out, the Echo and Alexa are stuck at home while Siri continues to work anywhere I go.

So, to summarize: yes, stationary speakers are great in that their far-field microphones work very well to perform a currently limited set of tasks which are also possible with the near-field mics found in iPhones, iPads, AirPods and the Apple Watch. The benefit of the stationary devices is accurate responses when spoken to from anywhere in a room. A whole family can address an Echo, whereas only individuals can address Siri on their personal devices and have to be near their phone to do so. Or, in the case of wearables such as AirPods or the Apple Watch, the device has to be on one’s person. By contrast, these stationary devices are useless when we are away from home, where our mobile devices still work.

My thought is simply this: contrary to the chorus of the bandwagon, all of these devices are useful in various ways and in various contexts. We don’t have to pick a winner. We don’t have to have a loser. Use the ecosystem(s) that works best for you. If it’s Apple and Amazon, enjoy them both and use the devices in the scenarios where they work best. If it’s Amazon and Google, do the same. Maybe it’s all three. Again, these are all tools, many of which complement each other. Enough with the narrow, limiting thinking that we have to rush to the pronouncement of a winner.

Personally, I’m already deeply invested in the Apple ecosystem and I’m not a frequent Amazon customer, so I’ve never had a Prime membership. I’m on a limited budget so I’ve been content to stick with Siri on my various mobile devices and wait for the HomePod. But if I were a Prime member I would have purchased an Echo because it would have made sense for me. When the HomePod ships I’ll be first in line. I see the value of a great-sounding speaker with more accurate microphones that will give me an even better Siri experience. I won’t be able to order Amazon products with the HomePod but I will have a speaker with fantastic audio playback and Siri, which is a trade-off I’m willing to make.

HomePod and the Siri Ecosystem


I’ve recently written about my hopes for a more proactive Siri. I’ve written about Siri quite a bit over the past couple years and it’s been mostly positive. Frankly, I think “she” is pretty fantastic and I call upon her many times a day. I remember when a friend first showed me his Echo a couple years back. I instantly wanted a Siri powered speaker by Apple. I’ve been waiting ever since. I will buy the HomePod the day it becomes available. No hesitation and with the same excitement and for the same reasons as I bought the AirPods the minute they were available: music, podcasts, Siri.

But much of the tech press has another take on the voice assistant market. Over the past couple of years it’s become fashionable in the tech media, especially among the Apple nerds who love to pride themselves on their very high standards, to complain about Siri while holding up Alexa and the Echo1.

Interestingly, Pew has recently come out with an article on the use of voice assistants by Americans. Not too surprisingly, 46% use voice assistants, and of those, 42% access them via smartphone. 14% access via computer or tablet. Only 8% access via a stand-alone device such as an Echo. My own experience and observation of family mirror this. Practically all of my extended family have and use Siri on a myriad of devices on a daily basis. But in that same group the Echo is in only one household.

There are a few exceptions to the common chorus of the bandwagon, and two of my favorites are Daniel Eran Dilger and Neil Cybart. They offer a more mature, big-picture analysis. It’s less about whether or not they are personally pleased with how a product suits them and more about the larger context and trends. They seem to do a much better job of taking into account how the potential interactions of the larger public will play out.

I agree with their recent analysis of the HomePod, digital assistants and devices. Their posts were less about Siri and more about the varied form factors of devices through which digital assistants are accessed as well as the larger function of those devices.

First, Daniel Eran Dilger over at AppleInsider recently discussed the intent of Apple’s upcoming HomePod. I agree with his take on it, nicely summarized by the article title, Apple’s HomePod isn’t about Siri, but rather the future of home audio. He does a great job of digging into the difference in the intended function of the devices, specifically the role of the devices in the home. This bit comparing the audio quality of HomePod to the Echo made me giggle:

It’s an emotional experience, which is exactly what Apple has been increasingly pursuing as it enhances its products. Amazon Alexa isn’t an emotional experience; it’s an intellectual one. It’s a polite conversation with a librarian who moonlights as a sales agent at an online warehouse and plays songs with the fidelity of a clock radio.

Over at Above Avalon Neil Cybart recently wrote a post in which he explores, in part, the bias of the media in relation to the perceived success of the platforms and ecosystems. As he has in the past, he does an excellent job of providing some context about where the market is at based on numbers rather than the din emanating from the excited bandwagon.

We are in the midst of a massive mindshare bubble involving stationary smart speakers in the home. While the press talk up the category with near breathless enthusiasm and positivity, there is a growing amount of evidence that stationary smart speakers powered by digital voice assistants do not represent a paradigm shift in computing. Instead, the stationary smart speaker’s future is one of an accessory, and it will be surpassed in prominence by wearables. It’s time to call out the stationary smart speaker market for what it is: a mirage.

On more than one occasion Neil has compared the consistent praise heaped upon the Echo to the criticism put upon the Apple Watch which, in terms of sales, has not only sold more units but at a greater profit. Amazon is practically giving away its Echo devices. Put another way, in terms of the number of form factors offered by each ecosystem, Apple’s is far more diverse and, as a result, more useful, and it’s also making a profit for the company.

Another of my favorites and the host of the Vector Podcast, Rene Ritchie, has recently covered the topic with guests in two different episodes.

The first, in which Rene interviewed Jan Dawson in episode 18, mirrored the points made by Neil Cybart. Jan points out that contrary to the fixation on the Echo, it’s actually Siri that has the largest number of users. The only way that Apple appears to be behind is if we focus only on home speaker hardware such as the Echo, a market Apple has yet to enter but will enter in early 2018. But he correctly points out that Alexa-based hardware, as a share of the voice assistant market, is actually very small. He also makes the point that voice, as a computer UI, is still only one in a larger pool of UIs and that it is often not appropriate for use in many settings.

In the second, Rene interviews Ben Bajarin in episode 35. In this interview Ben Bajarin disagrees somewhat with the above three takes. He suggests that there is a large and growing market for what he calls ambient computing. He suggests that Google is in the weakest position, with search being its differentiator. Amazon is in a very good position, with its differentiator being commerce. He goes on to say that Apple is very much a part of the market and that its differentiator is communications and services. He also suggests that while Apple is already in the game he looks for them to do a lot more.

In the second half of the interview there is a great conversation about Siri’s future. Ben contends that Siri is already quite good and that the general public is largely satisfied with it. They then delve into machine learning for Siri and the line that Apple is walking in regards to differential privacy as a technique for collecting large-scale social data, and how this contrasts with more personalized data as the basis for machine learning. It’s a great conversation.

I’m with Ben in hoping that Apple can figure out how to better personalize its approach. In short, many of us trust Apple with our data and would like Apple to use it for more fine-tuned machine learning for individuals. Their discussion concludes on the problem of making Siri in the HomePod work in the context of families. How can a communal device be used in both a communal context and a personal one? Or can it?

It will be interesting to see how Apple’s Siri ecosystem evolves. While Amazon offers various Echo devices, all of which are tied to the home², Apple offers global distribution in many languages in form factors ranging from wearable to pocketable to carried and soon, a stationary home device. In the Apple ecosystem I am connected to Siri and thus to a variety of services all of the time. Or, more precisely, anywhere I have an internet connection which, these days, is almost everywhere I am. In other words, Siri is already providing, to a great degree, ambient voice-controlled computing.

This plethora of devices has me covered in a variety of circumstances but I’m curious to see how that works in the home with the HomePod. I’ll have an iPhone and iPad sitting within range all the time. The watch on my wrist and the HomePod somewhere in the room. What will happen when I say “Hey Siri”? I’m just one person in a tiny house. I’m also curious what happens when you have 4 or 5 family members each with devices in a larger house. Currently Siri does a good job of responding only to the device owner and even does a pretty good job of responding with the right device when several “Hey Siri” devices are within range. I assume this will be no different when the HomePod is introduced.

The new year is around the corner and with it the HomePod. I’m looking forward to trying it out. I don’t doubt that Apple is working on making all of its Siri devices work well together. I’ve got a spot on my shelf waiting for the new arrival.


  1. I’ve come to dislike the Apple-oriented echo-chamber because they seem to approach every new thing from a very narrow perspective. To my ears and eyes, it comes off as snobbery. ↩︎
  2. And just a handful of countries ↩︎

A Proactive Siri

First, just a few notes about my Siri usage. I use Siri many times a day, calling on “her” from every iOS device I own. I’ve always wanted the pervasive, ever-present computer from Star Trek. Siri is one of my favorite iOS features. I tend to activate her via a mix of “Hey Siri”, tapping the AirPods, and the keyboard on the iPad. Most of my usage is HomeKit-related: adding calendar and to-do items, controlling music, and a good number of informational queries such as conversions, spelling, and factoid searches. But with Siri and, to my knowledge, all other voice assistants, the user makes the request first. It occurred to me that there are two times a day when I would like Siri to initiate a “session”.

Imagine starting your morning with a new, more proactive Siri. Essentially, I’d like a Siri wake-up greeting. Combine the iOS morning alarm with perhaps a couple of chimes followed by Siri offering a few configured options such as weather, a calendar review, and the morning news. Even better, what about “Siri Scenes” integrated with the Home app? Add a light or other devices to the above scene?

Then again, imagine a Siri “Goodnight” scene. Because bedtime might vary a bit more, this might be scheduled and, again, start off with a couple of gentle chimes, after which Siri would ask if I’m ready for bed. At that point I might say “in twenty minutes”, which would set a reminder. Twenty minutes later, the gentle chimes again, followed by Siri reading the forecast for tomorrow, events, reminders, turning off lights, etc. Then end with “would you like any music?” Or perhaps I’m the only one in the habit of going to bed to music? So this would be a set automation which would kick off at my usual bedtime but which could be delayed by whatever amount of time I respond with. If I’m working late I might say “give me 30 minutes” or “check back at 10”.
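Purely as an illustration of the flow I’m imagining, and not any real Siri, Shortcuts, or HomeKit API, the logic of that Goodnight routine might sketch out like this (the step names and the snooze-parsing rule are my own invented stand-ins):

```python
import re

# Hypothetical wind-down steps for the imagined "Goodnight" scene.
GOODNIGHT_STEPS = [
    "read tomorrow's forecast",
    "review tomorrow's events and reminders",
    "turn off the lights",
    "offer music",
]

def parse_delay(reply: str) -> int:
    """Return a requested snooze in minutes, or 0 to start now."""
    match = re.search(r"(\d+)\s*minutes?", reply.lower())
    return int(match.group(1)) if match else 0

def goodnight(reply: str) -> list:
    """Either schedule a check-back or run the wind-down steps."""
    delay = parse_delay(reply)
    if delay:
        return ["check back in {} minutes".format(delay)]
    return GOODNIGHT_STEPS

goodnight("give me 30 minutes")  # -> ["check back in 30 minutes"]
goodnight("yes, I'm ready")      # -> the four wind-down steps
```

The point of the sketch is just the shape of the interaction: a scheduled prompt, an optional delay taken from whatever the user says, and then a fixed sequence of scene actions.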

Wake-up and bedtime are two times a day when we have fairly consistent, repeatable routines, and it seems an ideal opportunity for Apple to dip into the possibility of offering a new level of Siri engagement. I think it would be a great experience.

The AirPods: Siri Everywhere!

Much has been made over the past year about Amazon's Alexa and Google's equivalent, both of which are available in different forms on different devices. In the process many have taken the opportunity to criticize Apple's Siri, often suggesting that Apple has fallen behind. I've written before about my fondness for Siri and the many ways I've found "her" useful over the past couple of years. Perhaps the two things the Echo has become most noted for are excellent accuracy in understanding dictation and the ever-growing list of available skills. I've no first-hand experience so I can't say much other than to acknowledge that, yes indeed, the list of "skills" is quite large and seemingly growing all the time. That said, at this moment the Echo is very limited in terms of availability in other countries. It's also mostly useful in the home.

I'll agree that my iPad and iPhone have not been perfectly accurate when I use Siri. I think I'd peg the accuracy at about 70% or a wee bit above that. It has worked well enough that I've continued to use it fairly often and have been generally happy with the results. With the new AirPods I'm seeing this greatly improved. Not only that, I am also finding that the AirPods are comfortable enough that they disappear into the background. Which is to say that while I'm aware that I have them in my ears I'm not distracted by them and so I tend to wear them far longer than any other headphone I've owned. In fact I'm leaving them in for much of the day with the exception of charging times.

I'm beginning to think of the AirPods as a persistent extension of Siri and I'd guess that Apple hopes this is the case for many who purchase the AirPods. I can certainly say that when I purchased them much of my interest was directly related to using Siri. Sure, I listen to music and podcasts daily and these are fine for both. But what I really wanted was an always present Siri that would more accurately understand my requests and do so more quickly than with my other bluetooth headphones or interaction with the phone directly. I've not been disappointed.

15 years ago I was that nerd who used "Speakable Items" on the Mac. It didn't work very well for me. But I tried. I've no doubt that more than one of my roommates at the time facepalmed as they walked by my room while I alternated between patient talking and near shouting, trying to interact with my Mac by voice. Well, here we are. It's 2017 and this is not yet the intelligent, ever-present computer from Star Trek, nor is it the AI found in the movie Her, but the AirPods with Siri are a step in the right direction.

Until I had the AirPods I'd been hoping for a stationary device like the Echo but no longer. Assuming I have the AirPods in my ear and my iPhone within 60 feet I can, in all likelihood, make a request of Siri that will be successfully answered. In many ways this feels like the best of both worlds: the Echo/Google Home living room device and the mobile Siri model of Apple. When I'm at home I have the freedom to roam with or without the iPhone and still have Siri. When I get out for a walk or errands in town I take the iPhone and continue to have Siri.

Siri is far from perfect and there is much room for improvement mainly in that I'd like an expansion of what "she" can do for me. I don't doubt that Apple is working on this and that we will see a constant expansion of the things that the OS and third party apps can do. The AirPods and Siri feel like the future. Like the iPhone and iPad, they are the tech of science fiction being born into the present.