Monthly Archives: January 2018

Siri and voice first

In a recent episode of his Vector podcast, Rene Ritchie had “voice first” advocate Brian Roemmele on as a guest. Rene is probably my current favorite Apple blogger and podcaster, and Vector is excellent.

As I listened to this episode I found myself nodding along for much of it. Roemmele is very passionate about voice first computing and certainly seems to know what he’s talking about. Regarding Siri, his primary argument seems to be that Apple made a mistake in holding Siri back after purchasing the company. At the time Siri was an app and had many capabilities that it no longer has. Rather than taking Siri further and developing it into a full-fledged platform, Apple reined it in and took a more conservative approach. In the past couple of years it has been adding capabilities back in, via SiriKit, in what it calls domains.

Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn’t running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.
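
I’m not a developer, but for the curious, here’s a minimal Swift sketch of what that description boils down to, closely following Apple’s own Intents extension template for the Messages domain. Assume the extension’s Info.plist already lists INSendMessageIntent among its supported intents; the actual message delivery is omitted because that part is app-specific.

    import Intents

    // Principal class of a SiriKit Intents extension. Siri does the voice
    // and natural language work, then hands this code a structured intent.
    class IntentHandler: INExtension, INSendMessageIntentHandling {

        // Return an object able to handle the given intent;
        // here we handle everything ourselves.
        override func handler(for intent: INIntent) -> Any {
            return self
        }

        // Called once Siri has resolved the recipients and message text.
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // A real app would pass the message to its own sending code here.
            let activity = NSUserActivity(activityType: NSStringFromClass(INSendMessageIntent.self))
            completion(INSendMessageIntentResponse(code: .success, userActivity: activity))
        }
    }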

So, they scaled it back and are rebuilding it. I’m not a developer, but my understanding is that they’ve done this, in part, to allow for more varied and natural use of language. But as with all things internet and human, people often don’t want to be bothered with the details. They want what they want and they want it yesterday. In contrast to Apple’s handling of Siri we have Amazon, which has its pedal to the floor.

Roemmele goes on to discuss the rapid emergence of Amazon’s Echo ecosystem and the growth of Alexa. Within the context of this podcast and what I’ve seen of his writing, much of his interest and background seems centered on commerce and payment as they relate to voice. That said, I’m just not that interested in what he calls “voice commerce”. I order from Amazon maybe six times a year. Now and in the foreseeable future I get most of what I need from local merchants. And even when I do order online, I do so visually. I would never order via voice because I have to look at the details. Perhaps I would use voice to reorder certain staples such as toilet paper or toothpaste, but that’s the extent of it.

What I’m interested in is how voice can be a part of the computing experience. There are those of us who use our computers for work. For the foreseeable future I see myself interacting with my iPad visually because I can’t update a website with my voice. I can’t design a brochure with my voice. I can’t update a spreadsheet with my voice. I can’t even write with my voice, because my brain has been trained to compose while reading what I’m writing on the screen.

But this isn’t the computing Roemmele is discussing. His focus is “voice first devices”, those that don’t even have screens, devices such as the Echo and the upcoming HomePod¹. And the tasks he’s suggesting will be done by voice first computing are different. And this is where it gets a bit murky.

Right now my use of Siri is via the iPhone, iPad, AppleWatch and AirPods. In the near future I’ll have Siri in the HomePod. How do I make the most of voice first computing? What are these tasks that Siri will be able to do for me, and why is Roemmele so excited about voice first computing? The obvious stuff would be the sorts of things assistants such as Siri have been touted as being great for: asking about the weather, adding things to reminders, setting alarms, getting the scores for our favorite sports ball teams and so on. I and many others have written about these sorts of things that Siri has been doing for several years now. But what about the less obvious capabilities?

At one point in the podcast the two discuss using voice for such things as sending texts. I often use dictation to send a text in Messages while I’m out walking, and I see the benefit of that. But dictation, whether it is dictating to Siri or directly to Messages or any other app, at least for me, requires an almost different kind of thinking. It may be that I am alone in this. But it is easier for me to write with my fingers on the keyboard than it is to “write” with my mouth through dictation. It might also be that this is just a matter of retraining my brain. I can see myself dictating basic notes and ideas. But I don’t see myself “writing” via dictation.

At another point Roemmele suggests that apps and devices will eventually disappear as they are replaced by voice. At this point I really have to draw a line. I think this is someone passionate about voice first going off the rails. I think he’s let his excitement cloud his thinking. Holding devices, looking, touching, swiping, typing and reading: these are not going away. He seems to want it both ways, though at various points he acknowledges that voice first doesn’t replace apps so much as it is a shift in which voice becomes more important. That I can agree with. I think we’re already there.

Two last points. First, about the tech pundits. Too often people let their own agendas and preferences color their predictions and analysis. The lines blur between their hopes and preferences and what actually is. No one knows the future, but too often pundits act as if they do. It’s kinda silly.

Second, what seems to be happening with voice computing is simply that a new interface has suddenly become useful, and it absolutely seems like magic. For those of us who are science fiction fans it’s a sweet taste of the future in the here and now. But, realistically, its usefulness is currently limited to the fairly trivial daily tasks mentioned above. Useful, convenient and delightful? Yes, absolutely. Two years ago I had to go through all the trouble of putting my finger on a switch, pushing a button or pulling a little chain; now I can simply issue a verbal command. No more trudging through the effort of tapping the weather app icon on my screen, not for me. Think of all the calories I’ll save. I kid, I kid.

But really, as nice an addition as voice is, the vast majority of my time computing will continue to be with a screen. I don’t doubt that voice interactions will become more useful as the underlying foundation improves and I look forward to the improvements. As I’ve written many times, I love Siri and use it every day. I’m just suggesting that in the real world, adoption of the voice interface will be slower and less far-reaching than many would like.


  1. Actually, the HomePod technically has a screen, but it’s not a screen in the sense that an iPhone has a screen. ↩︎

Panic, Transmit and Keeping My Options Open

I’ve been coding websites since 1999 and doing it for clients since 2002. I started using Coda for Mac when the first version came out, and when Transmit and Coda became available for iOS I purchased both. When I transitioned to the iPad as my primary computer in 2016 those two apps became the most important on my iPad. But no more.

A couple of weeks ago Panic announced that they would no longer be developing Transmit for iOS. They’d hinted in a blog post a year or two ago that iOS development was shaky for them. They say, though, that Coda for iOS will continue. But I’m going to start trying alternative workflows. In fact, I’ve already put one in place and will be using it for the foreseeable future. Why do this if Coda still works and has stated support for the future?

I’m not an app developer. I’m also not an insider at Panic. But as a user, I find it frustrating that we are over three full months past the release of iOS 11 and seven months past WWDC, and Panic’s apps still do not support drag and drop. Plenty of other apps that I use do. I find myself a bit irritated that Panic occupies this pedestal in the Apple nerd community. It’s true that their apps are visually appealing. Great. I agree. But how about adding support for important functionality? I really love Coda and Transmit but I just don’t feel the same about Panic as a company. Sometimes it seems like they’ve got plenty of time and resources for whimsy (see their blog for posts about their sign and fake photo company), and that’s great, I suppose. But as a user who depends on their apps, I’d rather they focus on the apps. I’m on the outside looking in and it’s their company to do with as they please. But as a user I’ll have an opinion based on the information I have. And though they’ve said Coda for iOS will continue, it’s time to test other options.

I’ve been using FileBrowser for three years, just as a way to access local files on my Mac mini. I’d not thought much about how it might be used as my FTP client for website management in conjunction with Apple’s new Files app. Thanks to Federico’s recent article on FTP clients I was reminded that FileBrowser is actually a very capable FTP app. So, I set up a couple of my FTP accounts. With this set-up I can easily access my servers on one side of my split screen via FileBrowser and my “local” iCloud site folders in Files on the other side. I really like the feel of it. The Files app is pretty fantastic and being able to rely on it in this set-up is a big plus. It feels more open, which brings me to the next essential element in this process: editing HTML files.

One of my frustrations with Coda and Transmit was that my “local” files were stuck in a shared Coda/Transmit silo. It was nice that they were interchangeable between the two, but I could not locate them in Dropbox or iCloud. With this new set-up I needed a text editor that could use iCloud as local file storage. I’ve got two options that I’m starting with; both have built-in FTP as well as iCloud as a file storage option. Textastic is my current favorite. Another is GoCoEdit. Both have a built-in preview or the option to use Safari as a live preview. So, as of now, I open my coding/preview space and use a split between Textastic and Safari. I haven’t used Textastic enough to have a real opinion about how it feels as an editor compared to Coda’s editor. But thus far it feels pretty good. My initial impression is that navigation within documents is a bit snappier and jumping between documents using the sidebar is as fast as Coda’s top tabs.

So, essentially, this workflow relies on four apps in split screen mode in two spaces. One space is for file transfer, the other is for coding/previewing. Command-Tab gets me quickly back and forth between them. I often get instructions for changes via email or Messages. Same for files such as PDFs and images. In those cases it is easy enough to open Mail or Messages as a third app in Slide Over that I can refer to as I edit, or use for drag and drop into Files/FileBrowser.

It’s only been a few days with this new four-app workflow, but in the time I’ve used it I like it a lot. I get drag and drop and synced iCloud files (which also means backed-up files, thanks to the Mac and Time Machine).

Hey Siri, give me the news

Ah, it’s just a little thing but it’s a little thing I’ve really wanted since learning of a similar feature on Alexa. In fact, I just mentioned it in yesterday’s post. We knew this was coming with HomePod and now it’s here for the iPad and iPhone too. Just ask Siri to give you the news and she’ll respond by playing a very brief NPR news podcast. It’s perfect, exactly what I was hoping for. I’ve already made it a habit in the morning, then around lunch and again in the evening.

Alexa Hype

A couple of years ago a good friend got one of the first Alexa devices available. I was super excited for them but I held off because I already had Siri. I figured Apple would eventually introduce its own stationary speaker and I’d be fine until then. But as a big fan of Star Trek and sci-fi generally, I love the idea of always-present voice-based assistants that seem to live in the air around us.

I think he and his wife still use their Echo every day in the ways I’ve seen mentioned elsewhere: playing music, getting the news, setting timers or alarms, checking the weather, controlling lights, checking the time, and shopping from Amazon. From what I gather that is pretty typical usage for Echo and Google Home owners. That list also fits very well with how I and many other people use Siri, with the exception of the news briefing, which is not yet a Siri feature. As a Siri user I do all of those things except shop at Amazon.

The tech media has recently gone crazy over the pervasiveness of Alexa at the 2018 CES and the notable absence of Siri and Apple. Ah yes, Apple missed the boat. Siri is practically dead in the water, or at least trying to catch up. It’s a theme that’s been repeated for the past couple of years. And really, it’s just silly.

Take this recent story from The Verge reporting on research from NPR and Edison Research:

One in six US adults (or around 39 million people) now own a voice-activated smart speaker, according to research from NPR and Edison Research. The Smart Audio Report claims that uptake of these devices over the last three years is “outpacing the adoption rates of smartphones and tablets.” Users spent time using speakers to find restaurants and businesses, playing games, setting timers and alarms, controlling smart home devices, sending messages, ordering food, and listening to music and books.

Apple iOS devices with Siri are all over the planet, rather than just the three or four countries the Echo is available in. Look, I think it’s great that the Echo exists for people who want to use it. But the tech press needs to pull its collective head out of Alexa’s ass and find the larger context and some balance in how it discusses digital assistants.

Here’s another bit from the above article and research:

The survey of just under 2,000 individuals found that the time people spend using their smart speaker replaces time spent with other devices including the radio, smart phone, TV, tablet, computer, and publications like magazines. Over half of respondents also said they use smart speakers even more after the first month of owning one. Around 66 percent of users said they use their speaker to entertain friends and family, mostly to play music but also to ask general questions and check the weather.

I can certainly see how a smart speaker is replacing radio, as 39% reported in the survey. But to put the rest in context, it seems highly doubtful that people are replacing the other listed sources with a smart speaker. Imagine a scenario where people have their Echo playing music or a news briefing. Are we to believe that they are sitting on a couch staring at a wall while doing so? Doing nothing else? No. The question in the survey was: “Is the time you spend using your Smart Speaker replacing any time you used to spend with…?”

So, realistically, the smart speaker replaces other audio devices such as radio, but that’s it. People aren’t using it to replace anything else on that list. An Echo, by its very nature, can’t replace things which are primarily visual. As fantastic as Alexa is for those who have access to it, for most users it still largely comes down to the handful of uses listed above. In fact, in another recent article on smart speakers, The New York Times throws a bit of cold water on the frenzied excitement: Alexa, We’re Still Trying to Figure Out What to Do With You

The challenge isn’t finding these digitized helpers, it is finding people who use them to do much more than they could with the old clock/radio in the bedroom.

A management consulting firm recently looked at heavy users of virtual assistants, defined as people who use one more than three times a day. The firm, called Activate, found that the majority of these users turned to virtual assistants to play music, get the weather, set a timer or ask questions.

Activate also found that the majority of Alexa users had never used more than the basic apps that come with the device, although Amazon said its data suggested that four out of five registered Alexa customers have used at least one of the more than 30,000 “skills” — third-party apps that tap into Alexa’s voice controls to accomplish tasks — it makes available.

Now, back to all the CES-related news of Alexa being embedded in, or made compatible with, new devices. I’ve not followed it too closely but I’m curious about how this will actually play out. First, of course, there’s the question of which of these products will actually make it to market. CES announcements are notorious for being just that: announcements, for products that never ship or don’t ship for years. But regardless, assuming many of them do, I’m just not sure how it all plays out.

I’m imagining a house full of devices, many of which have microphones and Alexa embedded in them. How will that actually work? Is the idea to have Alexa listen and respond not just from a speaker, as she currently does, but from all of these devices, be they toilets, mirrors, refrigerators… If so, that seems like overkill and unnecessary cost. Why not just a smart speaker hub that then intelligently connects to devices? Why pay extra for a fridge with a microphone if I have another listening device 10 feet away? This begins to seem a bit comical.

Don’t get me wrong, I do see the value of increasing the capabilities of our devices. I live in rural Missouri and have a well house heater 150 feet away from my tiny house. I now have it attached to a smart plug, and it’s a great convenience to be able to ask Siri to turn it on and off when the weather keeps popping above freezing only to drop back below eight hours later. It’s also very nice to be able to control lights and other appliances, all through a common voice interface.
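
For the curious, here’s roughly what sits underneath that convenience. This is a minimal Swift sketch using Apple’s HomeKit framework, not my actual setup; the accessory name “Well House Heater” and the class around it are hypothetical stand-ins for whatever a “Hey Siri, turn on the heater” request resolves to.

    import HomeKit

    // A sketch of switching a HomeKit smart plug on in code: find the
    // accessory by name, then write to its power-state characteristic.
    class HeaterSwitch: NSObject, HMHomeManagerDelegate {
        let manager = HMHomeManager()

        override init() {
            super.init()
            manager.delegate = self // homes load asynchronously
        }

        func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
            guard let home = manager.primaryHome,
                  let plug = home.accessories.first(where: { $0.name == "Well House Heater" }),
                  let power = plug.services
                      .flatMap({ $0.characteristics })
                      .first(where: { $0.characteristicType == HMCharacteristicTypePowerState })
            else { return }

            // Writing true switches the plug (and so the heater) on.
            power.writeValue(true) { error in
                if let error = error { print("Couldn't switch the heater: \(error)") }
            }
        }
    }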

But back to CES, the tech press and the popular narrative that Alexa has it all and that Siri is missing out, I just don’t see it. A smart assistant, regardless of the device it lives in, exists to allow us to issue a command or request, and have something done for us. I don’t yet have Apple’s HomePod because it’s not available. But as it is now, I have a watch, an iPhone and two iPads which can be activated via “Hey Siri”. I do this in my home many times a day. I also do it when I’m out walking my dogs. Or when I’m driving or visiting friends or family. I can do it from a store or anywhere I have internet. If we’re going to argue about who is missing out, the Echo and Alexa are stuck at home while Siri continues to work anywhere I go.

So, to summarize: yes, stationary speakers are great in that their far-field microphones work very well for a currently limited set of tasks, tasks which are also possible with the near-field mics found in iPhones, iPads, AirPods and the AppleWatch. The benefit of the stationary devices is accurate responses when spoken to from anywhere in a room. A whole family can address an Echo, whereas only individuals can address Siri on their personal devices, and they have to be near their phone to do so, or, in the case of wearables such as AirPods or the AppleWatch, have them on their person. By contrast, the stationary devices are useless when we are away from home, while mobile devices keep working.

My thought is simply this: contrary to the chorus of the bandwagon, all of these devices are useful in various ways and in various contexts. We don’t have to pick a winner. We don’t have to have a loser. Use the ecosystem(s) that works best for you. If it’s Apple and Amazon, enjoy them both and use the devices in the scenarios where they work best. If it’s Amazon and Google, do the same. Maybe it’s all three. Again, these are all tools, many of which complement each other. Enough with the narrow, limiting thinking that we have to rush to the pronouncement of a winner.

Personally, I’m already deeply invested in the Apple ecosystem and I’m not a frequent Amazon customer, so I’ve never had a Prime membership. I’m on a limited budget, so I’ve been content to stick with Siri on my various mobile devices and wait for the HomePod. But if I were a Prime member I would have purchased an Echo because it would have made sense for me. When the HomePod ships I’ll be first in line. I see the value of a great-sounding speaker with more accurate microphones that will give me an even better Siri experience. I won’t be able to order Amazon products with the HomePod, but I will have a speaker with fantastic audio playback and Siri, which is a trade-off I’m willing to make.

Brydge Keyboard Update

It’s been almost two months of using the Brydge keyboard, and it seems to be holding up very well in that short time. The only defect I’ve discovered is that the rightmost edge of the space bar does not work; my thumb has to be at least a half inch over to register a press. Not a deal breaker, but it is something I’ve had to adjust to.

Also, I’ve discovered something positive: the Brydge hinges rotate all the way to a position parallel with the keyboard. In other words, the iPad folds back until it lies flat, level with the keyboard. I initially thought this would be useless. Why would I ever want to do this?

As it turns out, it does indeed come in very handy. When I’m lounging on the futon to read, I can put the iPad in this position and let the keyboard, resting in my lap or on a pillow next to me, serve as a stand that elevates the iPad to eye level. I don’t have to look down toward my lap as one does with a standard laptop. Instead, the iPad seemingly floats in front of my face. It’s actually kind of fantastic and a very comfortable position for reading. And interestingly, it balances perfectly; I barely have to hold the iPad or the keyboard. It’s kinda weird, actually. I just lightly grasp the pair right above the keyboard and use my thumb to scroll. I can also easily shift my right hand down to the arrow keys to scroll via the keyboard while browsing or reading. If I need to do some real typing, the motion to fold the two into a normal laptop position is fluid and natural, taking less than a second. No doubt this has been a very nice surprise feature.