Revisiting iTunes with HomePod

Like many, I’ve been using iTunes since its first versions. Over the past year that use dwindled a great deal as my music playing was mostly via Music on an iOS device. And in the couple years before that I’d been using Plex on iOS devices and AppleTV to access my iTunes library on the Mac because, frankly, the home sharing was pretty crappy. Alternatively, I would also use the Remote app on an iOS device to control iTunes on the Mac, which worked pretty well. The downside was that I didn’t have a decent speaker. I alternated between various (and cheap) computer and/or Bluetooth speakers and the built-in TV speakers. None of them were great but they were tolerable. I live in a fairly small space, a “tiny house,” so even poor to average speakers sound okay.

Today I’ve got the HomePod, and after a year of enjoying Apple Music on iPads and iPhone I’ve added lots of music that I usually just stream, often from my recently played or heavy rotation lists. But two things have surfaced now that I’ve been using HomePod for a few days. First, as I mentioned in my review of HomePod, I’m not very good at choosing music without a visual cue. Second, I live in a rural location and when my satellite data allotment runs out, streaming Apple Music becomes less dependable. Sometimes it’s fine. Sometimes not. Such was the case last night. So, after a year away from the iTunes library on my Mac, I opened the Apple Remote app. I set the output for iTunes to the HomePod and spent some time with my “old” music, all streaming to the best speaker I’ve ever owned. So nice.

This morning it occurred to me that while I’m back on my “bonus” data (2am-8am) I should consider downloading some of the music I’ve added to my library over the past few months of discovery through Apple Music. And in looking at that list I see that with each month or two, as I discover new music, the previous new discoveries roll out of my attention span. There are “new” things I discovered 5 months ago that I enjoyed but then forgot. It’s a great problem to have! So I’ve spent the morning downloading much of the music I’ve added to my library over the past year. I never intended to actually download any of it as the streaming has worked so well. But with the HomePod I see now that keeping my local iTunes/Music library up-to-date has great benefit.

Remote App

So, how well does this new music playing process work? I rarely touch my Mac. It’s a server and I use it for a few design projects that I cannot do on my iPad. So, as mentioned above, I’ve been using the original iOS “Remote” app, which opens up an iTunes-like interface and works very well for choosing music on the Mac, which plays via AirPlay to HomePod. Of course I can still use Siri on the HomePod for all of its normal features. The only thing I do in the Remote app is choose the music. Apple’s not done much with the interface of that app so it looks pretty dated at this point. Actually, it looks very much like iTunes, just a slightly older version of iTunes. But even so, being able to easily browse by albums, artists, songs, and playlists is very comfortable. It fits a little better with my lifelong habit of choosing music visually.

Why not just access my local iTunes music via the Home Sharing tab in the Music app on my iPad or iPhone, which could then be AirPlayed to the HomePod? I suppose this would be the ideal as it would allow me to stay in the Music app. Just as the Remote app allows for browsing my Mac’s music library, so does the Music app. But performance is horrendous. When I tap the Home Sharing tab and then the tab for my Mac Mini I have to wait a minimum of a minute, sometimes more, for the music to show up. Sometimes it never shows up. If I tap out of the Home Sharing library I have to wait again the next time I try to view it. It’s terrible. By contrast, the Remote app loads music instantly. There is, at most, a second of lag.

But what’s even worse is that the Music app does not show any of the Apple Music I’ve downloaded to my local iTunes library. It really is a terrible experience and I’m not sure why Apple has done it this way. So, the Remote app wins easily as it actually lets me play all of my new music and does so with an interface that updates instantly, even if it is dated.

I suspect that my new routine will be to use Apple Music for discovery via Apple’s playlists and suggested artists when I’m out walking, which is usually a minimum of an hour a day. My favorite discoveries will get downloaded to my local library, and when at home I’ll spend more time accessing my iTunes library via the Remote app. All in all, I suspect that I’ll be enjoying more of my library, old and new, with this new mix and, of course, all of it on this great new speaker!

HomePod: Sometimes great, sometimes just grrrrrrrrrrr.

Tuesday Morning
I’m getting out of bed as two dogs eagerly await a trip outside, which they know will be followed by breakfast. I ask Siri to play the Postal Service. She responds, “Here you go,” followed by music by the Postal Service. The music is at about 50% volume. Nice. But in three full days of use I’m feeling hesitant about HomePod and the Siri within. And the next moment illustrates why. I slip on my shoes and jacket and ask Siri to pause. The music continues. I say it louder and the iPhone across the room pipes up: “You’ll need to unlock your iPhone first.” I ignore the iPhone, look directly at HomePod (5 feet away) and say louder, as I get irritated, “Hey Siri, pause!” Nothing. She does not hear me (maybe she’s enjoying the music?). By now the magic is long gone, replaced by frustration. I raise my voice to the next level, which is basically shouting, and finally HomePod responds and pauses the music. Grrr.

I go outside with my canine friends and upon return ask Siri to turn off the porch light. The iPhone across the room responds and the light goes off. I ask her to play and the HomePod responds and the Postal Service resumes. I get my coffee and iPad and sit down to finish off this review. I lay the iPhone face down so it will no longer respond to Hey Siri. Then I say, “Hey Siri, set the volume to 40%.” Nothing. I say it louder and my kitchen light goes off, followed by Siri happily saying “Good Night enabled.” Grrrrrrrrrrrr. I say Hey Siri loudly and wait for the music to lower, then say “set the Kitchen light to 40%” and she does. The music resumes and I say Hey Siri again, wait, then say “Play the Owls” and she does. I’d forgotten that I also wanted to lower the volume. But see how this all starts to feel like work? There’s nothing magical or enjoyable about this experience.

Here’s what I wrote Sunday morning as I worked on this review:

“When I ordered the HomePod I had no doubt I would enjoy it. Unlike so many that have bemoaned the missing features I was happy to accept it for what Apple said it was. A great sounding speaker with Apple Music and Siri. Simple.

It really is that simple. See how I did that? Apple offered the HomePod and I looked at the features and I said yes please.”

I then proceeded to write a generally positive review, which is below and which was based on my initial impressions after a day and a half of use. By Monday I’d edited it to add more details, specifically the few failures I’d had with Siri and the frustration of the iPhone answering when I didn’t want it to.

I went into the HomePod expecting a very positive experience. And it’s mostly played out that way. But it’s interesting that by Tuesday morning my expectation of failure and frustration had risen. Not because HomePod is becoming worse; I’d say it’s more about the gradual accumulation of failures. They are the exception to the rule but happen often enough to create a persistent sense of doubt.

Set-up
As has been reported, it’s just like the AirPods. I was done in two minutes. I did nothing other than plug it in and put my phone next to it. I tapped three or four buttons and entered a password. Set-up could not possibly be any easier.

Siri
In a few days of use I’m happy to report that HomePod has performed very well. With almost every request I have made, Siri has provided exactly what I asked for. My hope and expectation was that Siri on HomePod would hear my requests at a normal room voice. While iPad and iPhone both work very well, probably at about 85% accuracy, I have to be certain to speak loudly if I’m at a distance. Not a yell1, but just at or above normal conversational levels. With HomePod on a shelf in my tiny house, Siri has responded quickly and with nearly 100% accuracy, and that’s with music playing at a fairly good volume. Not only do I not have to raise my voice, I’ve been careful to keep it at normal conversational tones or slightly lower. I’ll say that my level is probably slightly lower than what most people in the same room would easily understand with the music playing.

For the best experience with any iOS device I’ve learned not to wait for Siri. I just say Hey Siri and naturally continue with the rest of my request. This took a little practice because early on I think Siri required a slight pause, or so it seemed. Not anymore. But there’s no doubt, Siri still makes mistakes, even with music requests, which are supposedly her strongest skill set.

The first was not surprising. I requested music by Don Pullen, a jazz musician that a friend recommended. I’d never listened to him before and no matter how I said his name, Siri just couldn’t get it. She couldn’t do it from iPhone or iPad either. Something about my pronunciation? I tried probably 15 times with no success. I did, however, discover several artists with names that sound similar to Don Pullen. I finally turned on Type to Siri and typed it in and, sure enough, it worked. I expect there are other names, be they musicians or things outside of music, that Siri just has a hard time understanding. I’ve encountered it before but not too often. The upside: the next morning I requested Don Pullen and Siri correctly played Don Pullen. Ah, sweet relief. A sign that she is “learning”?

Another fail that seems like a learning process for Siri: the first time I requested REM Unplugged 1991/2001: The Complete Sessions she failed because I didn’t have the full name. I just said “REM Unplugged” and she started playing an REM radio station. When I said the album’s full name it worked. I went back a few hours later, just said “REM Unplugged,” and it worked. Again, my hope is that she learns what it is I’m listening to so that in the future a long album name or a tricky artist name will not confuse her. We’ll see how it plays out (literally!).

Yet another failure, and this one really surprised me. I’ve listened to the album “Living Room Songs” by Olafur Arnalds quite a bit. I requested Living Room Songs and she began playing the album Living Room by AJR. Never heard of it, never listened to it. So, that’s a BIG fail. There’s nothing difficult about understanding “Living Room Songs,” which is an album in my “Heavy Rotation” list. That’s the worst kind of fail.

One last trouble spot worth mentioning. I have Hey Siri turned on on both my iPhone and Apple Watch. Most of the time the HomePod is the one that picks up, but not always. On several occasions both the phone and watch have responded. I’ve gotten in the habit of keeping the phone face down but I shouldn’t have to remember to do that. I definitely see room for improvement here.

I’ve requested the other usual things during the day with great success: I’ve gotten the latest news, played the most recent episode of one of my regular podcasts, gotten the weather forecast and current temperature, sent a few texts, used various HomeKit devices, checked the open hours of a local store and created a few reminders. It all worked the first time.

There were a couple of nice little surprises. When changing the volume, it’s possible to just request that it be “turned up a little bit” or “down a little bit.” I’m guessing there is a good bit of that natural language knowledge built in and we only ever discover it by accident. Also, I discovered that when watching video on the AppleTV, if the audio is set to HomePod, Siri works for playback control, so there’s no need for the Apple remote! This works very well. Not only can Siri pause playback but fast forward and rewind as well.

Audio Quality
Of course Apple has marketed HomePod first and foremost as a high quality speaker, and a smart Siri speaker second. I agree with the general consensus that the audio quality is indeed superb. For music, and as a sound system for my TV, I am very satisfied. My ears are not as well tuned as some so I don’t hear the details of the 3D “soundstage” that some have described. I subscribe to Apple Music so that’s all that matters to me, and it works very well. Other services and third party podcast apps can be played from a Mac or iOS device via AirPlay to HomePod. I also use Apple’s Podcasts app (specifically for the Siri integration) so it’s not an issue for me.

Voice First: Tasks and Music
The idea of voice first computing has caught on among some in the tech community who are certain that it is the future. I have my doubts. Even assuming perfect hardware that always hears perfectly and parses natural language requests perfectly (we’re not there yet), I have problems with the cognitive load of voice computing. I’ll allow that it might just be a question of retraining our minds for a while. It’s probably also a process of figuring out which things are better suited for voice. Certain tasks are super easy and tend to work with Siri via whatever device. This is the list of usual things people do because they require very little thinking: setting timers, alarms, and reminders, controlling devices, getting the weather.

But let’s talk about HomePod and Siri as a “musicologist” for a moment. An interesting thing about playing music, at least for me, is that I often don’t know what it is I want to play. When I was a kid I had a crate of records and a box of cassette tapes. I could easily rattle off 10 to 20 of my current favorites. Over time it changed and the list grew. But it was always a list I could easily remember. Enter iTunes and eventually Apple Music. My music library has grown by leaps and bounds. My old favorites are still there but they are now surrounded by a seemingly endless stream of possibility. In a very strange way, choosing music is now kind of difficult because it’s overwhelming. On the one hand I absolutely love discovering new music. I’m listening to music I never would have known of were it not for Apple Music. I’ve discovered I actually like certain kinds of jazz. I’m listening to an amazing variety of ambient and electronic music. Through playlists I’ve discovered all sorts of things. But if I don’t have a screen in front of me the chances of remembering much of it are nil. If I’m lucky I might remember the name of a playlist, but even that is difficult as there are so many being offered up.

So while music on the HomePod sounds fantastic when it’s playing, I often have these moments of “what next?” And in those moments my mind is often blank and I need a screen to see what’s possible. I’m really curious to know how other people who are using voice-only music devices decide what they want to play next.

Conclusion
There isn’t one. This is the kind of device that I want to have. I’m glad I have it. I enjoy it immensely. It is a superb experience until it isn’t, which is when I want to throw it out a window. Hey Apple, thanks?


  1. Well, sometimes a yell is actually required. ↩︎

Using an iPad to maintain websites – my workflow

A couple weeks ago I wrote about my website management workflow changing up a bit due to Panic’s recent announcement that they were discontinuing Transmit for iOS. To summarize: yes, Transmit will continue to work for the time being, and Panic has stated that it will continue developing Coda for iOS. But they’ve been slow to adopt new iOS features such as drag and drop while plenty of others are already offering that support. So, I’ve been checking out my options.

After two weeks with the new workflow on the iPad I can say this was a great decision and I no longer consider it tentative or experimental. This is going to stick and I’m pretty excited about it. I’ve moved Coda off my dock and into a folder. In its place are Textastic and FileBrowser. Not only is this going to work, it’s going to be much better than I expected. Here’s why.

iCloud Storage, FTP, Two Pane View
Textastic allows my “local” file storage to be in iCloud. So, unlike with Coda, my files are now synced between all devices. Next, Textastic’s built-in ftp is excellent. And I get the two-pane file browser I’ve gotten used to with Transmit and Coda: local files on the left, server files on the right. The html editor is excellent and is, for the most part, more responsive than Coda’s. Also, and this is really nice as it saves me extra tapping, I can upload right from a standard share button within the edit window. Coda requires switching out of the edit window to upload changes.

Drag and Drop
Unlike Transmit and Coda, the developers of FileBrowser have implemented excellent drag and drop support. I’ve set up ftp servers in FileBrowser and now it’s a simple action to select multiple files from practically anywhere and drag them right onto my server. Or, just as easily, because I’ve got all of my website projects stored in iCloud, I can drag and drop from anywhere right into the appropriate project folder in the Files app, then use the ftp server in Textastic to upload. Either way works great. Coda/Transmit do not support drag and drop between apps and are a closed silo. The new workflow is much more open, with less friction.

Image Display and Editing
One benefit of FileBrowser is the display of images. In the file view, thumbnails on the remote server are nicely displayed. If I need to browse through a folder of images at a much larger size I can do that too, as it has a full-screen image display that allows for swiping through. Fantastic, and not something offered by Transmit or Coda. Also, from a list view in either Files or FileBrowser, local or remote, I can easily drag and drop an image into Affinity Photo for editing. Or, from the list view, I can share/copy the photo to Affinity Photo (or any image editor).

Textastic and Files
This was another pleasant surprise. While I’ll often get into editing mode and just work from an app, in this case Textastic, every so often I come at the task from another app. Say, for example, I’ve gotten new images emailed from a client, as happened today. I opened Files in split view with Mail. In two taps I had the project folder open in Files. A simple drag and drop and my images were in the folder they needed to go to. The client also had text in the body of the email for an update to one of his pages. I copied it, then tapped the html file in Files, which opened right up in Textastic. I made the change, then uploaded the images and html files right from Textastic.

Problems?
Thus far I’ve encountered only one oddity with this new workflow, and it has to do with that last point: editing Textastic files by selecting them from within the Files app. As far as I can tell, this is not creating a new copy or anything; it is editing the file in place within Textastic. But any file I’ve accessed via Files shows up as a slight variation in the recents list within Textastic. Same file, but the app seems to treat it as a different file, so it shows up twice in the recent files list. Weird. It is just one file though, and my changes are intact regardless of how I open it. As a user it seems like a bug, but it may just be “the way it works.”

Using HomeKit

Smart Plugs
Last spring I finally purchased my first smart plug, a HomeKit-compatible plug from KooGeek. It worked. I bought a second. A few weeks later the local Walmart had the iSP6 HomeKit-compatible plugs from iHome on sale, only $15. I bought three. My plan was to use these with lights, and to have one for my A/C in the summer to be swapped over to the heater in my well-house in the winter. I’m pretty stingy in my use of energy, so in the winter I make it a point to keep that heater off and only turn it on when I must, which requires a good bit of effort on my part. I don’t mind walking out to the well house, as I can always use the steps, but it’s the mental tracking of it and the occasional forgetting that is bothersome. Having a smart plug makes it convenient to power it on and off, but I’m still having to remember to keep tabs.

Automations
Enter automations. The Home app gets better with each new version. Using automations it is now possible to trigger a scene, a device, or multiple devices at specific times, at sunset/sunrise, or a set time before or after sunset/sunrise. Very handy for a morning light but not too helpful for my well-house heater. But wait: I can also set up an automation for a plug based on a HomeKit sensor such as the iHome 5-in-1 Smart Monitor. I put the monitor in the well-house and create an automation to turn on the heater if the temperature dips to 32°F. I’ve turned my not-so-smart heater into a smarter one which will keep my water from freezing with no effort from me. Even better, it will reduce my electricity use because of its accuracy.
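Under the hood, the Home app builds this kind of trigger with Apple’s HomeKit framework, and developers can create the same thing directly. Here’s a minimal sketch, not my actual setup: the `home`, `sensorTemp` (the monitor’s current-temperature characteristic), and `heaterOn` (an action set that switches the plug on) values are assumed to already exist, and HomeKit characteristics use Celsius, so 32°F becomes 0.

```swift
import HomeKit

// A sketch of a freeze-guard event trigger. `home`, `sensorTemp` and
// `heaterOn` are assumed to exist already; error handling is minimal.
func addFreezeGuard(home: HMHome, sensorTemp: HMCharacteristic, heaterOn: HMActionSet) {
    // Fire when the temperature falls into the range "0°C and below".
    let freezing = HMCharacteristicThresholdRangeEvent(
        characteristic: sensorTemp,
        thresholdRange: HMNumberRange(maxValue: 0))

    let trigger = HMEventTrigger(name: "Well House Freeze Guard",
                                 events: [freezing],
                                 predicate: nil)

    home.addTrigger(trigger) { error in
        guard error == nil else { return print("addTrigger failed: \(error!)") }
        trigger.addActionSet(heaterOn) { error in
            guard error == nil else { return print("addActionSet failed: \(error!)") }
            // Newly added triggers need to be enabled before they run.
            trigger.enable(true) { error in
                print(error.map { "enable failed: \($0)" } ?? "Freeze guard armed")
            }
        }
    }
}
```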

I have a similar dumb heater in my tiny house as well as a window A/C. I might use the same monitor to more accurately control heating and cooling in here. Currently I do that with constant futzing with controls and looking at a simple analog thermometer. It would be an improvement to just have a set temperature to trigger devices.

Lights
I’ve been avoiding purchasing HomeKit-compatible lights because most, such as those from Philips, also required the purchase of a hub. And the cost was a bit much. My reasoning was that if I just pick up smart plugs as they go on sale I can use those for lights or anything else. Cheaper and more versatile. That said, one benefit of the bulbs is that they can be dimmed, which is appealing. So, two weeks ago I picked up one of Sylvania’s Smart bulbs. It works perfectly. I’ll likely get another, but in my tiny house I don’t need that many lights, so two dimmable bulbs will likely be enough. It’s very nice to be able to ask Siri to set the lights to 40% or 20% or whatever. I have an automation that kicks the light on at 15% at my wake-up time. Very nice to wake up to a very low, soft light. With a simple request I can then ask Siri to raise the brightness when I’m actually ready to get out of bed.

Lighting Automations
An hour after sunrise another set of LEDs kicks on for all of my houseplants, which sit on two shelves by the windows. An hour after sunset those lights go off and, at the same time, the dimmable light comes on at 50%.
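HomeKit models these relative schedules as “significant time events” (sunrise or sunset plus an offset). As a rough sketch of the evening half, assuming a hypothetical `plantLightsEvening` action set (LEDs off, dimmable bulb to 50%) already exists:

```swift
import HomeKit

// Sketch: run a scene one hour after sunset each day. `plantLightsEvening`
// is an assumed action set (LEDs off, dimmable bulb to 50%).
func addEveningLightsTrigger(home: HMHome, plantLightsEvening: HMActionSet) {
    let hourAfterSunset = HMSignificantTimeEvent(significantEvent: .sunset,
                                                 offset: DateComponents(hour: 1))
    let trigger = HMEventTrigger(name: "Evening Plant Lights",
                                 events: [hourAfterSunset],
                                 predicate: nil)
    home.addTrigger(trigger) { _ in
        trigger.addActionSet(plantLightsEvening) { _ in
            trigger.enable(true) { error in
                print(error == nil ? "Evening lights scheduled" : "Failed: \(error!)")
            }
        }
    }
}
```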

Home
All of the set-up happens via the built-in iOS Home app. It’s a fairly easy app to use, and it gets better with each new version of iOS. My set-up is pretty simple, but Home is designed to scale up to larger homes with more rooms and devices. In my case, I’ve got the Home app in split screen with Apple Music on my iPad Air 2. It’s on a shelf within easy reach of my usual sitting spot on my futon/bed. While I do most interaction via Siri or automation, it’s nice to have easy visual access. Especially handy for monitoring the well house heater and temperature. Having Music open and ready to play to a speaker via AirPlay is very nice.

AppleTV as Hub
Of course, to really make this work a hub is required. A recent iPad running iOS 10 or one of the newer AppleTVs will work. I’m using the AppleTV because I’ve always got one on. Set-up was easy and I’ve never had to futz with it. The nice thing about this set-up is that I can access my HomeKit devices from anywhere. Whether I’m in town, visiting family, or out for a walk, checking or changing devices is just a couple of taps or a request to Siri.

HomePod
Last is the device that has not arrived yet. My HomePod is set to arrive February 9. I don’t need it for any of this to work but I suspect it will be a nice addition. Controlling things with Hey Siri has always worked pretty well for me, though I suspect it will be even better with HomePod. I’ll find out soon.

Siri and voice first

In a recent episode of his Vector podcast, Rene Ritchie had “voice first” advocate Brian Roemmele on as a guest. Rene is probably my current favorite Apple blogger and podcaster, and Vector is excellent.

As I listened to this episode I found myself nodding along for much of it. Roemmele is very passionate about voice first computing and certainly seems to know what he’s talking about. In regards to Siri, his primary argument seems to be that Apple made a mistake in holding Siri back after purchasing the company. At the time, Siri was an app and had many capabilities that it no longer has. Rather than taking Siri further and developing it into a full-fledged platform, Apple reined it in and took a more conservative approach. In the past couple of years it has been adding capabilities back in, via SiriKit, in what it calls domains.

Apps adopt SiriKit by building an extension that communicates with Siri, even when your app isn’t running. The extension registers with specific domains and intents that it can handle. For example, a messaging app would likely register to support the Messages domain, and the intent to send a message. Siri handles all of the user interaction, including the voice and natural language recognition, and works with your extension to get information and handle user requests.
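To make that concrete, here’s a minimal sketch (not Apple’s sample code) of the extension side for the Messages domain: a handler for INSendMessageIntent. The `sendViaMyService` call is a hypothetical stand-in for whatever messaging code a real app would have.

```swift
import Intents

// A bare-bones SiriKit intent handler for the Messages domain. Siri does
// the voice and language work, then hands the parsed intent to this class.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Let Siri confirm the message text before handling; accept whatever
    // she heard, or ask again if it's empty.
    func resolveContent(for intent: INSendMessageIntent,
                        with completion: @escaping (INStringResolutionResult) -> Void) {
        if let text = intent.content, !text.isEmpty {
            completion(.success(with: text))
        } else {
            completion(.needsValue())
        }
    }

    // Called once Siri has the recipients and content settled.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        guard let recipients = intent.recipients, !recipients.isEmpty,
              let content = intent.content else {
            completion(INSendMessageIntentResponse(code: .failure, userActivity: nil))
            return
        }
        // sendViaMyService(...) is a hypothetical stand-in for the app's own API.
        sendViaMyService(content, to: recipients)
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }

    private func sendViaMyService(_ text: String, to recipients: [INPerson]) {
        // Real app logic would go here.
        print("Sending \"\(text)\" to \(recipients.count) recipient(s)")
    }
}
```

The extension also declares which intents it supports in its Info.plist, which is how Siri knows to route a spoken request to a particular app.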

So, they scaled it back and are rebuilding it. I’m not a developer, but my understanding is that they’ve done this, in part, to allow for a more varied and natural use of language. But as with all things internet and human, people often don’t want to be bothered with the details. They want what they want and they want it yesterday. In contrast to Apple’s handling of Siri we have Amazon, which has its pedal to the floor.

Roemmele goes on to discuss the rapid emergence of Amazon’s Echo ecosystem and the growth of Alexa. Within the context of this podcast and what I’ve seen of his writing, much of his interest and background seems centered on commerce and payment as they relate to voice. That said, I’m just not that interested in what he calls “voice commerce.” I order from Amazon maybe six times a year. Now and in the foreseeable future I get most of what I need from local merchants. And even when I do order online, I do so visually. I would never order via voice because I have to look at the details. Perhaps I would use voice to reorder certain items that need to be replaced, such as toilet paper or toothpaste, but that’s the extent of it.

What I’m interested in is how voice can be a part of the computing experience. There are those of us who use our computers for work. For the foreseeable future I see myself interacting with my iPad visually because I can’t update a website with my voice. I can’t design a brochure with my voice. I can’t update a spreadsheet with my voice. I can’t even write with my voice because my brain has been trained to write as I read on the screen what it is I’m writing.

But this isn’t the computing Roemmele is discussing. His focus is “voice first devices,” those that don’t even have screens, devices such as the Echo and the upcoming HomePod1. And the tasks he’s suggesting will be done by voice first computing are different. And this is where it gets a bit murky.

Right now my use of Siri is via the iPhone, iPad, AppleWatch and AirPods. In the near future I’ll have Siri in the HomePod. How do I make the most of voice first computing? What are these tasks that Siri will be able to do for me, and why is Roemmele so excited about voice first computing? The obvious stuff would be the sorts of things assistants such as Siri have been touted as being great for: asking for the weather, adding things to reminders, setting alarms, getting the scores for our favorite sports ball teams and so on. I and many others have written about these sorts of things that Siri has been doing for several years now. But what about the less obvious capabilities?

At one point in the podcast the two discuss using voice for such things as sending texts. I often use dictation when I’m walking to dictate a text into my phone via Messages, and I see the benefit of that. But dictation, whether it is dictating to Siri or directly to the Messages app or any other app, at least for me, requires an almost different kind of thinking. It may be that I am alone in this. But it is easier for me to write with my fingers on the keyboard than it is to “write” with my mouth through dictation. It might also be that this is just a matter of retraining my brain. I can see myself dictating basic notes and ideas. But I don’t see myself “writing” via dictation.

At another point Roemmele suggests that apps and devices will eventually disappear as they are replaced by voice. At this point I really have to draw a line. I think this is someone passionate about voice first going off the rails. I think he’s let his excitement cloud his thinking. Holding devices, looking, touching, swiping, typing and reading, these are not going away. He seems to want it both ways, though at various points he acknowledges that voice first doesn’t replace apps so much as it is a shift in which voice becomes more important. That I can agree with. I think we’re already there.

Two last points. First, about the tech pundits. Too often people let their own agendas and preferences color their predictions and analysis. The lines blur between their hopes and preferences and what is. No one knows the future, but too many act as if they do. It’s kinda silly.

Second, what seems to be happening with voice computing is simply that a new interface has suddenly become useful and it absolutely seems like magic. For those of us who are science fiction fans it’s a sweet taste of the future in the here and now. But, realistically, its usefulness is currently limited to the fairly trivial daily tasks mentioned above. Useful, convenient and delightful? Yes, absolutely. Two years ago I had to go through all the trouble of putting my finger on a switch, pushing a button or pulling a little chain; now I can simply issue a verbal command. No more trudging through the effort of tapping the weather app icon on my screen, not for me. Think of all the calories I’ll save. I kid, I kid.

But really, as nice an addition as voice is, the vast majority of my time computing will continue to be with a screen. I don’t doubt that voice interactions will become more useful as the underlying foundation improves, and I look forward to the improvements. As I’ve written many times, I love Siri and use it every day. I’m just suggesting that in the real world, adoption of the voice interface will be slower and less far-reaching than many would like.


  1. Actually, the HomePod technically has a screen, but it’s not a screen in the sense that an iPhone has a screen. ↩︎

Panic, Transmit and Keeping My Options Open

I’ve been coding websites since 1999 and doing it for clients since 2002. I started using Coda for Mac when the first version came out, and when Transmit and Coda became available for iOS I purchased both. When I transitioned to the iPad as my primary computer in 2016 those two apps became the most important on my iPad. But no more.

A couple weeks ago Panic announced that they would no longer be developing Transmit for iOS. They’d hinted in a blog post a year or two ago that iOS development was shaky for them. They say, though, that Coda for iOS will continue. But I’m going to start trying alternative workflows. In fact, I’ve already put one in place and will be using it for the foreseeable future. Why do this if Coda still works and has stated support for the future?

I’m not an app developer. I’m also not an insider at Panic. But as a user, I find it frustrating that we are more than three full months past the release of iOS 11 and seven months past WWDC and Panic’s apps still do not support drag and drop. Plenty of other apps that I use do. I find myself a bit irritated that Panic occupies this pedestal in the Apple nerd community. It’s true that their apps are visually appealing. Great. I agree. But how about adding support for important functionality? I really love Coda and Transmit, but I just don’t feel the same about Panic as a company. Sometimes it seems like they’ve got plenty of time and resources for whimsy (see their blog for posts about their sign and fake photo company) and that’s great, I guess. But as a user who depends on their apps, I’d rather they focus on the apps. I’m on the outside looking in and it’s their company to do with as they please. As a user, though, I’ll have an opinion based on the information I have. And though they’ve said Coda for iOS will continue, it’s time to test other options.

I’ve been using FileBrowser for three years, just as a way to access local files on my Mac Mini. I’d not thought much about how it might be used as my FTP client for website management in conjunction with Apple’s new Files app. Thanks to Federico’s recent article on FTP clients I was reminded that FileBrowser is actually a very capable ftp app. So, I set up a couple of my ftp accounts. With this set-up I can easily access my servers on one side of my split screen via FileBrowser and my “local” iCloud site folders in Files on the other side. I really like the feel of it. The Files app is pretty fantastic and being able to rely on it in this set-up is a big plus. It feels more open, which brings me to the next essential element in this process: editing html files.

One of my frustrations with Coda and Transmit was that my “local” files were stuck in a shared Coda/Transmit silo. Nice that they were interchangeable between the two, but I could not locate them in Dropbox or iCloud. With this new set-up I needed a text editor that could use iCloud as local file storage. I’ve got two options that I’m starting with; both have built-in ftp as well as iCloud as a file storage option. Textastic is my current favorite. Another is GoCoEdit. Both have a built-in preview or the option to use Safari as a live preview. So, as of now, I open my coding/preview space and use a split between Textastic and Safari. I haven’t used Textastic enough to have a real opinion about how it feels as an editor compared to Coda’s. But thus far it feels pretty good. My initial impression is that navigation within documents is a bit snappier and jumping between documents using the sidebar is as fast as Coda’s top tabs.

So, essentially, this workflow relies on four apps in split screen mode in two spaces. One space is for file transfer, the other is for coding/previewing. Command-Tab gets me quickly back and forth between them. I often get instructions for changes via email or Messages. Same for files such as PDFs and images. In those cases it is easy enough to open Mail or Messages as a third, slide-over app that I can refer to as I edit, or for drag and drop into Files/FileBrowser.

It’s only been a few days with this new four-app workflow, but in the time I’ve used it I like it a lot. I get drag and drop and synced iCloud files (which also means backed-up files, thanks to the Mac and Time Machine).

Hey Siri, give me the news

Ah, it’s just a little thing but it’s a little thing I’ve really wanted since learning of a similar feature on Alexa. In fact, I just mentioned it in yesterday’s post. We knew this was coming with HomePod and now it’s here for the iPad and iPhone too. Just ask Siri to give you the news and she’ll respond by playing a very brief NPR news podcast. It’s perfect, exactly what I was hoping for. I’ve already made it a habit in the morning, then around lunch and again in the evening.

Alexa Hype

A couple years ago a good friend got one of the first Alexa devices available. I was super excited for them but I held off because I already had Siri. I figured Apple would eventually introduce its own stationary speaker and I’d be fine ’til then. But as a big fan of Star Trek and sci-fi generally, I love the idea of always-present voice-based assistants that seem to live in the air around us.

I think he and his wife still use their Echo every day in the ways I’ve seen mentioned elsewhere: playing music, getting the news, setting timers or alarms, checking the weather, controlling lights, checking the time, and shopping from Amazon. From what I gather that is pretty typical usage for Echo and Google Home owners. That list also fits very well with how I and many other people use Siri, with the exception of getting a news briefing, which is not yet a feature. As a Siri user I do all of those things except shop at Amazon.

The tech media has recently gone crazy over the pervasiveness of Alexa at the 2018 CES and the notable absence of Siri and Apple. Ah yes, Apple missed the boat. Siri is practically dead in the water, or at least trying to catch up. It’s a theme that’s been repeated for the past couple years. And really, it’s just silly.

Take this recent story from The Verge reporting on research from NPR and Edison Research:

One in six US adults (or around 39 million people) now own a voice-activated smart speaker, according to research from NPR and Edison Research. The Smart Audio Report claims that uptake of these devices over the last three years is “outpacing the adoption rates of smartphones and tablets.” Users spent time using speakers to find restaurants and businesses, playing games, setting timers and alarms, controlling smart home devices, sending messages, ordering food, and listening to music and books.

Apple iOS devices with Siri are all over the planet, rather than just the three or four countries the Echo is available in. Look, I think it’s great that the Echo exists for people who want to use it. But the tech press needs to pull its collective head out of Alexa’s ass and find the larger context and a balance in how it discusses digital assistants.

Here’s another bit from the above article and research:

The survey of just under 2,000 individuals found that the time people spend using their smart speaker replaces time spent with other devices including the radio, smart phone, TV, tablet, computer, and publications like magazines. Over half of respondents also said they use smart speakers even more after the first month of owning one. Around 66 percent of users said they use their speaker to entertain friends and family, mostly to play music but also to ask general questions and check the weather.

I can certainly see how a smart speaker is replacing radio, as 39% reported in the survey. But to put the rest in context, it seems highly doubtful that people are replacing the other listed sources with a smart speaker. Imagine a scenario where people have their Echo playing music or a news briefing. Are we to believe that they are sitting on a couch staring at a wall while doing so? Doing nothing else? No. The question in the survey was: “Is the time you spend using your Smart Speaker replacing any time you used to spend with…?”

So, realistically, the smart speaker replaces other audio devices such as the radio, but that’s it. People aren’t using it to replace anything else on that list. An Echo, by its very nature, can’t replace things which are primarily visual. As fantastic as Alexa is for those who have access to it, for most users it still largely comes down to the handful of uses listed above. In fact, in another recent article on smart speakers, The New York Times throws a bit of cold water on the frenzied excitement: Alexa, We’re Still Trying to Figure Out What to Do With You

The challenge isn’t finding these digitized helpers, it is finding people who use them to do much more than they could with the old clock/radio in the bedroom.

A management consulting firm recently looked at heavy users of virtual assistants, defined as people who use one more than three times a day. The firm, called Activate, found that the majority of these users turned to virtual assistants to play music, get the weather, set a timer or ask questions.

Activate also found that the majority of Alexa users had never used more than the basic apps that come with the device, although Amazon said its data suggested that four out of five registered Alexa customers have used at least one of the more than 30,000 “skills” — third-party apps that tap into Alexa’s voice controls to accomplish tasks — it makes available.

Now, back to all the CES-related news of Alexa being embedded in, or compatible with, new devices. I’ve not followed it too closely but I’m curious how this will actually play out. First, of course, there’s the question of which of these products actually make it to market. CES announcements are notorious for being just that: announcements, for products that never ship or don’t ship for years. But regardless, assuming many of them do, I’m just not sure how it all plays out.

I’m imagining a house full of devices, many of which have microphones and Alexa embedded in them. How will that actually work? Is the idea to have Alexa as an agent that listens and responds, as she currently does in a speaker, but in all of the devices, be they toilets, mirrors, or refrigerators? If so, that seems like overkill and unnecessary cost. Why not just the smart speaker hub that then intelligently connects to devices? Why pay extra for a fridge with a microphone if I have another listening device 10 feet away? This begins to seem a bit comical.

Don’t get me wrong, I do see the value of increasing the capabilities of our devices. I live in rural Missouri and have a well house heater 150 feet away from my tiny house. I now have it attached to a smart plug and it’s a great convenience to be able to ask Siri to turn it off and on when the weather is constantly popping above freezing only to drop below freezing 8 hours later. It’s also very nice to be able to control lights and other appliances with my voice, all through a common voice interface.

But back to CES, the tech press and the popular narrative that Alexa has it all and that Siri is missing out, I just don’t see it. A smart assistant, regardless of the device it lives in, exists to allow us to issue a command or request, and have something done for us. I don’t yet have Apple’s HomePod because it’s not available. But as it is now, I have a watch, an iPhone and two iPads which can be activated via “Hey Siri”. I do this in my home many times a day. I also do it when I’m out walking my dogs. Or when I’m driving or visiting friends or family. I can do it from a store or anywhere I have internet. If we’re going to argue about who is missing out, the Echo and Alexa are stuck at home while Siri continues to work anywhere I go.

So, to summarize: yes, stationary speakers are great in that their far-field microphones work very well for a currently limited set of tasks which are also possible with the near-field mics found in iPhones, iPads, AirPods and the AppleWatch. The benefit of the stationary devices is accurate responses when spoken to from anywhere in a room. A whole family can address an Echo, whereas Siri on a personal device can only be addressed by an individual, who has to be near the phone to do so or, in the case of wearables such as AirPods or the AppleWatch, has to be wearing them. By contrast, these stationary devices are useless when we are away from home, while our mobile devices still work.

My thought is simply this: contrary to the chorus of the bandwagon, all of these devices are useful in various ways and in various contexts. We don’t have to pick a winner. We don’t have to have a loser. Use the ecosystem(s) that works best for you. If it’s Apple and Amazon, enjoy them both and use the devices in the scenarios where they work best. If it’s Amazon and Google, do the same. Maybe it’s all three. Again, these are all tools, many of which complement each other. Enough with the narrow, limiting thinking that we have to rush to pronounce a winner.

Personally, I’m already deeply invested in the Apple ecosystem and I’m not a frequent Amazon customer, so I’ve never had a Prime membership. I’m on a limited budget so I’ve been content to stick with Siri on my various mobile devices and wait for the HomePod. But if I were a Prime member I would have purchased an Echo because it would have made sense for me. When the HomePod ships I’ll be first in line. I see the value of a great sounding speaker with more accurate microphones that will give me an even better Siri experience. I won’t be able to order Amazon products with the HomePod, but I will have a speaker with fantastic audio playback and Siri, which is a trade-off I’m willing to make.

Brydge Keyboard Update

It’s been almost two months of using the Brydge keyboard and it seems to be holding up very well in that short time. The only defect I’ve discovered is that the rightmost edge of the space bar does not work; my thumb has to be at least a half inch over to activate a press. Not a deal breaker, but it is something I’ve had to adjust to.

Also, something positive that I’ve discovered: the Brydge hinges rotate all the way to a position parallel with the keyboard. In other words, the iPad rotates back to no angle at all; it just sort of opens all the way, level with the keyboard. I initially thought this would be useless. Why would I ever want to do this?

As it turns out, it does indeed come in very handy. When I’m lounging on the futon to read I can put the iPad in this position and let the keyboard, resting in my lap or on a pillow next to me, serve as a stand to elevate the iPad to eye level. I don’t have to look down towards my lap as one does with a standard laptop. Instead, the iPad seemingly floats in front of my face. It’s actually kind of fantastic and a very comfortable position for reading. And interestingly, it balances perfectly. I barely have to hold the iPad or the keyboard. It’s kinda weird actually. I just lightly grasp the pair right above the keyboard and use my thumb to scroll. I can also easily shift my right hand down to the arrow keys to scroll via keyboard while browsing or reading. If I need to do some real typing the motion to fold the two into a normal laptop position is fluid and natural, taking less than a second. No doubt this has been a very nice surprise feature.

One Year with Apple AirPods

It was a year ago that Apple began selling the AirPods, and they sold out instantly. In fact, it was difficult to get them for months as Apple struggled to keep up with demand. Production finally caught up in mid-summer, only to fall behind again in recent weeks as holiday demand surged. I ordered mine within minutes of them going on sale so I was lucky enough to get in on the first shipment. I’ve worn them many times a day, every day, since they arrived.

It’s been said by many over the past year that the AirPods are their favorite Apple product in recent memory. There’s no doubt, they are a delight to use. For anyone who enjoys music or podcasts on the go, especially those with an iPhone or Apple Watch, they are well worth the cost.

A few highlights:

  • They stay in my ears very well and many report the same thing. Even if the fit is not perfect, because there is no wire tugging, they tend to stay put.
  • The batteries last 3-4 hours and recharge very quickly in the case, which itself holds 3-4 days’ worth of charge.
  • Siri works fantastically.
  • Phone calls are great. The mic does a great job of cancelling out background noise providing clean audio for the person I’m talking to.
  • With the occasional oddball exception, they pair up quickly with whatever device I’m trying to use. Usually iPhone or AppleWatch, sometimes an iPad.
  • I’m often streaming music from my iPhone to the AppleTV. When I head out for a walk I pop the AirPods in and the music switches to them with no action from me. That’s the kind of magic that makes me smile.
  • They rarely drop the connection and have a pretty fantastic range. I often step outside my tiny house, forgetting the phone inside (sometimes leaving it deliberately) and can take care of little tasks such as refilling bird feeders, watering plants on the deck, etc. A 30 foot range is pretty typical. At about 40 feet they start to drop a bit.
  • I use them a lot with Siri to control audio, especially in the winter when my phone is in a pocket, I’m wearing gloves and the watch is under layers of clothing. I often tap through hats and hoods to activate Siri and it works great to change the artist, repeat a song, skip forward, etc. Same for answering or initiating a call.
  • I have not lost them. They are in my ears or in the case. The case is on a shelf (they have their spot) or in my pocket. Basically I treat them the same way I treat other little things such as my keys.