This is a transcript. For the video, see How Ask GeorgiaGov's content speaks for itself - a chat with Preston So.

[00:00:00] Michael Meyers: Hello, and welcome to Tag1TeamTalks, the podcast and blog of Tag1 Consulting. Today, we have a really special episode. I'll be talking with Preston So, Tag1's editor in chief, about his awesome new book, Voice Content and Usability, published by A Book Apart.

[00:00:16] I'm Michael Meyers, the managing director at Tag1. And like I said, I'm super excited about today's episode. Preston is one of the leading subject matter experts in voice content. This book is the first book ever written on the topic. If you create content, if you're interested in voice communication, you're going to love today's episode.

[00:00:34] This is part two. If you haven't already, please check out part one, where Preston gave us an overview of voice content. We talked about content strategy, information architecture, usability, and the future of voice content. We covered a lot.

[00:00:52] In part two, we're going to do a mini case study on the first voice interface that was built for the residents of Georgia. Preston, congratulations on the book. Welcome back, and I really appreciate your time. What is the project?

[00:01:07] Preston So: It's a really great question.

[00:01:08] You know, I think one of the things that I want to start off with is sharing that Ask GeorgiaGov is the case study that underpins and undergirds the entire book that I wrote, Voice Content and Usability, available from A Book Apart right now. And one of the things that's really interesting about it is the fact that it's one of the first ever content-driven or informational voice interfaces that really exists in the Alexa ecosystem.

[00:01:30] And it's also the first standalone voice interface built for residents of the state of Georgia. One of the most interesting aspects of Ask GeorgiaGov is that it's part of Georgia's whole effort to become more accessible for citizens, for residents of Georgia: folks who might not have a computer at home, who might have trouble navigating a website because they might not necessarily be as tech savvy, or who might be disabled.

[00:02:02] And this project was a really interesting experiment and a proof of concept, a pilot that really demonstrated the potential of voice interfaces, and especially their assistive potential for folks who really are interested in accessibility and some of these topics that are very near and dear to a lot of our hearts.

[00:02:20] So Ask GeorgiaGov is basically an Alexa skill that allows anyone to ask questions related to state government. So three of the most popular topics were essentially things like registering a vehicle, state sales tax, or how to renew a driver's license. A lot of these really common questions are sometimes a little bit hard to find on these government websites and might involve calling an agency, and they might be something that people feel a whole lot more comfortable asking about in the comfort of their own home to their Amazon Alexa, as opposed to, let's say, going on the website or driving down to a county office.

[00:03:03] Michael Meyers: So, much more accessible and potentially much more efficient. You mentioned that it was an Alexa skill. Could you tell us a little bit more about how it works? You know, some of the technology involved, what's on the backend, how does that skill work?

[00:03:15] Preston So: Sure. So the skill was built back in mid-2016 to 2017, so this is actually a fairly old voice interface when it comes to the ways that voice interfaces work today.

[00:03:26] Amazon had fairly recently before that, in 2015 or so, announced and launched the Alexa Skills Kit, which is, for those who are familiar with the Alexa ecosystem, the primary way that you create Alexa skills, or applications that are installed onto a voice interface like Alexa. And I have a whole article, written with my colleague Chris Hamper at Acquia Labs, that talks about how we actually made these integrations happen.

[00:03:56] We actually connected Alexa to the website, which is a Drupal website, and that involved using the Alexa Drupal module, originally built by Jakub Suchy. It was one of those interesting projects that involved this marriage between the Drupal CMS and Amazon Alexa, two technologies that really don't play in the same sandbox when it comes to actually putting together these integrations and synthesizing a lot of these disparate technologies. But Alexa is a really interesting ecosystem, because Amazon was really the first to kind of say, hey, we really want people to build on this and build whatever they want on this.

[00:04:36] And one of the things that we noticed at the very beginning is that a lot of these Alexa skills were predominantly transactional, or task-led. To use a term from Amir Shevat, the author of Designing Bots: Creating Conversational Experiences, task-led or transactional means that these bots were primarily for the purposes of helping you with certain tasks, like ordering a pizza, checking your credit card balance, or booking a hotel room or a flight. But they weren't so much amenable to this other category of ideas, namely informational, content-driven, or topic-led conversational interactions: let's discuss the new movie Cruella, let's talk about some of these new shows that are coming out on Netflix, or what are some of the things that I should be aware of as I get vaccinated and want to begin to return to some form of a normal life.

[00:05:26] A lot of those kinds of voice interfaces weren't really around back in the days of 2016 and 2017. And so a lot of the challenge of this implementation was: how do we actually build one of the first ever content-driven voice interfaces, one that's primarily about delivering content as opposed to doing something on the user's behalf?

[00:05:49] So we hooked up Amazon Alexa to the Drupal site using the Alexa Drupal module. And we did a whole bunch of really interesting custom work on both ends of the equation, on the Alexa side and on the Drupal side, to really make this work so that it would be really nice and convenient for the Georgia folks who were working on this content, as well as for those on the user side, interacting with the Alexa interface itself.
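(Ed: to make the architecture concrete, here is a minimal Python sketch of the shape of such an integration: a handler that resolves a spoken topic against CMS content and returns speech in the Alexa Skills Kit response format. This is not the project's actual code, which used Drupal and the Alexa module; the content, topic names, and phone numbers below are made-up placeholders.)

```python
# Stand-in for content that would really be fetched from the Drupal
# site, e.g. via a search endpoint exposed by the CMS. All entries are
# illustrative placeholders, not real GeorgiaGov content.
CMS_CONTENT = {
    "vehicle registration": {
        "summary": "Vehicles are registered through your local county tag office.",
        "phone": "1-800-555-0100",  # placeholder, not a real agency line
    },
    "driver's license": {
        "summary": "Driver's licenses can be renewed online or in person.",
        "phone": "1-800-555-0101",  # placeholder
    },
}

def handle_topic_intent(topic: str) -> dict:
    """Build an Alexa-style JSON response for a spoken topic query."""
    entry = CMS_CONTENT.get(topic.lower())
    if entry is None:
        speech = "Sorry, I couldn't find anything about that topic."
    else:
        speech = f"{entry['summary']} For more information, call {entry['phone']}."
    # Shape of a basic Alexa Skills Kit response: plain-text speech,
    # ending the session after the answer is read out.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

In the real project, the lookup step was the Drupal site's job, which is exactly why the CMS integration mattered: the voice interface stays a thin delivery channel while the content lives in one place.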

[00:06:14] Michael Meyers: Now, you mentioned that this was pretty early in voice interfaces, with really limited integration into CMSs. What are some of the challenges that that presented?

[00:06:27] Preston So: One of the biggest challenges, I think, as I've written several times in CMSWire, especially recently, is that the content management system has been really rooted in the web for a long time. And that means that a lot of its approaches, a lot of its paradigms, a lot of its features are really oriented toward the web. One example of this that I'll just mention very briefly: well, how do you preview a piece of content that you're writing in a voice interface, right?

[00:06:57] It's not such an easy thing to do. And one of the things that I think is really challenging about the CMS world is that a lot of this functionality that makes it really easy to connect content up with a voice interface simply isn't there. Luckily, we had the ability to stand on the shoulders of a lot of the pre-existing open source work that was available in the Drupal ecosystem, namely the Alexa module.

[00:07:20] But I think for a lot of content management systems, it's still a big question mark. A lot of folks still have to rely on developers who are very well versed in both the CMS technology and the voice technology that you're integrating with in order to make these things happen. The biggest challenge that we faced, though, in terms of the development process, was debugging and troubleshooting. As folks at Tag1 know very well, when you integrate across a bunch of different systems that are all communicating with each other, and that don't necessarily have a really clear single output when errors or problems surface, troubleshooting where exactly an issue surfaced, or debugging exactly where something might've gone wrong, is a very difficult proposition. And it involves unearthing certain things that might not necessarily be the responsibility of the CMS or the responsibility of the voice interface.

[00:08:15] So, you know, hats off to my colleague at the time, Chris Hamper, who was really the architect and the engineer who put together a lot of these pieces and figured out a lot of ways to debug and troubleshoot the issues that surfaced. But one of the things that I'll say is that every implementation is different. Even today, with these cross-platform frameworks and tools that have now emerged, like Botsociety and Dialogflow, there are still a lot of really tough challenges. And I think it's still a world where you really have to be prepared for a lot of uncertainty and a lot of risk.

[00:08:49] One thing that I would not say for example, is that voice content is a realm that you can just jump into thinking that it's going to be just like a traditional website implementation because it's really not that easy. And the technologies really don't communicate with each other quite as well as you might expect quite yet.

[00:09:04] Michael Meyers: Yeah. I mean, frankly, it's amazing how long CMSs have struggled with content preview systems just for web content; I can't imagine where it's at for voice content. And thinking a lot about how you deploy and test, that's really interesting, you know, how DevOps plays into this.

[00:09:22] How did you guys measure success for this project? What were your goals in creating this? Like you said, reaching more Georgians. Were you able to measure that and look back and say, you know, we hit those goals?

[00:09:37] Preston So: Yeah, so really interesting question. And I think it's one that a lot of CMOs and a lot of marketing folks and CTOs are really curious about.

[00:09:44] Obviously benchmarking is really important. Metrics are very important. Logging and analytics are very important. All of those are things I talk about in my book Voice Content and Usability. I think one of the really interesting things about Georgia was that they were willing to work with us in terms of saying, hey, you know, this is kind of a first foray into conversational interfaces.

[00:10:01] It was the first conversational interface that Georgia had built or worked on. But we had certain expectations and certain needs that we wanted to meet. One of the goals, after all, one of the missions of this entire voice interface, was to make it easier for a lot of Georgians who might not have access to a computer to be able to really navigate this voice content in ways that made sense to them.

[00:10:22] So, you know, we did a couple of different things. The first is that we did a full round of usability testing (several rounds of usability testing, as a matter of fact), and we also provided a really ample set of logs and analytics and reports for the folks at Georgia to be able to really introspect the results and see what they were looking at.

[00:10:40] And I think there are certain success criteria that you see emerge from this, right? One of the things that we did for the team at Georgia was to enable them to cross-compare. Because I think this is a really important consideration for a lot of folks: when you're starting to deal with multichannel or omnichannel content, where you have to start to measure the performance of content across multiple touchpoints, you really want to have that sense of, okay:

[00:11:04] How is the content performing on the website versus how is the content performing on the voice interface? And we did all of that work for Georgia by implementing a logging mechanism that allowed us to collect not only the search results and search queries that were being issued on the website, but also those that were being issued through Alexa.
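(Ed: the cross-channel logging idea can be sketched in a few lines of Python: record every query together with the channel it came from, then compare the top queries per channel. The channel names and API here are illustrative, not the project's actual implementation.)

```python
from collections import Counter

# In-memory log; the real system would persist this in the CMS database.
query_log = []

def log_query(channel, query):
    """Record one search query from a given channel ('web' or 'voice')."""
    query_log.append({"channel": channel, "query": query.strip().lower()})

def top_queries(channel, n=3):
    """Return the n most common queries for one channel."""
    counts = Counter(e["query"] for e in query_log if e["channel"] == channel)
    return counts.most_common(n)
```

Comparing `top_queries("voice")` against `top_queries("web")` is what makes the divergence Preston describes next visible at a glance.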

[00:11:20] The results that came about from that were frankly, very interesting, because what we actually found is that the search queries that people were using on the website were very different and completely distinct from the sorts of search queries that we were finding on the Alexa device.

[00:11:37] And that's actually a bit of a success in itself, right? Because you don't necessarily want people to be searching for the same things, per se, because that would indicate that you've potentially got some kind of mismatch in terms of the performance, or how you've written the content, or how the content is structured.

[00:11:52] So what we found on the Alexa devices is that vehicle registration, driver's licenses, and state sales tax were the three most popular topics on the voice interface, Ask GeorgiaGov. But one of the things that was really important to us as well was that the usability, the ability for people to use this interface, was very good.

[00:12:11] Of course, one of the issues back in those days is that a lot of custom-built Alexa skills didn't really perform very well. And a lot of that was chalked up to the restrictiveness of how these Alexa interfaces had to be built, as well as some of the issues that Amazon Alexa had at the time in terms of interpreting certain speech.

[00:12:28] But, you know, there were a couple of pieces of data that we found that were really interesting. The first is that nearly 80% of people, which is quite high for an Alexa interface back in 2017, actually got to the content they were looking for. And about 70% of those interactions also resulted in acquiring a phone number that they could call for more information, which is another part of the voice interface that we wanted to enable, because a lot of folks might want to call somebody afterwards and ask more questions about what they're actually looking at.

[00:12:59] One of the things that was really interesting, though (and this is a story that I share in the book, Voice Content and Usability, as well), is that oftentimes the issues that come about in measuring performance aren't the fault of the design of the interface, or the fault of, let's say, the usability testing itself, or the performance of the interface.

[00:13:17] It might actually be a deeper issue. One example of this: we did this logging, we had these reports in Drupal, and alongside the web searches you would see the search queries that people input into Alexa. And we found this one search query that kept popping up over and over again, showing up in the logs as Lawson's, as in the name: L-A-W-S-O-N, apostrophe, S.

[00:13:44] And we were like, what is this? Who is searching for this really random proper noun, or this name, in a voice interface that's about Georgia state government? And we racked our brains, and it was this whole meeting, this whole kind of discussion.

[00:14:01] And eventually one of the native Georgians in the room said, you know, I think that's actually somebody saying "license" in a rural Georgia drawl, and Alexa is just not picking it up. And it was fifteen times in a row. You know, this person really had a lot of trouble making themselves understood to Alexa.

[00:14:20] And this is an example of a situation where voice content is still this very new area. Voice interfaces are still very much in their infancy. And a lot of times, the problems and issues that you encounter really aren't the fault of a designer or an engineer or the actual thing you're building.

[00:14:36] It's really down to some of the deeper issues that are at the root of these interfaces: for example, the fact that Amazon Alexa can't understand somebody who's speaking Georgian English. So that was a really interesting measure of the performance of this interface, and an example of how you can't necessarily assign blame to yourself for some of these things.

[00:14:57] It's not like web content, where it's pretty clear where things end up and how you can control some of these things. With voice content, this whole new area, you really can't necessarily chalk it up to something that you might've done. And it's an example of how a lot of these conversational interfaces and voice interfaces still can't really beat humans at our own game of conversation quite yet.
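(Ed: one pragmatic mitigation for misrecognitions like "Lawson's" for "license" is to phonetically match the transcription against the skill's known topic vocabulary before giving up. The sketch below uses a simplified Soundex encoding; it is an illustration of the idea, not something the Ask GeorgiaGov project necessarily did.)

```python
def soundex(word):
    """Simplified Soundex: first letter plus up to three digit codes."""
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    word = "".join(c for c in word.lower() if c.isalpha())
    if not word:
        return ""
    out = word[0].upper()
    prev = codes.get(word[0])
    for ch in word[1:]:
        code = codes.get(ch)
        if code and code != prev:
            out += code
        prev = code
    return (out + "000")[:4]

# Hypothetical topic vocabulary for illustration.
KNOWN_TOPICS = ["license", "vehicle registration", "sales tax"]

def resolve_topic(transcription):
    """Map a possibly garbled transcription onto a known topic, or None."""
    for word in transcription.split():
        for topic in KNOWN_TOPICS:
            if any(soundex(word) == soundex(t) for t in topic.split()):
                return topic
    return None
```

Pleasingly, "Lawson's" and "license" share the Soundex code L252, so this kind of fallback would have caught exactly the failure Preston describes.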

[00:15:20] Michael Meyers: Wow. A lot to pick up on, and they're getting better and better. It's pretty amazing. One of the things that just really resonated with me: we talked in part one, which folks should check out, about the effort and energy in creating and maintaining content sets, and there's real value here.

[00:15:39] You know, if people are searching for and finding different content via the voice interface than they are via, say, another channel like your website, that lends a tremendous amount of value and reason to invest in that and create that. And that's really great to see. There's so much I wish that we could talk about, but we're out of time and we need to wrap up. This was really great.

[00:16:03] I really appreciate you joining us, Preston. I know you're super busy. For the folks that are listening, like I said, please check out part one, where we talked about an overview of voice content, information architecture, usability, and the future, where all this is going. There's a tremendous amount of things that Preston mentioned.

[00:16:19] We put all the links in the show notes, check it out. If you liked this talk, please remember to upvote, share, and subscribe. You can check out our past team talks, and as always, we'd really appreciate your feedback and your topic suggestions. You can reach us at, that's tag, the number one, dot com. Thank you so much for tuning in. Until next time, take care.