Voice-Activated Signage Explained

EPISODE 39 | Guest: Trey Hicks, chief sales officer, Visix, Inc.

Interactivity is becoming ubiquitous in digital signage, and today there’s a new way to interact with the screen: voice interface. Yes, just talk to the screen, ask for what you want to see, and it shows you that content.

This hands-free method is perfect for today’s audiences because it mimics smart speakers and other digital assistants we already use at home. And at a time when things like germ transmission are on everyone’s minds, it offers a hygienic way to get what you need without undue risk.

  • Learn about increasing engagement and lowering health risks with voice-activated signage
  • Understand the difference between Natural Language Processing and speech recognition
  • Hear the easy 1-2-3 of setting up voice interaction for any screen
  • Think about adding voice to meeting room signs
  • Make video walls even more impressive with voice
  • Explore what the future of interactivity may hold

Subscribe to this podcast: Podbean | Spotify | Apple Podcasts | YouTube | RSS

See voice-activated digital signage in action: https://youtu.be/wGpkRwr5AqU


Transcript

Derek DeWitt: There’s an old Chinese proverb that says, “The tongue can paint what the eyes can’t see.” That sounds almost like a perfect metaphor for podcasts: reaching out electronically across distances and spaces to you, the listeners; hopefully affecting you in some way (some positive way) and maybe even encouraging you to act or behave differently.

To extend the metaphor, digital signage is no different. There’s a new interface in the game, and that is voice-activated signage, so we’re going to talk about that today. To that end, I am here with Trey Hicks, chief sales officer for Visix. Hello, Trey.

Trey Hicks: Hello, Derek. How are you today?

Derek DeWitt: I’m pretty good. Thank you for coming on the podcast and talking to me today.

Trey Hicks: Thank you. I’m glad to be here.

Derek DeWitt: Excellent. And thank all of you for listening.

Derek DeWitt: So, yeah, hey, Trey. We’re really taking the whole social distancing thing pretty far here, don’t you think?

Trey Hicks: Yeah, a couple of thousand miles, I guess.

Derek DeWitt: Yeah, it’s something like 4,000 miles or something. That’s pretty, pretty distant. You are in Atlanta, Georgia. I’m here in Europe, in Prague, but we’re using our voices and electronic devices to talk about things.

Trey Hicks: Very appropriate for a podcast about voice today.

Derek DeWitt: Yeah, that’s exactly right. So, the first thing I want to get at: we’ve talked a lot in the past here about interactivity. You know, how we have interactive screens in our pockets, we use them every day, and how it’s getting to the point where every screen we encounter, we sort of expect some kind of interactivity. And when we don’t have that kind of interactivity, we just kind of pooh-pooh it. You know, it seems very old-fashioned.

Now we move it to the next level, which is voice-activated signage. I mean, this is interactivity, right? But maybe we go back a little bit further first, and maybe you could just give us a quick rundown from your perspective on, say, the benefits of interactive versus static signage in general.

Trey Hicks: Sure. Yeah. Glad to. You know, and you mentioned, you know, our smartphones. Steve Jobs said that the iPod gave us a thousand songs in our pocket. You know, the smartphone kind of really gives us the world in our pocket.

The exciting thing about adding voice to digital signage, so that you can use your voice to request content and information from the screen, is that we’re going from a static screen that has kind of been scheduled to show you a set cadence of information… You know, in the past, you just kind of had to stand in front of the screen and wait for the information that you’re hoping for, or information that’s relevant to you, to come up on screen.

Derek DeWitt: Yeah. I always think of those sushi restaurants, you know, with the sushi coming by on the boat, as you sit at the counter and you’re like, “Oh, I just can’t wait for the one that I like, you know, the hamachi to come on by, because that’s the one I want.”

Trey Hicks: That’s right. That’s right. Of course, when we added interactivity to digital signage, then that enabled us to, like with our smartphones, just go get the information that we want to see; just pick it, choose it, see a menu of content and information, and be able to go right to it.

And, Derek, you touched on the fact that, because of our smartphones, we are used to being able to get to that information anytime we want to. As I said, we’re walking around with the world in our pocket. So interactivity gives that to us with digital signs, you know, which are more often than not mounted on the wall. And what’s really exciting about voice is, it gives us another way to interact with those screens without even touching the screens.

Derek DeWitt: I don’t know. Would you say that it’s a more natural way to interact with the devices that inform us?

Trey Hicks: I think it can be, you know, for sure. You know, many of us have digital assistants in our homes and you know, we’re getting used to those. At my home we use them to turn on the lamps, or to set the temperature in the family room, you know, that kind of stuff. It’s so convenient.

So to be able to talk to digital signs and ask for directions, or ask what the weather is like right now because you’re about to go outside, or you’re about to drive home and you want to see the traffic… Having the convenience of being able to talk to the screen so that it can help you out, just like Alexa might in your home, is opening up a whole new world for digital signage and making it much more useful to passersby.

Derek DeWitt: Yeah. Like you mentioned, I also have a smart speaker at home and I notice that it saves me time. Like, for example, if I wonder, “Oh, I wonder what time it is in Tokyo.” I just ask it and I get the answer immediately. Instead of me having to walk across the room, get my phone, wake it up, go to the Google app, type it in and then get my answer. I mean, it’s not a ton of time, but still the voice interface saves me time.

Trey Hicks: Absolutely. Yes. You know, in this day and time, and for good reasons, we’re concerned about germ transmission. So, like with our phones, if we have another option for interactivity beyond touch, where we can talk to the screen, then that gives us a way to pull up helpful information without even walking all the way up to the screen. We can just talk to it and ask it for a floor map or for wayfinding information, so that we can find our doctor in a hospital or a healthcare clinic.

Derek DeWitt: Yeah. You mention hospitals and healthcare clinics. I mean, obviously you don’t want to be wiping down the screen every few minutes. And look, I know a lot of people out there have kids and that’s great, kids are wonderful. But they’re also little disease vectors. And you know, when kids get access to interactive screens, those screens have a tendency to get kind of grimy, from my perspective.

Trey Hicks: They do. Yeah. And who has time, to your point, to go out and wipe the screens once an hour? By bringing voice to the equation, you know, what could be called a voice user interface or voice-activated signage, we can now use our voice in that healthcare situation, you know, to ask for a directory, a floor map, or simply how to get to a cafeteria.

And the use cases quickly go beyond healthcare. In the corporate world there’s information like directories, wayfinding, schedules (you know, certainly a lot of meeting schedules that people are wanting to see; they’re looking for their meeting rooms), KPIs, dashboards, and, you know, even entertaining content. You know, whether you want to check sports scores, see the latest news, all that information is available to you just with an ask.

Derek DeWitt: And I mean, is it a matter of, you’ve already sort of instructed the software to listen for certain keywords? So when you say those words, it then sort of thinks, “Oh, okay, now I’m supposed to show this.” I mean, you can’t really walk up and have a free form conversation with it. Get advice on dating, things like this. Not yet, anyway.

Trey Hicks: With Visix, there are two different ways that we are approaching it. One is with our voice user interface that we’re using for custom interactive content like wayfinding, donor boards, directories, that kind of thing. And you know, when you look at what voice user interfaces like that typically do, they are using Natural Language Processing via the cloud to process, or to understand, the content or information that you’re asking for. So, they’re processing your ask, and then once they understand it, they’re putting the information that you’re asking for on screen.

Derek DeWitt: Right. Provided they’ve been sort of linked together at the backend.

Trey Hicks: Yes. Yeah. A voice user interface like that, one that is using Natural Language Processing, has the added ability to understand multiple phrases that are asking for the same thing. If we just want to see what the weather forecast is for this afternoon, you know, we could say, “What’s the weather like?”, “Is it going to be hot today?”, “Is it going to rain?” You can use different kinds of phrases to get that same information.

Derek DeWitt: And when you’re setting it up, do you have to encode every, or guess, every possible permutation of that question, every different way it could be asked, or is it using this Natural Language Processing to sort of guesstimate your intent?

Trey Hicks: Well, there are some limitations there with digital signage because, you know, unlike your smartphone that is tied into multiple apps and directly to the internet and all that stuff, for digital signage you do have to take into consideration what information you actually have available to the user.

Any digital sign may be ready to provide a passerby, you know, 15 different things on screen: news, weather, directories, wayfinding, a list of doctors, whatever that may be, but you do have to match up the possible phrases and voice requests with those content destinations, if you will. So, you do have to figure that out.

So, there are two different approaches. One approach is to use Google Cloud, Amazon or other cloud-based Natural Language Processing systems to process phrases. With that, you’re going to need an internet connection for that digital sign pretty much 24/7. And you do have the additional burden of letting the system learn what phrases people want to use for the available content. And when you go that direction, you know, usually there are some labor hours involved to get everything set up and tuned.
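
To make that cloud NLP approach a little more concrete, here is a toy Python sketch of the intent-matching idea Trey describes, where several different phrasings resolve to the same request. The intent names, example phrases and scoring are purely illustrative stand-ins for what a cloud Natural Language Processing service would do; this is not Visix’s software or any provider’s actual API.

    # Toy stand-in for cloud-based intent matching: several phrasings map to one intent.
    # A real deployment would send the spoken phrase to a cloud NLP service; here we
    # approximate the idea with simple word overlap so the concept is easy to see.
    INTENT_EXAMPLES = {
        "weather": [
            "what's the weather like",
            "is it going to be hot today",
            "is it going to rain",
        ],
        "wayfinding": [
            "where is the cafeteria",
            "show me a floor map",
        ],
    }

    def words(text):
        """Normalize a phrase into a set of lowercase words."""
        return set(text.lower().replace("?", "").replace("'", " ").split())

    def classify(utterance):
        """Return the best-matching intent, or None if nothing matches well enough."""
        spoken = words(utterance)
        best_intent, best_score = None, 0.0
        for intent, examples in INTENT_EXAMPLES.items():
            for example in examples:
                score = len(spoken & words(example)) / len(words(example))
                if score > best_score:
                    best_intent, best_score = intent, score
        return best_intent if best_score >= 0.5 else None

    # All three phrasings resolve to the same "weather" intent.
    for phrase in ["Is it going to rain?", "What's the weather like?", "Will it be hot today?"]:
        print(phrase, "->", classify(phrase))

The real cloud services learn and tune these mappings for you, which is where the labor hours Trey mentions come in.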

Now there’s another approach to using voice with digital signage. And that is more of an approach where we’re just recognizing keywords as you speak to the sign, or to the media player that’s driving the digital sign.

Derek DeWitt: Right. And this is what you guys call the Voice Recognizer Widget, yeah?

Trey Hicks: That’s right. And so, what’s great about the Voice Recognizer Widget is that we were able to build it into the media player. It’s self-contained. It’s leveraging voice recognition technology that’s built into the operating system, into the OS. So, this means that the media player can listen for your voice and listen for keywords, without a dependency on an internet connection.

And it’s really easy to set up. First, you set up what you want your wake words to be for the digital sign: “Hey, sign” or “Show me”. Those wake words tell the media player that you’re about to request information or content.

And the second part is that you put in those keywords for content and for information. So your keyword for showing the forecast for this afternoon may simply be “weather” or “forecast”. And once you put that in, then you just pick what content in your digital signage you want to display on screen; in this case, it’s weather. You simply need your wake words and your keywords for the kind of content that you want to make available by voice. And that’s it.

And because this approach with the Voice Recognizer Widget does not require an internet connection, it’s also more secure. Because in this case, since we’re recognizing or listening for certain words that you may say as you speak to the screen, we are not transmitting your voice as little recorded snippets to the internet, to the cloud, to be processed and then brought back down.

So, with the Voice Recognizer Widget, everything’s self-contained. Now there is a difference here in that, by using just voice recognition and not Natural Language Processing, the Voice Recognizer Widget can only respond to the right keywords that you may say to ask for directions or…
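
Here is a minimal Python sketch of that wake-word-plus-keyword flow. The wake words and the keyword-to-content mapping come from the examples in this conversation, but the names and structure are hypothetical; the actual Voice Recognizer Widget does this inside the media player, with the transcribed text coming from the speech recognition built into the operating system rather than from hard-coded strings.

    # Hypothetical sketch of keyword spotting with wake words, all processed locally.
    WAKE_WORDS = ("hey sign", "show me")       # step 1: the wake words
    KEYWORD_TO_CONTENT = {                     # step 2: keywords mapped to content
        "weather": "weather_forecast",
        "forecast": "weather_forecast",
        "shuttle bus": "live_shuttle_map",
    }

    def handle_utterance(utterance):
        """Return the content key to display, or None if the sign should ignore the speech."""
        text = utterance.lower()
        if not any(wake in text for wake in WAKE_WORDS):
            return None                        # no wake word: keep the scheduled playlist
        for keyword, content in KEYWORD_TO_CONTENT.items():
            if keyword in text:
                return content                 # step 3: switch the screen to this content
        return None                            # wake word heard, but no known keyword

    print(handle_utterance("Hey sign, show me the shuttle bus"))   # live_shuttle_map
    print(handle_utterance("Show me the weather forecast"))        # weather_forecast
    print(handle_utterance("What's for lunch today?"))             # None (no wake word)

Because everything here is simple matching against words you define up front, nothing has to leave the media player, which is the security point Trey makes above.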

Derek DeWitt: Right, right. So you’ve really got to nail that wording when setting it up.

Trey Hicks: You really do, you really do. What’s really cool is we were able to build this technology into our offering with our media player software without adding any additional costs. You know, so you don’t have to worry about monthly fees or anything of that nature to support voice recognition.

Derek DeWitt: Ah, so it’s just, it’s part of the Signage Suite software now.

Trey Hicks: It is, it is. Which is exciting. Which means immediately, you know, a university that’s using one of our media players can set up that media player so that a student can walk up and say, “Where’s the shuttle bus?” and we’ve set up that widget, that drag-and-drop widget, to listen for the keywords, “shuttle bus”, and then immediately we’re pulling up on screen a live map of where all the shuttle buses are on campus. So the student, you know, can wait to go outside, into the heat or the cold, until that bus is about to pull up. And all of that is possible without touching the screen, just by a simple request: “Where’s the shuttle bus?”

Derek DeWitt: Hmm, so that’s interesting. So, the keyword, in this example, the keyword is “shuttle bus”. That’s specifically what it’s listening for. So I could be standing in front of it and say, “Hey man, show me the next shuttle bus.” Or I could say, “Do you know when the next shuttle bus is?” Or “Hey, uh, show me the shuttle bus schedule.” And in each of those three cases, it’s only hearing the keyword “shuttle bus”, so there really is a wide range of conversational variants that could be used and it would still work fine.

Trey Hicks: That’s right. So, it is a bit forgiving, in that you could say, “Show me a shuttle bus map”, or you could say, “Where is the shuttle bus?” As you said, Derek, it’s just listening for those keywords, “shuttle bus”, to know that you’re looking for that shuttle bus content.

Derek DeWitt: So, I mean, is there any kind of a limit to what we can put on there? I mean, it seems like anything that could be made available through a touch-interactive screen could be made available using a voice interface.

Trey Hicks: Yeah. And that’s something that’s really cool about it. In fact, you know, if you think about a PowerPoint, a PowerPoint presentation is made up of slides. And with this approach, the digital signage content can also be put together in slides.

You know, when you walk up to the digital sign, you’re seeing, essentially, the primary slide, the main slide or the default slide. You know, so you’ve got announcements, what’s going on today, the schedule for this afternoon and “Oh, by the way, this information is due by 4:00 PM today” – all that great information. When you ask to see the shuttle bus map, then what your voice is doing is requesting a different slide that is already teed up and ready to go, and we’re just switching slides on screen.

So, you could have a wide range of content, as you said, teed up and ready to go to serve that user, that person, with information that’s going to be helpful to them.
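
Continuing the hypothetical sketch from above, the slide model Trey describes could look something like this: a default slide plays until a recognized request swaps in a slide that is already built and waiting. The slide names and contents are illustrative only, not the actual product’s data model.

    # The sign shows a default slide until a voice request swaps in a prepared one.
    SLIDES = {
        "default": "Announcements, today's schedule, the 4:00 PM deadline reminder",
        "live_shuttle_map": "Live map of every shuttle bus on campus",
        "weather_forecast": "This afternoon's forecast",
    }

    class SignPresentation:
        def __init__(self):
            self.current = "default"

        def show(self, requested):
            """Switch to a requested slide if it exists; otherwise stay on the default."""
            self.current = requested if requested in SLIDES else "default"
            return SLIDES[self.current]

    sign = SignPresentation()
    print(sign.show(None))                # nobody asked: keep the default rotation
    print(sign.show("live_shuttle_map"))  # "Where's the shuttle bus?" pulls up that slide
    print(sign.show(None))                # request finished: back to the default slide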

Derek DeWitt: Is there any kind of a limit to, I don’t know, say the number of different voice commands that you could encode into it?

Trey Hicks: No, there’s not. There’s not.

Derek DeWitt: So, there’s no max. Like I could have a hundred if I had that content ready to be displayed.

Trey Hicks: Sure, yeah. You could set up a hundred different ones. I mean, there is a practical limit to how much content you would put together, you know, for one digital sign, but there’s no fixed limit.

Derek DeWitt: You know, it kind of brings to mind what I sometimes refer to as the playlist issue. You know, sometimes I’ve come across places that, you know, that have all this cool content available. They have event schedules, they have shuttle buses, they have weather, they have news. And they want to show it all. So they just kind of cram all this stuff into a single playlist. You’ve got one playlist showing 60, 70 different messages, which means it takes 10, 12, 14 minutes for a message to come back around again, as it repeats through this very long playlist.

But this seems to me like a way to sort of embed that volume of content without constantly showing it all the time. So you can have those 60 or 70 pieces of content, but it’s not all in a playlist. Instead it’s just kind of waiting for someone to ask for it. And people almost create their own temporary, tailored playlist based entirely on the requests that they make of the sign.

Trey Hicks: Yes. Yeah. It’s kind of the best of both worlds, because just by walking by a digital sign, information that the organization believes is useful to you is already being presented on screen. You know, it’s being presented often, yes, from one piece of information to the next: a rotating presentation of content.

At any moment you can, by touch or voice, go straight to other content that you want to see. So you can continue to watch the presentation of content, you know, that your company or your school believes is useful to you. And at any moment you can go and access other information that you need more at that moment.

Derek DeWitt: Yeah. And you know, one of the buzzwords you often hear in digital signage writing and so on is this term “timely”: “timely content”, “up-to-date, timely content”. Well, when a person requests the information and then less than a second later it displays on the screen, I mean, it doesn’t really get more timely than that.

Trey Hicks: Yes! Because digital signage can be very dynamic and integrated with lots of different data and information systems. When you request information, you should be seeing the very latest.

Derek DeWitt: Right. So, I mean, are there technical requirements, are there technical limitations, to using this technology? Like how do I get this up and running at my place?

Trey Hicks: Yeah, great question. So fortunately, the technical requirements for using voice with your digital signs are pretty low, and what you need to set this up is pretty straightforward.

So, we have built support for voice and voice recognition directly in our media player software, so that’s already there. You know, there are no extra fees there or anything. So once you have a media player in place with the digital sign, one thing that you need to add is a microphone.

Derek DeWitt: Does it matter what kind of a microphone? I mean, obviously we’re not going to stick up a microphone that you’d use, say, at a radio station or in a professional recording booth. We’re not going to have that giant thing hanging over the screen.

Trey Hicks: The right mic for the environment will depend on a lot of factors. How large or small is that room or environment? How noisy is it? Visix depends on its local integration partners to recommend specific microphone models for each client’s environment, and they’re glad to help in that way, just as they do with other A/V technologies, to recommend the right microphone setup for each space.

Once you have a media player and you have a mic in place (and by the way, the connection from the microphone to the media player is just a USB connection), you now have an interactive environment for your customers, or for your employees or your students.

Derek DeWitt: Hands free!

Trey Hicks: Hands free. Derek, one of the most exciting things about adding voice to digital signage is now you can turn any static screen into an interactive screen.

Derek DeWitt: So, it doesn’t already have to be a touchscreen.

Trey Hicks: Not at all, not at all. This is really the first time, or the first practical way, that we can make a static screen interactive. And it’s really exciting because, while the cost of interactive screens has come down over the years, they still cost a couple of times more than a static screen. Voice allows you to make screens that you already have on the wall and already in place immediately interactive.

Derek DeWitt: Well, that seems very cost effective.

Trey Hicks: Yeah. Yeah. So we’re very excited about the fact that now any screen can be made interactive so that it can be more useful, can be more like the smartphone.

Derek DeWitt: Now, somebody there at Visix was telling me that the Voice Recognizer Widget also works for your Touch room signs. Is that right?

Trey Hicks: Yeah, absolutely! You know, you can use your own voice to talk to your meeting room sign and to request information from it. So it can be something as easy, Derek, as you walk up to a meeting room sign and maybe you’re new to that facility or to that building, and you ask where the restrooms are, and we can pop up a map with lines drawn to show you immediately how to find the restrooms.

Derek DeWitt: The path from that sign, how to get there.

Trey Hicks: Yes. Literally just talk to the meeting room sign. You could ask for a quick phone directory. You know, maybe you’re having trouble in the meeting room with the video conferencing equipment, whatever it may be. So, you can ask for a quick phone directory, so you can call IT.

And with concern about germ transmission and that kind of thing (this year in 2020), you can add content that’s very helpful to people. Like, you can ask for the health guidelines for the room and we can show that on screen. And we can immediately show you what the new seating capacity of the meeting room is based on current health guidelines. And we could show a seating map, so that you can see that this room that used to offer seating for 12 people now only accommodates six people.

And one last thing, you can talk to the meeting room sign to indicate that the room needs to be cleaned. You know, maybe local protocol is that after each meeting, the room has to be wiped down. So just after your meeting, as you go out the door, you can speak to the meeting room sign and say, “This room needs to be cleaned.” The appropriate message will come up on screen to let people know that this room is waiting to be cleaned.

Derek DeWitt: So, yeah – therefore, don’t go in it yet!

Trey Hicks: That’s right. That’s right. We can make the screen red and, you know, ask people to not enter until it’s green again.
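
As an illustration of that cleaning workflow, a room sign could track a simple state that voice commands flip back and forth. The phrases and state names below are assumptions made for this sketch, not the actual room sign software.

    # Hypothetical state flip for a meeting room sign driven by voice commands.
    from enum import Enum

    class RoomState(Enum):
        AVAILABLE = "green screen, room may be used"
        NEEDS_CLEANING = "red screen, please do not enter until cleaned"

    state = RoomState.AVAILABLE

    def on_voice_command(utterance):
        """Update the room state when a recognized cleaning phrase is heard."""
        global state
        text = utterance.lower()
        if "needs to be cleaned" in text:
            state = RoomState.NEEDS_CLEANING
        elif "room is clean" in text:          # assumed reset phrase for this sketch
            state = RoomState.AVAILABLE
        return state

    print(on_voice_command("This room needs to be cleaned"))  # RoomState.NEEDS_CLEANING
    print(on_voice_command("The room is clean"))              # RoomState.AVAILABLE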

Voice can not only be used for digital signs, it can also be used as a means to add interactivity in other places. So, we’ve talked about the fact that we can make any screen, any static screen, interactive by adding voice commands, you know, support for voice-activated content. We can also use voice with video walls. Which is really exciting.

Derek DeWitt: Really?

Trey Hicks: Yeah. So, you think about video walls. You know, a lot of times video walls are in the nicest entrances to corporate, university, healthcare and other facilities. So you have this giant wall of displays with, a lot of times, really impressive content, but no way to interact with it. In fact, adding touch interactivity to video walls is really expensive; most video walls are made up of multiple displays, or at least multiple visual components, and adding touch to all of that costs a lot. So, the exciting thing about adding voice to video walls is that, in a very inexpensive way, you can now talk to the giant video wall and pull up all kinds of information.

Now, one thing that we should talk about, Derek, with voice-activated signage is, we do need to communicate to people how they can request information. You know, how they can talk to the digital signage. And that certainly is the case with a video wall. So, when you walk up to the video wall, we need to put little bubbles or captions around the video wall, prompting people with what they can request.

Derek DeWitt: Right? Hey, do you want this? Then say this.

Trey Hicks: Yes. Yeah. If it’s a giant video wall in a university setting, students aren’t going to know that they can ask for the shuttle map and see a live map of where every shuttle bus is on campus, you know, unless something tells them. So, you can put prompts to speak to the video wall or the display around it, printed on the wall, if you will, with nice graphics, or you can put those prompts right on the digital signs themselves.

Derek DeWitt: Right. So, it’s just part of the content that you’re displaying that comes up in the playlist.

Trey Hicks: That’s right. And you want to do the same thing with meeting room signs. With the option to use your voice with meeting room signs, we need to communicate to passersby what information is available to them just with simple voice commands.

Derek DeWitt: You know, I got to say, I mean, I was born in the late 60s and, I mean, we’re not quite at Blade Runner flying cars yet, but a lot of this stuff kind of seems to me like the future, you know? The 21st century is very much becoming this kind of science fiction environment from my youth. And this is just where we are now. And considering what’s going on, say in the year 2020, what do you see happening in the realm of interactivity next?

Trey Hicks: I think we’re going to see more integration and sharing of content between smartphones and digital signs as we move forward. A quick example, and something that we’re already doing today: you walk up to a digital sign and pull up a map or a directory because you’re looking for someone in the building. You know, you’re trying to find them. You pull that information up on screen, and you can now transfer that information from the digital sign right to your mobile device.

Derek DeWitt: Right. Yeah. And sometimes even like turn-by-turn directions or something like that.

Trey Hicks: Yeah, exactly. And I think what we’re going to see is more integration between the smartphone and the digital sign moving forward. I think we’re going to see voice user interfaces and voice-activated signage even get better. You mentioned before about voice being a natural way to interact with the sign. I think we’re only going to see that improve over time, just as we have with the digital assistants in our homes.

And I think the digital sign will continue to mimic the possibilities of our incredible smartphones more and more over time as well. We have apps for the digital sign. We have touch interactivity. We have support for voice commands and voice-activated content. In the future, we might want to support gestures with smartphones and digital signs on the wall too. Not only could you talk to the sign or touch the sign to interact with it, you could also use your hands, gestures, to pull up content.

Derek DeWitt: Yeah. We all saw that in that film “Minority Report”, which I keep bringing up on this podcast because, you know, hey, it was cool. And that was based on a real company’s technology way back then, so this is in fact being developed. And of course, gesture interfaces are language free. So you don’t have to speak a language at all. You can just gesture to it, which again kind of opens up the possibilities. I don’t even have to speak the local language in order to get what I need.

Trey Hicks: You know, I think digital signs are just going to get better and better at providing practical and useful information to the local audience as we move forward, leveraging deep integrations, high interactivity and very strong presentation capabilities to bring eyeballs to those displays and to make people want to interact with them.

Derek DeWitt: Voice is also a natural solution to a lot of the issues that the ADA guidelines were designed to address. And the more people who want to interact with that digital sign, the more valuable that digital sign becomes to the organization. I mean, you could almost call it, I don’t know, a cost savings factor or something like that. You’ve already paid out the money for the equipment. You’ve already taken the time and paid somebody to set it all up. Those are all sunk costs; that money’s gone. Obviously, voice doesn’t make the sign more valuable in a strictly monetary way, but it does increase its value. It keeps giving more and more with no additional outlay of funds.

Trey Hicks: It does. It really does. We are increasing the value of the digital sign because we are increasing the amount of content that we can provide our audience at the digital sign, and giving them more ways, more choices, for accessing that content.

Something that always stands out with digital signs in an environment, Derek, is that they’re one of the few channels left for sharing information where the content is just what the organization wants to share with its employees or its students.

If you look at other communication channels or pathways, like email or webpages, you know that kind of thing, it’s crowded with information. You know, the minute you open a browser, you can go a thousand different ways. But those digital signs on the wall, those are your billboards. Those are your dedicated pathways for communicating information that’s specific to your employees, to your students, to your patients.

Derek DeWitt: Right? Yeah. It’s a more focused communications method.

Trey Hicks: Yes. And that’s really unique today.

Derek DeWitt: You know, I wonder if some people might not find that kind of refreshing.

Trey Hicks: Yeah. Because when you look at a digital sign in an organization, you know it’s strictly only going to be content for your organization. You know, you don’t have to worry about YouTube ads popping up on your screen or anything like that.

Now that interactivity is so much easier to set up, whether it’s touch or interactivity driven by voice commands, it’s just so easy to give people the information they want on demand.

Derek DeWitt: Or maybe even information that they didn’t know they wanted, but it turned out they did. I wonder how future generations are going to react when this stuff becomes commonplace. You know, like my generation, we grew up with rotary phones and no remote controls for televisions, and younger people say, “Really, you had to stand up and walk across the room to change the channels? That seems crazy!” I wonder if there’s going to be a day when people look back at us and say, “Hey grandpa, is it true that you used to have to touch the sign in order to get what you wanted? I even heard a rumor that there was a time when you just stood in front of it and looked at what it was presenting and had no way to interact with it.”

Trey Hicks: Yeah. And it is funny that buttons are kind of going away. The Apple phone, when it first came out, had one button. Now it has no buttons (well, I guess the screen is one big button, one way to look at it). But the technology continues to evolve quickly. And I think voice is here to stay.

Voice will continue to be, probably at an increasing rate, a way to interact with our homes, with our cars, with our smartphones, and certainly with our digital signs. And as we move forward, that expectation of getting any content that you like at the moment you want it is only going to increase. We certainly want to support that by giving users options, multiple ways that they can pull up the content they need at that moment.

Derek DeWitt: Voice-activated signage, whether through VUIs (voice user interfaces) that use the cloud and Natural Language Processing, or through the somewhat simpler voice recognition software with preset wake words and keywords… who knows where that technology is going to end up in the next 10, 15 years? But voice interactivity is here to stay.

Trey Hicks: Yeah, that’s right. That’s right. You know, we’ll see what else from “Minority Report” kind of creeps into our environments. Hopefully only the good stuff.

Derek DeWitt: Yeah. No pre-crime. Pre-crime is a bad idea and an unworkable concept for a civil society.

Trey Hicks: I would agree!

Derek DeWitt: I’d like to thank you very much for talking to me today, Trey.

Trey Hicks: Sure. Thrilled to be on the podcast and very excited about voice as an option for interactivity and how it can turn any screen into an interactive screen to serve customers better.

Derek DeWitt: Okay. Well thank you, Trey. And we’d like to thank all of you for listening.