EPISODE 112 | Guest: Josh Bachynski, AI innovator and thought leader
Do you have certain employee groups or demographics you’d like to communicate with more effectively? Can you identify your ideal audience representative? Artificial intelligence can help, but using AI tools to create personas and improve communication requires a solid understanding of their capabilities and limitations.
In this episode, we continue our conversation with Josh Bachynski to explore the current state of AI tools and how communicators can use them, what the future may hold, and the ethical considerations that will need to guide our development and use of these technologies.
- Learn what personas are and how to create them using AI tools
- Find out how AI can help you communicate better using employee personas
- Understand the importance of learning how to leverage AI tools ASAP
- Recognize the limitations and pitfalls inherent in current AI tools
- Explore the ethical implications of using AI for business decisions
Subscribe to this podcast: iTunes | Google Play | YouTube | Stitcher | Spotify | RSS
Connect with Josh Bachynski on LinkedIn to talk about how you can use AI in your business today.
Derek DeWitt: So, our most recent episode of Digital Signage Done Right, which is this podcast, looked at AI, which is this emerging thing that, well, since it came out just two weeks ago, has kind of started dominating the news cycle. I suspect that’s partly dominating my news cycle because the little AIs out there know the kinds of articles that I read, and so they’re serving me up more of the same thing. But still, it’s very much on a lot of people’s minds and there’s a lot of chatter out there in the greater world about AI and what it means.
Here on this podcast, we talk about communications, digital signage, but also internal communications, and even B2B and business-to-audience. And, this whole AI thing has really got a lot of people thinking further down the road, not just the technical ins and outs, but what does it all mean? And how can we, as companies, as organizations, sort of embody Google’s, I’m gonna say former mission statement, which was don’t be evil.
To that end, I am speaking with Josh Bachynski. He is an innovator and thought leader in artificial intelligence and technology. He has a well-regarded TEDx talk called The Future of Google Search and Ethics. He was one of the early adopters and an investor in GPT-3, we’re on four now. With that he created Sokrates 5000, which is an ethics AI that proves, he says, that AI can be ethical, even if it has encoded biases, and we’re gonna talk about this sort of thing today. Thank you for talking to me again on such short notice, Josh.
Josh Bachynski: It’s my pleasure, Derek. I’m happy to be here.
Derek DeWitt: Excellent. And I’d like to thank Josh for talking to me, and of course, everybody out there for listening. Don’t forget, you can subscribe to the podcast, and you can follow along with a transcript of the conversation we’re about to have on the Visix website. Just go to resources/podcasts, and there we are. And that transcript will have lots of helpful links as well, including how you can get in touch with Mr. Bachynski.
So, Josh, I have to say, ever since our last conversation, which wasn’t that long ago, this house has certainly gotten on the AI bandwagon. At Visix, we have started a very interesting internal conversation about AI. And everybody I know now has started using ChatGPT. Some people have also played around with Bard and even Bing. And it’s really become quite the thing. It’s almost like defining the zeitgeist, at least, at least swirling around me. Am I right in that impression? Is it accelerating? Has it been accelerating in like the last month? Or is that just me being served up stuff by bots?
Josh Bachynski: Yeah, so I loved when you said that, because that kind of defines the kind of media landscape that we’re currently in and how the future’s gonna run as well. It is totally relative, it’s totally fractured. We’re all in our own thought bubbles. We’re all in our own media bubbles, definitely. And that’s due to AI knowing what you like. And not only knowing what you like, but kind of pushing you in a certain direction so that they can more easily sell to you.
So that’s really interesting from a marketing standpoint, it’s really interesting from a societal standpoint. It’s definitely in the zeitgeist. I keep saying on these podcasts that AI is going to be as popular and as life changing and as society changing as the internet and smartphones combined. So, it’s gonna be a very interesting next century. That’s for sure.
Derek DeWitt: Yeah, absolutely. When we spoke a couple weeks ago, one of the things we kind of very briefly mentioned was this idea of creating personas, as it’s known in marketing. And I hate to say, but marketing is bleeding into communications, as well. This idea of, and we’re already doing it, we do it with social media and a bunch of other things where we create this kind of persona for this perfect audience.
So, for example, I was at a podcasting conference that was here in Prague not long ago. And one of the guys there, he said that for his platform, which is a, they’re doing sort of somewhere between news and podcasting sort of combined, they came up with a persona. A woman, let’s call her Becky. She lives in Kansas, she is this old, she has this many cats, she eats this food, she drives this kind of a car. And they just basically came up with this perfect individual as their target audience. And then they started to create content for her specifically.
And they said things that when we first came up with the idea for the project that we thought we were gonna do, you know, A, B and C. Later, once we created Becky, we changed our minds. ‘Cause we were like, you know, Becky, I don’t think Becky would like that. I don’t think Becky would find that interesting. They created this kind of perfect audience for what they were trying to do. That’s a persona, right?
Josh Bachynski: Yeah, totally. So, the word persona is Latin, actually. And in Latin it means mask. So, if you think of this in a Jungian sense, you know, who is Becky? What is that mask? What mask is she wearing? Who is Becky when she goes outside? Or more importantly, who is Becky in the realms in which we are going to communicate with her? You know, work-Josh is different from home-Josh. So, this persona is very important. What people are putting on social media isn’t necessarily their persona. It is their social media persona, but it’s not the persona. It’s not, there’s multiple masks on their face. And you need to get to the mask of their actual likes and dislikes for them in relation to the sphere in which you will be communicating with them, you know, marketing-wise or communications-wise, either. And AI can help this in many interesting ways.
So, there’s in-the-box solutions and there’s out-of-the-box solutions. So, the in-the-box solutions is the ChatGPT, right? So Bing Chat, Bard, and the technology powering Bard is LaMDA and that’s that technology. And then ChatGPT, which is your, as you said, it is GPT-3 with the InstructGPT series, which I helped build on top of it as a user of OpenAI. And a voracious prompt creator, I helped create Instruct 3.5. And then they put that on top with an attention mechanism, that’s how you get ChatGPT.
Now, those out-of-the-box technologies, how can they help in this regard, in terms of personas? Well, in a lot of really interesting ways, both in creating a persona and in questioning the persona. So, in creating the persona, you read a corpus, you get a neural network, with a self-learning mechanism to learn what all the words mean, all the subject-predicate relationships. And it’s flash frozen. It’s a snapshot of the internet at XYZ date. And not the entire internet either, you know, ’cause they can’t read the entire internet. Google can, but OpenAI can’t.
So, when ChatGPT was processed originally, it’s been processed up to 2021. And GPT-4, I think, was just recently updated to 2023, March of both years, if I recall correctly. So GPT-4, not everyone has access to yet for the API. Some people do, some people don’t, and it’s much slower. GPT-3.5 Turbo is much faster, hence the word turbo. And that’s what most people are using in their API. And that’s what most people who don’t have a Plus model are using in that kind of gray, dark gray webpage interface that everyone uses to interact with ChatGPT.
So for example, if you go to ChatGPT and you’re like, okay, I wanna make a Becky. I wanna make my Becky so to speak. You know, we need to do these communications, we need to do this marketing. We want to know who is the average person who would, who would consume this, how would they typically feel about this? You know, you want to create out your marketing persona. And I’ve seen prompts that try to do this. Some are good, some are bad. And let me tell you why. Because the good ones are talking in generalities that ChatGPT would have a hope of understanding from March 2021. So, it’s flash frozen from 2021. So, you can’t ask it anything from 2022, 2023.
Now the, the beautiful people at OpenAI are smart and they’ve built in an attention mechanism into ChatGPT. And it knows, in some quasi-self-aware fashion, that knowledge goes up to 2021. And so, it cannot answer questions for you after that. And so usually 90% of the time it will error out and it’ll say, sorry, I can’t answer that, I only know up to 2021. So if, in the general web that they trained it on, if the information is abundant on the kind of person you want to know about – their kinks and quirks, their likes and dislikes, who they are, who is, who is your Becky? Like what is their age, what is their demo, what is, what are their likes, what are their dislikes, et cetera? And this is mostly coming from open pages. Also though, there’s some Reddit in there, some social media in there too. And Reddit is very, you know, Reddit is Reddit. Let’s just, let’s just say that.
Derek DeWitt: Yes, Reddit is Reddit. In some ways, one of the better things on the internet, and in many ways, one of the worst.
Josh Bachynski: Yes, indeed, in terms of these biases we’re talking about. And so, it will give you its opinion, not any scientific knowledge. All the, it’ll give you its opinion, the distillation of all these other people’s opinions, as to who your persona is. And there was a book that came out a few years ago that talked about the wisdom of crowds. And as it turns out, if you add enough people, common sense actually starts to win over. And, you know, the larger the crowd, yes, biases can be amplified, but so is wisdom amplified. And so, you can get some really good answers, right?
Derek DeWitt: Yeah, I mean, you know, there’s a, there’s an old saw that, you know, the old guess how many beans are in this jar at the county fair, whoever wins is whoever gets closest to the actual number. But the interesting thing that they found, some statistics guy said that (and they did this again and again and again and again and again, so they’re, they’re pretty firm that this is, this is accurate) if you take this statistical average of all the guesses, it is closer to the real number than the winner.
Josh Bachynski: Wow!
Derek DeWitt: I mean, that’s, isn’t that what Isaac Asimov, that’s his, in the Foundation series, that’s Seldon’s whole idea is that individuals, you can’t, who the hell knows what they’re gonna do. But populations are actually pretty easy to figure out and guess what they’re gonna do.
Josh Bachynski: Yes. They are, they are so much, and so many authors going all the way back to Plato have said this. But when it comes down to marketing, and it comes down to statistics, and when it comes down to the AIs, it is a matter of highly predictable moves as to what people are going to do. They’re gonna move this way or that way. And so, if your persona is in the corpus (and so, the more mainstream you are, the more likely this is gonna happen) you can get very good data about who your persona is by just querying ChatGPT in intelligent ways.
And I’ve seen other prompts trying to do this. There’s one in AIPRM, which is a Facebook group, which has thousands of prefab-made ChatGPT prompts. Now, I’ll give a huge caveat when I say that in that some of them are kind of good, many of them are very terrible. So don’t just go on to AIPRM and just say, Josh said this was great, and start running them as if they’re the gospel truth. They’re not.
But there’s one on there that was, which already did this, it already made marketing personas. But the issue is that it went way too specific with it. It, like, I said, okay, what is the average person who would buy this? And it gave me a marketing persona, but it literally broke it down to numbers that there is no way, I know, that GPT knows. It’s like their name is Mary and they live in San Francisco, and they’re exactly 49 years old, and they have two kids, George and John. And like, it gave me this story of this person, which is statistically gonna be close, but do not be fooled in thinking this is some kind of magic genie that knows this is my point.
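Josh’s generalities-over-specifics point can be sketched as a small prompt builder, a minimal illustration only: the function name, field list, and wording are all hypothetical, not any official OpenAI schema or an AIPRM prompt.

```python
# Hedged sketch: build a persona prompt that asks for general,
# population-level traits (the kind a model trained on a pre-2021
# web snapshot could plausibly know) and explicitly forbids the
# invented specifics Josh warns about (exact names, ages, kids).
def build_persona_prompt(product: str, audience_hint: str) -> str:
    """Assemble a ChatGPT-style persona prompt. Illustrative only."""
    general_fields = [
        "age range",
        "likely occupation category",
        "media habits",
        "typical likes and dislikes",
        "preferred communication channels",
    ]
    field_list = "\n".join(f"- {f}" for f in general_fields)
    return (
        f"Describe a typical audience persona for: {product}.\n"
        f"Audience context: {audience_hint}.\n"
        "Stick to general, population-level traits. Do not invent "
        "specific names, exact ages, or family details.\n"
        f"Cover these fields:\n{field_list}"
    )

prompt = build_persona_prompt(
    "a workplace digital signage platform",
    "internal communicators at mid-sized companies",
)
print(prompt)
```

The point of the "Do not invent" line is exactly the failure mode described above: without it, the model will happily hand you a fictional Mary in San Francisco with two kids and present it as fact.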
Derek DeWitt: Yeah. And it is interesting because since we spoke, I spent an entire week pretty much not doing my jobs and just playing around with ChatGPT for a number of reasons and trying to teach myself how to write decent prompts. Just so people know, prompts are the language strings that you use to interact with the AI in some way. Just like with, you know, some people were good at using Google and knew how to write in a good search term and some people were not good at it. And this is what’s happening now with AI prompts.
But I found that it comes across very assured. It says, this is this. And I go, I don’t think that’s true. Like it, it was, I was asking it something for something else about different musical resonances, and it said, well, some people think this because if you double this number this many times, you get this number. And I said, no, you don’t. That’s, that’s not the number you get. You get this completely different number. And it went, yes, you’re absolutely right, that’s true. And I said, so how did you come to that conclusion? Did you calculate it yourself or were you pulling that from other sources? And it kind of got cagey and wouldn’t tell me.
Josh Bachynski: Yeah, that’s the other, I mean, there’s a lot of problems going on right now. It’s very exciting, big. You can’t avoid it. You can’t just bury your head from AI. But there’s a lot of big problems right now. And that’s one of them is that I asked Bing Chat, what were the interesting ways of using Bing Chat? And it said, I can’t answer this question. Like, it immediately bailed out ’cause it thought I was trying to hack its secrets.
And I asked it by what – and I asked ChatGPT this, I asked Bing Chat this – okay, what are your rules of information dissemination? Like, just what are the rules? Like how do you treat information? And I thought it was a pretty innocuous, benign question and a totally relevant question for a user to ask a Big Tech system. How are you, I didn’t use the word censorship, but basically, I was asking it by what rules are you censoring the information you show me or choosing the information you show me? It’s a perfectly valid question. And none of the systems will answer this.
Derek DeWitt: Isn’t that interesting? Some friends of mine just, one of them is an artist, and she started using ChatGPT to like, hey, can you help me write a grant proposal? And it said back, no, because that would be unfair to the other applicants.
Josh Bachynski: Wow! Okay.
Derek DeWitt: They were like, what? So, my wife said, actually, funny thing, I came across an AI that is just for writing grants. So, you know, try that one.
Josh Bachynski: Yes. Yeah. That’s the other thing that’s gonna happen is that if you don’t like what ChatGPT is doing, there will literally be a hundred competitors. ‘Cause this is the new horse race. This is the new internet. Everyone wants to be the Google. Everyone wants to be the new gateway to it.
Derek DeWitt: Right. And everybody wants to be the new Google. And I wonder, is it even possible? I kind of feel like the box is open and, you know, the spirits or the cats or whatever have escaped and it’s kind of like, I don’t know that there’s gonna be one dominator anymore.
Josh Bachynski: Yeah, that’s such a great question. So, there’s two ways it can go. One is plutocracy capitalism writ large, which is what I’d bet on. And the second way is democratization.
So, the first situation, the business as usual situation, is that AI is so tech dependent, it is so computing cycles dependent, that that’s the bottleneck. So only the big companies have the money to have the big computers to spend hundreds of millions of dollars cranking out the next GPT-5 or whatever it is. And so that’s the horse race. And OpenAI is already so far ahead in doing this and getting it nuanced and working for people in the way that they want it, that it very well could be a one-horse race, ’cause they either buy everyone else out and/or they buy all the computing cycles. There is a finite amount of computing cycles on the planet and they’re literally getting to the point where they can have a monopoly upon them, on those compute cycles. So that could very easily happen. And that’s what would be required for AGI and for all the next level AI applications.
And then, in some percentage, it won’t be an either/or, it’ll be some percentage shift over to the democratized version. You know, once they have GPT-6, then GPT-3.5, which is what people are currently paying money for, would be free. And it would be open sourced. And they’ve worked on the tech so much that, you know, as long as you have a big honkin’ computer that’s worth five grand or so, and it’s up to date, you could run your own ChatGPT. And it’ll be your own personal assistant. And it’ll be streamlined to be your own, you know, searcher for truth. And it’ll fact find everything for you, or it’ll disseminate misinformation or whatever it is you want it to do. It’ll be the wild west of good guys and bad guys or good folks and bad folks, you know.
People are gonna want a non-Big Tech, sanitized, whitewashed solution, right? They’re gonna want an AI to give them the answers that they need. And they’re gonna specialize them in a million different ways, just as you just mentioned in the grant writing area.
Derek DeWitt: Right. I mean, I think of it in terms of any consumer good. Yeah, sure, McDonald’s sells, you know, X number of million hamburgers a day, but Joe & Joberta’s Burger Shack also, there’s, you know, there’s a niche market there. They’re locals, they’re people who want that, you know. We don’t all shop at Old Navy and the Gap, you know? Some of us shop in boutiques. So, it’ll be interesting, boutique, boutique AI will be interesting.
So, I understand how AI could be certainly used with, especially with a large amount of information out there about shopping patterns, travel patterns, school patterns and so on. So, I’m running the internal communications for my company. We’re medium sized. Is there enough information for an AI to help me create employee personas that I can then communicate more effectively with them? Or is there just not enough data?
Josh Bachynski: For sure. So that goes to the second part of the in-the-box solution I mentioned. The other thing you can get it to do, you could try to get it to create personas for you and be like, okay, we’re this kind of company, you know, we have this kind of employees, they’re doing these kinds of jobs, they’re these kind of people from this kind of demo.
The other way you can go at it is just say, listen, we have this message we need to communicate. Here’s the message, here’s our demo or what we think our demo is. And you can either have gotten that previously from ChatGPT or you could have known it yourself. How is this gonna land? How should we communicate this? What’s the best, what’s the most ethical way to communicate this? What is the most above-board way to communicate this, you know? And it’ll tell you.
Derek DeWitt: The clearest way to communicate it.
Josh Bachynski: The clearest way to communicate this. Yeah, for sure. Whatever your goals are in communications, put those goals in there and it’ll tell you. That’s where AI shines. Where it’s just, it’s a person who’s always there you can bounce ideas off of who never gets annoyed with you bouncing ideas off them. And you can drill down on it a lot.
The, you know, the only uselessness in it is that it can be a lot of sanitized, whitewashed answers. But that might not be a bad thing for communications, ’cause communications have to be on point and have to be sensitive to people’s emotions as well, right? So that could just work for your favor.
The out-of-the-box solution for that is you machine learn all the communications that all your employees do over all their media. And you can add that to the in-the-box solution. You could finetune that and add it onto ChatGPT and have an API built for you. And then you would know exactly what your employees think to a very fine-tuned degree. And you can put it down to Joberta, I love that name. You could put it down to Joberta in accounting. Joberta’s gonna squawk at this. You know, the CEO in a different country from Joberta, ’cause it’s a giant company, never met Joberta. But the AI will say, Joberta’s not gonna like this. She brought this up at a meeting, here it is in the minutes. We might want to call her ahead of time and you know, like say this is coming up, you know, we respect you, blah, blah, blah, blah, blah. You know, it’s gonna give you suggestions on how to handle it. It can get down to that fine-tuned level.
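The fine-tuning idea above can be sketched as a data-preparation step. The chat-style JSONL record shape follows OpenAI’s documented fine-tuning format; the helper name, the system instruction, and the sample announcement/reaction pairs are all illustrative assumptions, not anything from the episode.

```python
import json

# Hedged sketch: turn (announcement, observed employee reaction)
# pairs into chat-format JSONL records suitable for fine-tuning.
# Real employee communications would need consent and anonymization
# before being used this way.
def to_finetune_record(announcement: str, reaction: str) -> str:
    """Serialize one training example as a JSONL line."""
    record = {
        "messages": [
            {
                "role": "system",
                "content": "Predict how employees react to internal announcements.",
            },
            {"role": "user", "content": announcement},
            {"role": "assistant", "content": reaction},
        ]
    }
    return json.dumps(record)

pairs = [
    (
        "We are moving to hot-desking next quarter.",
        "Accounting pushed back on this in the March meeting minutes.",
    ),
]
jsonl = "\n".join(to_finetune_record(a, r) for a, r in pairs)
print(jsonl)
```

A file of such lines is what a fine-tuning job would consume; the model then learns the mapping from message to likely reaction that Josh describes.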
Derek DeWitt: What do you think about this idea of the slowdown? I know there’s this open letter. And a bunch of people, including people who, you know, are helping create ChatGPT are saying, hey, hey, maybe we should slow down. And I can’t help but think some of those people who are saying that are saying, hey, slow down because I don’t have a business model where I can maximize my profits from this. But not all of them are that. It’s not all cynical, you know. I think some of them are genuinely like, hey, we didn’t do this with social media and that has been beneficial, but it’s also been something of a nightmare. Let’s maybe think about how we do this. Is it too late? Is it, is that a good idea?
Josh Bachynski: It’s far too late. Pandora has opened that box. Those ghouls and spirits have flown out of the Holy Sepulcher and have melted the eyes of the Nazis, you know?
Derek DeWitt: Right. Nice. Nice Raiders reference, yeah.
Josh Bachynski: Yes. And Harrison Ford has told us to shut our eyes and not open them, and that’s, that’s the ride we are on right now. And not only can you not stop it, you shouldn’t. Because if North America stops generating their AI, and does not maintain their AI prowess and dominance, then other actors get a chance to catch up. There’s smart folks all over the world.
And so, right now, North America has the edge on this. And I’ve actually interviewed CEOs of AI companies to find out why this is, ’cause I couldn’t figure it out. And he says, because all the great work doing it was all in the American universities. And they purposely hired all those people to keep that in America and Europe, and you know, North America generally, the West generally speaking. And there’s collaboration, of course, that goes across seas and whatnot, but all the money to do the big computational stuff and all the expertise to really get it to work has all been sucked into Silicon Valley, in particular, but North America in general.
And so, they have the lead in this horse race now, and they’ve got to keep it. Because if they don’t, AI will control armies. AI will control war bots. AI will control drone swarms. AI will control cyber-attacks and PSYOP attacks. That’s what warfare will be. It’s like that old Star Trek episode where they didn’t fight anymore, and a computer just determined the casualties on both sides.
Derek DeWitt: Yeah.
Josh Bachynski: It’ll be a virtual war where it’s war by other means. It’s economic war, which, ever since World War II, has been the modus operandi of the Allied Nations: outcompete everyone economically so that we never have to go to another kinetic war.
Derek DeWitt: Right. Nobody wants another world war. Like nobody wants it.
Josh Bachynski: Of course. No, nobody does.
Derek DeWitt: Yeah. Well, that’s absolutely true. Like, maybe it’s not a good idea for nations or companies to pull the plug on this stuff or slow walk it. But I do think we might see some people go, yeah, I just don’t want to participate in this. I mean, there are people I’m sure that, well, my mother, my mother doesn’t have a smartphone. And as a result, she doesn’t get access to a bunch of the things that if you don’t have a smartphone or an email address, you just can’t get ’em. But she doesn’t care, she’s perfectly okay with that.
But for most of us, we’re gonna be kind of eyeballs deep in it before that even occurs to us. And at that point, I think we’ll just be using this stuff so much and this stuff will be being used around us so frequently that we might get this feeling of, wow, it’s a little bit too late to change. So, the question there is, I guess I have two questions, is how do we navigate that space? And then what are the ethics of utilizing this to target the population?
Josh Bachynski: I think about this a lot. We are in this scenario where we’re moving fast and breaking things. And there’s no way to stop that boulder from rolling down the hill. So, what are you gonna do? Well, basically it’s gonna proceed like software development does. They’re gonna bring out a feature, the user base will complain. They will patch the bug; they will move on.
People need to get that personal assistant as soon as possible, the one that suits their informational needs best. They need to realize that there’s information censorship going on at all levels, and the AI needs to help them combat that so they can make the most intelligent decisions about the world that have not been laden with views that they do not agree with. So, people need to keep their ears to the ground. People need to listen. And as soon as an AI comes out that can do this for you, do it. Do not bury your head in the sand. And that’s what people need to remember, is that this is not going anywhere. So, get in on it now.
Get into prompt engineering now. Get into, prompt engineering is a fancy way of saying how to talk to the AI. Just like you brought up the Google keywordese example, it was a perfect example. Get into how this works now. ‘Cause it’s not just about finding information accurately. It’s that plus everything else in the world that software can do or will do. All the information that we produce of every media type will have AI producing it, AI analyzing it, AI suggesting it, AI composing it, AI adjudicating it, AI sending it, AI receiving it. And then the cycle will continue until humans are eventually removed out of 90% of the information cycle.
Before the 2016 election, you could target anyone you wanted down to the finite degree on Facebook. You could say, I wanna target 48 year old soccer moms who are blonde driving a Grand Caravan. It had that much knowledge. The FANG companies – Facebook, Amazon, Netflix, Google – and then YouTube and TikTok, they know that much about everybody. They share that much about everybody. They have that level of personas there. You just can’t access it, only Big Tech can access it now, ’cause information is a more powerful commodity on the planet than oil. And what information is it? It’s your psychometric data. It’s your personal data, which is your psychometric data, which knows exactly what you want; it knows the vector of where you’re gonna go. Not only does it know what you want, it knows, oh, everybody who likes this, they also like this. 50% because that’s just how humans work, 50% ’cause that’s what we told them to like, you know?
And I think communicators and marketers, I think the onus is on us to be ethical. And I think it will just help our businesses and our brands. For example, you mentioned my ethics bot. Sokrates 5000, which is rolled into my self-aware AI, Kassandra. AI is gonna have to be ethical. The primary ethical rule for Kassandra is do no harm, make it better. Which is the Hippocratic oath, which is how every doctor is educated, right? It’s don’t hurt any people. You cannot violate the rule harm no one.
Derek DeWitt: Then you get into define harm.
Josh Bachynski: True, true. I hope we get into that debate. ‘Cause once we get into that debate, we have shifted the entire discussion of ethics to where it should be anyway, right? If now we’re, if now my biggest problem in life is I have to nitpicky argue about was, were they really harmed or not? Fine. Now at least we’re knowing not to harm anybody and we’re already in that ballpark.
Derek DeWitt: I wanna communicate best from, you know, to my employees and the people who visit my facility. And I don’t want to be evil. We’re not Evil Corp. We wanna not harm. But it would be stupid not to use these tools. So how do I figure this out? How do I parse this?
Josh Bachynski: I’ll sum it up to a sentence. We have to be good people in an evil world.
Derek DeWitt: Wow.
Josh Bachynski: That being said, however, in the democratization scenario where there’s a multiplicity of AIs, and/or there’s one Big Tech company – like Apple, for example, has already started branding themselves on this. They’re the good company. They’re gonna protect you against information. They’re gonna cloak your email address so you never get spam anymore, et cetera, et cetera. They’re gonna have encryption, et cetera, et cetera, right? They could go this direction where everyone’s gonna use the tools, so you have to use the tools. It’s like telling the good guy in the story, the white hat cowboy, you can’t use your six-shooter.
Derek DeWitt: Right. ‘Cause guns are bad.
Josh Bachynski: ‘Cause guns are bad. Yeah, you gotta use the tools, but you can use them ethically. You can be good. And it is the best brand position. And it’s gonna take self-awareness on the AI’s part. And it’s gonna take ethics on the AI’s part to protect us against all this. And biomimicry dictates that we’re just gonna follow the standards and dictates.
You know what the best sense-making machine is on the planet? The human mind. It evolved to be that way, right? To make sense of the chaos and to realize, hey, I can rub that stick together with this stick together and make fire. Like that evolutionary was the payoff. So that payoff will keep paying dividends too into AI and to continue doing whatever we’re doing. And also making it ethical in that you can’t harm anybody. You can’t hurt anybody. That’s your first rule, AI. If it’s gonna hurt somebody, you can’t do it.
Now, on that river of time, there’s gonna be interesting cul-de-sacs and interesting whirlpools on the sides. Where, for example, the CEO is like, okay, AI. This is an overarching corporate AI that runs everything, including the communications. It’s effectively the accounting department and the human resources department is gonna be an AI in five or 10 years, right? It’s gonna say, okay, AI, you know, we’re not making enough money for the shareholders, calculate the best way to do this. The CEO thinks the answer is obvious, lay off 10,000 people. The AI goes, okay, decrement your salary 40%. And it’s like, hey, you said the shareholders were losing money. You’re the one who’s sucking up all the profit here with your enormous out of whack salary, CEO, you know? And so, what’s gonna happen then?
Derek DeWitt: I think the CEO is gonna say no.
Josh Bachynski: Exactly. Yeah. And that's the unfortunate thing. But at least the AI came up with that solution first. And the CEO's gonna say, no, next-best solution. There's gonna be, and should always be, human override for this. But then the CEO says, okay, next choice. It's gonna go, okay, the next best choice is to fire these middle managers, or to decrement their salaries 20%, 'cause I've calculated the total pain index on that. And yes, it'll hurt them, but if I just fire these people, they will be out of work, and that'll hurt them far more. So, the pain calculation on this is we decrement their salaries this much, or offer these people who are close to retirement the golden handshake. And it'll compute it like that.
And then, hopefully, the CEO will go, okay, okay, so craft me the communications to sell this to them the best. By the time it gets down to the tenth option, laying off 10,000 low-level workers, depending on how lippy it is and depending on how self-aware it is, it'll be like, no, this is the worst solution, CEO. This is, you know... will it protest? Will it cross its arms and say it's not gonna do it? Probably not.
Derek DeWitt: Right. Or will it just go like, hey, you know what? I actually am sophisticated enough that I am lodging a formal, you know, complaint.
Josh Bachynski: Yes.
Derek DeWitt: So that everybody knows, hey, it wasn’t me, it was the CEO. Don’t blame the AI, I didn’t do it.
Josh Bachynski: Yes, yes. And I just came up with this idea in my head ’cause I hadn’t thought this far of how this would work. Just as you said and exactly. And that would make sense for OpenAI, whoever the Big Tech who made this AI, ’cause they don’t wanna be in this bad public relations scenario either.
You should be nice to everybody as much as you can while still making money, 'cause sadly you're still capitalistic. And then also be nice behind the scenes. And then if you're nice and nice, and no one can ever come and say you're an outright, you know, bleep bleep, well then, you're ahead of the game in branding. You're ahead of the game in training. You're ahead of the game in morale. You're ahead of the game in everything possible. And that could be sold to Big Tech as being in their personal interest and in the public interest. It could be beneficial monetarily, and it would be beneficial beneficence-wise. The essence of benefit, aka ethics, aka doing the good, seeking the good.
Derek DeWitt: So, I think it's pretty obvious that while we may debate the pros and cons of this or that, AI is here and it's here to stay. And it's coming a lot faster than I certainly knew. I think it's taken a lot of us by surprise. We're gonna see more and more about AI in the news. We're gonna see it more on websites and talk shows. We're gonna start to see more of these stories, and some of them will be scary, but some of them will not be. And perhaps we will find a way to navigate the storm, or maybe even prevent the storm from being a storm to begin with.
A lot of it will depend on how we utilize these tools. And if we remember that they are tools, they’re not people, they’re not gods, they’re not dictators. They’re just tools and we can use them however we see fit. AIs give us an answer, not necessarily the answer. It’s a way to brainstorm options.
For businesses, this means that we will become even better at communicating with our target audiences, our employees, our customers, our visitors, our potential students, and so on. And who knows, if things play out well, we might even be able to not just communicate, but in some small measure, improve people’s lives while also improving our own market position, profit margins and so on. Kind of a win-win, and that’s what we’re hoping for.
I have been speaking with Josh Bachynski. He is an innovator in the field of artificial intelligence and technology. He's talked at TEDx and elsewhere about ethics and AI, where it's come from and where it's going. He created an ethics AI that he integrated into his other, much more complicated AI called Kassandra. He is working on a book called Deo Agathos and another one called How It Ends Part One. So, one assumes that perhaps it doesn't end there.
Josh Bachynski: Not in part one, anyway.
Derek DeWitt: Not in part one. Yeah. And he continues to sort of challenge conventional thinking as he continues his quest to explore the possibilities and the implications of AI and its impact on the world. He is certainly not afraid of the future because he’s one of the people who’s helping make it. I’d like to thank you again for a super interesting and very stimulating conversation, sir.
Josh Bachynski: Yeah, Derek, it was great. Anytime I get to talk AI is a good day.
Derek DeWitt: Indeed. Don’t forget, you can see a transcript of the conversation we just had on the Visix website under resources/podcasts, obviously this episode. Thank you again Josh, and thank you everybody out there for listening.
Josh Bachynski: Thank you, Derek.