Lead With That: What Artificial Intelligence Can Teach Us About Human Communication & Development

Is artificial intelligence coming to replace our jobs? What place could AI have in the work we do? What are the human aspects of leadership? These are just a few of the questions that come up as breakthroughs in AI are making headlines.

In this episode of Lead With That, Allison and Ren dive into the exciting world of artificial intelligence and its impact on leadership, communication, and human development. They discuss the groundbreaking work being done in the field, including OpenAI’s famous language model, ChatGPT, and Google’s new AI-powered chatbot, Bard, and lead with that.

Listen to the Podcast

Join CCL’s Ren Washington and Allison Barr as they discuss the exciting world of artificial intelligence and its impact on leadership and human development.

Interview Transcript

INTRO: 

Welcome back to CCL’s podcast, Lead With That, where we talk current events in pop culture to look at where leadership is happening and what’s happening with leadership. 

Ren:

This week we’re diving into the exciting world of artificial intelligence and its impact on leadership and human development. We’ll be discussing the groundbreaking work being done by OpenAI, including their famous language model, ChatGPT, and the new AI-powered language model, Bard. (Recently, OpenAI made headlines with the release of its latest AI-powered language model, Bard, which has been receiving widespread attention for its ability to generate highly creative and engaging text.)

This week, we’re exploring the impact of AI-powered language models like Bard and ChatGPT on leadership and human development. We’ll be discussing the potential that these models hold for transforming the way we think about language and communication, and how they may shape the future of leadership. So, join us for a thought-provoking conversation on the cutting edge of AI and its impact on the world of leadership.

Welcome back, everyone. I’m Ren Washington, and as usual, I’m joined with Allison Barr.

Allison, what’d you think about that introduction?

Allison:

It was very human-centered, Ren.

Ren:

It was human-centered, you say?

Allison:

Yeah. Let me be more honest. What did I think about your intro? It was very Ren, it was very you.

Ren:

Was it?

Allison:

Yes.

Ren:

Okay. See, this makes me a little nervous. I have something to admit to you and all of our listeners. I didn’t write that intro.

Allison:

Shocking. Who wrote that for you, Ren?

Ren:

Who wrote that for me? So, wait, hold on. Since you know my bit and you know me, did you really…? I mean, was it that unremarkable? It sounded like Ren!?

So, everyone out there: ChatGPT — this AI chatbot that we’re going to be talking about today — wrote that introduction, with some prompts, and I’ll talk to you about it. But Allison, did you hear it, and did you think this is a CCL trick, or did you think, ah, it’s like, as unremarkable as any intro?

Allison:

Well, I didn’t really think either, because to me an intro is an intro, and generally, while you don’t say the same thing for every single intro, it has the same general flavor.

Ren:

Yeah. So, I debated whether to “flavorize” this AI chat with “Ren,” and it was interesting. As we dive into this, maybe we’ll do a little bit of a primer, and then I can talk a bit more about how I crafted the introduction. For those of you who haven’t heard yet, you’re likely hearing all sorts of pings around these AI chatbots, most notably ChatGPT, but maybe most popularly recently, Bard. Now ChatGPT is a chatbot. Like, you go onto a service and this thing pops up, “Hey, do you need help today?” At its very simplest, that’s the premise. But ChatGPT was created by OpenAI, this group of designers, and actually, one of our favorite people, Elon Musk, was an early backer and board member of the company.

In 2019, Microsoft invested a billion dollars, or so the plan was: a billion dollars over a series of years to develop this AI chatbot. Most recently, Google countered with what they call Bard, their own chatbot, now in beta testing. There are some differences, and you probably heard, Allison, that Bard took a hit, because in its debut demo, before they could take it down, it gave a little bit of incorrect information, crediting the wrong telescope with seeing the wrong thing at the wrong time. So, that’s the background there.

But it gets me started thinking, okay, there’s a billion dollars being invested in this technology. I’ve interacted with these chatbots. They’re a little bit more than just a very simple customer-service thing. I mean, these are interesting algorithmic expressions of human language and creativity. Have you played around with any of them?

Allison:

I have, yes. So, after—

Ren:

Well, I guess you can only play around with ChatGPT because Bard is beta testing for very specific people. So, sorry. You were saying though?

Allison:

That’s okay. Right. So, Bard is not widely available just yet, but ChatGPT is, and I went in with a colleague of mine and we were asking it various leadership questions. We were asking it various design questions.

So, for our listeners, Ren and I both design custom leadership programs, and so I went in and I said, “Design me a women’s leadership program that’s 3 days long,” just to see what it came up with. So, that was interesting. That was interesting. What about you?

Ren:

I have done almost the exact same things. Most recently, I tried to create a podcast introduction, as we know, and I know we’re going to really double-click on that kind of thing around creative design or the impact of that. Maybe this is a good connection, because as I talk more and more around these things, I think these tools currently might only be good as the person using them. So, you craft, you said, Build me a women’s program, and you said it was interesting. What was interesting about it?

Allison:

Well, there were a couple of things that stood out to me. Primarily, if I went into Google and searched women’s leadership programs, the same topics would likely come up, maybe not in the same order. So, in that respect, it wasn’t necessarily anything unique. However, what it did was take it a step further: from a design perspective, it put concepts in a logical order and loosely crafted some sort of timing around it as well. So, it took it a step further, which is still not enough to be a design.

We know there’s a lot more behind that, in terms of leadership development programming. But I will say, as I understand it, Ren, it’s an endless library. It can write legal documents, it can write school lessons, it can write any kind of programming. It takes seconds to do it. That was another standout: it took a snap of the fingers for it to craft this leadership development program. It’s passed final exams for MBA programs and U.S. medical licensing exams. So, you didn’t ask me any of that, but what I will say is that it’s pretty remarkable and efficient.

Ren:

Yes. I share the sentiment around its efficiency. I’ve told someone, because I’ve done what you did, build me a program, and sure, it can give me an outline with some language and concepts I’ve seen before. But I said, if there were a championship for writing an outline for a 2-day program, just the functionality of it, that thing’s going to beat me every day of the week, because I can’t type that fast, nor would I even get through the early cascade of, okay, well, let me think about this. My human excitement over the opportunity would probably get in the way. I would fall into the quote often attributed to Mark Twain: “I was going to write you a short letter, but I ran out of time, so I wrote you a long one.” So, I think it raises an interesting question. Did you ask any follow-on questions when you were doing the design queries? Build me a program, and then say, don’t do this, or do that?

Allison:

Yes. Funny enough, or not funny, this was pretty early on. In the first 5 days after launch, it was swarmed with over a million people, and so it timed out, which I thought was interesting. We attempted to ask a lot of follow-up questions and were unsuccessful. I’m not sure what happened. So, maybe it was getting overloaded.

Ren:

Well, that’s definitely the case. I mean, traffic can be wildly heavy on there. Sometimes you can’t access the bot. I think they’re asking users to build an account now to get access. But then that starts to feed into some of the “self-learning,” which I’m doing air quotes around, but we’ll talk about that a little bit more. The reason I asked about the follow-on prompts is that in order to put together that intro you heard, I asked 4 questions.

My first question was, write a podcast intro for a show about leadership and leadership development, with this week’s topic being about OpenAI, ChatGPT, Bard, and artificial intelligence’s role in leadership development and human development. That’s what got us the first 2 sentences, “We’re diving into AI and we’ll be discussing groundbreaking technology by OpenAI, including their famous language model.”

I didn’t say “famous language model.” The computer decided to call itself famous, by the way, which is funny. Then I asked a series of follow-on questions, 3 more questions. Add an intro about topical news stories. It spit out another version. I said, okay, well, let’s focus the conversation on ChatGPT and Bard, because it gave me a general AI frame before. Then after that question I said, okay, well, then add in a recent news story, because it didn’t give me that. After those 3 extra prompts, we finished with “OpenAI making headlines, and AI-powered language models.” But I found something that happened, and if you’ve been yelling at us for the past 10 minutes on your listening device, you may have heard me say, “Recently OpenAI made headlines with the release of its latest AI-powered language model, Bard.”

But OpenAI doesn’t have Bard; that’s Google’s tool, built on its LaMDA model! OpenAI powers ChatGPT. So, in a series of 4 questions, and this is ticky-tacky, but the AI model got confused about itself! It referred to Bard as a tool that OpenAI created. I’ve got the screenshots if you @me. Don’t @me, @Allison!

But the point I bring that up for is, when exploring the impact of these things, I think the tool might only be as good as its handler. Because you said it could pass a Wharton School exam, and it got a B-minus or a B-plus on that professor’s test. But I’m unsurprised, because that’s probably more of a comment on the professor’s ability to write questions.

So, what do you think about AI and it being smarter than people? Are you concerned?

Allison:

Well, it depends on how you define smart, because there’s IQ, and there are all types of different kinds of intelligence. In terms of IQ, I’m not sure if it’s smarter than humans; it probably can be. But in terms of EQ, I’m not so convinced. I think there are a lot of great benefits to having this type of technology, including task minimization, for one.

For reference, for those listeners out there, when Ren and I do programs, when we design programs, we have to write what we would call a staff guide. It’s a run-of-show essentially, and it is the bane of my existence.

Would I like AI to do that for me? Yes, I would. Yes, I would. I would happily allow AI to do that for me. I think there are some things it can do for leaders that will accommodate more time for things like vision, strategy work, culture work. I think it can make the workplace more efficient in a lot of ways.

Of course, there’s some red flags that have very quickly been raised. I don’t know if we want to go there yet, but there’s some red flags as well, for sure.

Ren:

Like what? Yeah, hit us with some red flags.

Allison:

Well, Amazon raised a big alarm recently when a response from the AI closely reflected confidential company information. What they found was that employees’ inputs may be used as training data for further iterations of the technology. Given that, it may inadvertently share data with competitors and others, so it does raise the question: how can organizations keep their business and their data safe? There’s also concern that, in the wrong hands, AI could reverse-engineer a company’s product for its competitors. So, I don’t think the limits are fully understood, which could be dangerous.

I think legal issues are sure to ensue. There’s not really any oversight, and there aren’t any safety measures or regulations right now. However, Pandora’s box has been opened, so I’m not sure how this is going to go.

Ren:

That’s so interesting, because I’m a little less concerned about some of those red flags. I do think there’s probably an interesting challenge with publicly accessible data. You said that ChatGPT can get you all the answers you could hope for, and generally that’s true, though ChatGPT is currently bound by data through 2021 and doesn’t have all the access that Bard does. Bard has access to the whole internet, and because it’s Google’s tool, it has a lot more data it can get to. That massive data collection gives it an advantage over some of these other platforms. But if you think about what’s publicly accessible, Amazon’s saying, “Hey, that reflects some confidential information.” I’m like, yeah, it’s likely because that confidential language is somewhere out there. I mean, when I built the outline for a program, the introduction was interesting.

It even built me an introduction, like the first 30 minutes: welcome the participants, and this is where participants get used to the goals and objectives. I said, yeah, almost anyone could do that. Now, granted, it typed it faster than I could, but it likely sourced some public information or something it combed through in the algorithm. But I’m a little less concerned about it reverse-engineering products, because it would require a very adept and effective person. You can’t type into ChatGPT, Tell me what Coca-Cola’s secret ingredients are. You could maybe ask a series of questions that could identify some of the chemical components that happen to be in it. But I guess I keep coming back to, I’m not super concerned yet, because it requires an effective operator. Let’s take you, for instance. If you used ChatGPT or the like to build an ISG [the staff guide Allison mentioned], doesn’t that make you obsolete?

Allison:

No.

Ren:

Yeah. Why not?

Allison:

Why would it?

Ren:

I don’t know.

Allison:

Well, tell me … Tell me what prompted that question?

Ren:

Well, no. We do the same job. I sure hope it wouldn’t make us obsolete. But just for argument’s sake, I’m curious. What would you say to someone if they’re listening and they don’t know ChatGPT, they don’t know what you do, and they’re like, “Oh. Allison just told me that a computer can do the thing she hates to do the most, or something that she’s required to do in her work. Why do we need Allison?” What would you say to someone who said that to you?

Allison:

Well, that’s about, gosh, maybe 1/8th of my job. There’s context, and there’s certain human interaction that occurs when I’m speaking with my clients, looking to craft the most bespoke custom engagement for their very specific needs. AI can’t read body language, for example. There are times in client calls where I might say something like, “There was a long pause before you answered that. Is there something else happening? Is there anything else you want to tell me?” Sure enough, there’s more there. So, that’s one very small example. The ISG that I referred to is a schedule; that’s what it is. There’s no emotion needed, no empathy, no happiness in an ISG. It’s simply a schedule.

So, if you think of that from a bigger picture, and I’m being specific to my job, but if we could get AI to write our ISGs, do you know how many more clients I could take on? Because the time that it takes me to write an ISG, it’s tremendous. It’s too long. So, from that perspective, if I’m looking at it from a bigger picture, that helps my organization, because then that frees up more time for somebody like you and me and the rest of our colleagues to take on more clients.

Ren:

Yes. A lot of what you said, I absolutely agree with. I was talking with one of our amazing colleagues, shout out to Liora Gross. We went to get coffee out in the land here, and we were having this interesting conversation about some of this stuff too. I was about to tell her, well, wait till AI can paint a picture, and I barely said it before I realized, no, computers are painting now, but they’re painting with prompts. I have yet to see them create on their own; that’s when Skynet happens, when they become self-realized and start creating their own existence. But currently, when I think about its abilities, like you say, there’s some art in the spaces in between the work that we do. Often we talk about this in the context of, and you said it, EQ, emotional intelligence. We do an exercise with professionals where we ask them about their most effective and least effective bosses, and then how they would categorize those skills.

Bosses, if you’re listening, this might be important to log. There are usually 3 buckets people sort these into: IQ, are you intellectually capable? Technical skills, do you have the know-how of the job? And EQ, are you emotionally intelligent? Typically, Allison, maybe you’ve seen this too, most leaders report that the defining skills of the very best and very worst leaders they’ve ever had fall into that EQ bucket. I often tell leaders, it’s like, what gets you in the door, and what keeps you in the door? Your history and your experience and some of your tech skills will get you there. But then when you start operating, people are like, “Hey, Allison. What do you think about Ren?” “Well, as you can see on his resume, he’s really good with Excel spreadsheets.” I don’t think you’ve ever said that to anyone. Either you like him, or you don’t. So, there is that art in those spaces that I definitely empathize with, and that resonates with me.

Allison:

Yeah. I will further validate your point by saying that not only CCL’s research, but Harvard Business Review as well as McKinsey, have found that EQ is more valuable in the workplace than IQ. So, it raises the question for me: how important are those technical skills if we’re moving into a new era of the workplace? It’s probably too soon to tell, but I suspect a shift will come in terms of what we look for in, quote, unquote, “leaders” for a workplace.

Ren:

Some of those technical skills may very well be the management and use of new technical advances. Now, I saw this post on LinkedIn, and I couldn’t agree with it more, and I’m sorry, person, I can’t remember who you are, and I promise I’m thinking about looking it up! But something he said is, we don’t have to be afraid: AI is not going to replace you, but your inability to use AI will get you replaced. Someone who knows how to harness all that ChatGPT can do, to use your example, to automate the parts of the job that we don’t enjoy, that are really monotonous or just perfunctory, like schedule-building, might be a step above peers or competitors who don’t know how to use that tool as well. So, I think there might be a fourth bucket, which is the ability to weave all those things together. Again, that goes back to the human’s role in the use of this tool. But what happens, Allison, when we start teaching it to read emotion? What happens to us then?

Allison:

There’s too much nuance there. That’s my opinion. Again, you take that for what it’s worth. I believe I am an expert in the area of human behavior. I am not an expert in AI. Human beings are way too complicated for a computer to do that. That’s my opinion. Now, we can rerecord in 5 years and you can correct me when your co-host is a robot and not me.

Ren:

Yeah. Firstly, I just want to say to my robot overlords: Allison’s opinions are not my opinions. I think you’re shiny and great. Well, I think it’s interesting when you say that human behavior is nuanced. How did you become an expert in human behavior?

Allison:

For one, you could credit my master’s degree in psychology. You could also attribute it to many, many life experiences. So, education’s one of them, but it’s not everything.

Ren:

Your experiences and the way you make sense of those experiences.

Allison:

Yeah.

Ren:

You mentioned one of the red flags being AI’s ability to track, or maybe reverse engineer, these things, or to do some self-learning, like you said. Amazon’s issue was that it might reflect confidential information because the prompts are used as training data, and that’s 100% the case. I was using ChatGPT’s Jan 30 version to write the intro, and the way it keeps updating and learning is through follow-on prompts. Next to each response, there’s a feedback option. You can say whether you liked it or not, and either way, you’re asked, what was the correct answer? Now, that can very quickly turn into a Wikipedia situation, where you have less capable people writing in inappropriate and incorrect answers, which is something ChatGPT gets dinged for.

I guess it can make up stories or make up facts. But I mean, hey, it sounds like it’s apt to be president. It’s hard to really define some of these nuances. Then I come back to this question of what happens when we teach these things more and expand their self-learning capabilities, especially in terms of the impact on leadership and leadership development. There’s a real question that people like you and I have to answer around, what’s the impact of true democratization of leadership? I could go on ChatGPT right now and ask, how do I do my performance management and feedback conversations?

Allison:

I would argue that there’s nuance there. Depends who you’re talking to, what’s going on with them. It could give you a frame, it could give you the most up-to-date framing, and that’s very helpful. I’m not denying that. However, it’s complicated. Let me give you an example.

Ren:

Yeah.

Allison:

So, I was talking to a friend of mine; she doesn’t work for CCL, she’s an HR consultant. For example, let’s say you’re a manager, Ren, and somebody verbally discloses to you, just casually, that they have diagnosed anxiety. You’re required to provide some sort of reasonable accommodation. Now, that changes how you would approach your performance evaluations, or performance conversations. Whereas the next person you manage hasn’t disclosed that, but you might suspect they have anxiety. There’s too much context to have a simple black-and-white answer. Again, you might even say to ChatGPT, can you tell me what reasonable accommodation is for somebody with anxiety? It might say, send them to HR. Well, what if you don’t have an HR department? Some organizations don’t, so it’s too complex.

Ren:

Yeah. Well, I mean, I can’t help it; I’ve got a big smile on my face. Because what you’d have the opportunity to do is exactly those things. You’d be able to ask it those follow-on questions. So, look, I’m not trying to knock us out of a job. I think you and I agree that I’m still more capable than ChatGPT, well, in some regards; in some regards I’m not. I think in regard to sense-making and execution, and you highlighted it, it’ll give you a laundry list of things to do. You could even ask follow-on questions about what happens: I think this person might have anxiety, what would I do then? ChatGPT will give you an answer, and likely an answer that you’d say, okay, that makes sense.

Allison:

But if your company, if CCL, has a certain policy, and I’m getting very specific here, but it highlights my point: there are legalities that are broadly known, and there are policies within organizations that are private. ChatGPT is not going to know those.

Ren:

Well, yeah. Okay. So, you’re saying, if the computer doesn’t know my company’s policy, it can’t give me a precise answer?

Allison:

Right.

Ren:

Yeah. Yeah. Again, I think you and I agree on the spaces in between. There are limitations to binary thinking. We know that in a human context, and we see it in a mechanical context, where these tools are limited by the functionality they have, and, again, limited by the person asking the questions. I would even further say that someone would only be able to scratch the surface, and might even do more harm than good, because they might not even ask the right question.

Allison:

Right.

Ren:

They might say, this person clearly has traumatic stress anxiety, and they type it in because they think they know, because people think they know truths about everyone in the workplace. So, people are like, “Oh, I know Ren had this experience in his life.” Then the computer would be like, cool, heard, here’s what you should do. Then I’m armed with the wrong information. That continues to be, I think, the current differentiator between where these things are going and where we are now. I mean, in leadership development, sure, I could ask ChatGPT to write me an email on new policies. I likely will, if I have to do policy emails. But how to get people to buy into those policies, how to have them talk about those policies, how to adjust and amend those policies in the real arc of an organization? I guess you could have the browser up and hope that traffic doesn’t get busy before your answers stop coming.

Allison:

Yeah. I mean, for sure. There are 2 things I want to say. One is to touch on something you said maybe 10 minutes ago, that you’re not concerned about legalities. That’s intriguing to me, because this is a tool that has endless capabilities, and we don’t know where that ends. So, mark my words, I think there will be a tremendous amount of regulation that has to happen, because there will be some serious legal issues. We can revisit that when the time comes.

Ren:

Yeah.

Allison:

Also, organizational researchers have been very vocal about the shifts needed in leadership in light of recent disruption and constant change. I mean, change is constant no matter what, but if we go back from 2020 to where we are now, everything that I’ve read, everything that CCL has researched and found, and all the organizational research that I have found, has been so vocal that we need to rethink effective leadership, move toward more of a human-centered approach, and embrace skills like empathy, humility, adaptability, vision, agility, and really deep engagement with peers and those we lead and manage.

AI, again, I think it’s a balance. AI is great in a lot of ways, and it’s only going to highlight the need for these qualities even more. Part of the essence of leadership is to help others achieve a common goal or shared purpose. We talk about that a lot. However, the interpersonal qualities and nuances are needed to do this.

We talk about this all the time. Research from CCL, and research from a broad spectrum of organizational institutions, has found that the traits I already mentioned are twice as important as IQ when it comes to effective leadership. So, with that said, leaders as taskmasters and authority figures will likely become less needed. Which is great. Qualities like empathy, humility, vision, agility, and engagement will be necessities moving forward. So, I think it’s just going to be a shift, for better or for worse. But what’s interesting to me is what leaders will do to navigate that change and get people on board. You said something very important, which is to stay on top of these trends, to at least be able to speak about them, and to know what their function is and what they do.

Ren:

Critical. I couldn’t agree more with the importance that you’re highlighting around the human component of the world and the work we do. Even though these bots exist, 99% of organizations, of business, is still run by people. So, those spaces that we need to tap into, I think, are going to make the difference between good and great. I do see a place for these things to help someone get good.

Part of the ethos of why we have this podcast is to give access to people who would never normally get access to CCL. I think someone who’s thoughtfully using ChatGPT or a tool like it could say, “Hey, how can I be a more emotionally intelligent leader?” and likely get some really great things they might not otherwise have access to. Now, granted, we’re talking about a privileged world where people have access to the internet and a computer, which we know is actually not the case for a whole bunch of Americans. So, it already stratifies this kind of new education.

Something about the legalese, and maybe I misframed it or misspoke. There are a couple of reasons why I wasn’t really concerned about the legal implications. One is that the organizations that have written these privacy policies have already passed muster. In fact, you, and likely our listeners, go through this every time. Now you see a banner on every new webpage you go to: hey, we use cookies to track your data. Everyone knows they sell that data, because we’re the product, and what do most people do? They hit Accept all cookies, because the banner is obnoxious and it’s in the way.

So, I was just saying that I don’t think these things are going to fall into any legal traps, because they’ve likely already gone through the hurdles to please the interests and the politicians navigating this new language around, hey, you’ve got to make sure people know you’re selling their information. ChatGPT tells you up front: every bit of conversation you engage in with this computer is used to further its intelligence. Welcome to the club. It’s like, come join and build, and never mind that you’re going to build an AI system on my IP or something like that. So, I think it’s more a conversation about, and you said it too, leaders who are navigating and using these things for their benefit. I’m worried about nefarious people using these things to work faster than people who are smart, willing, and capable, but don’t have access.

Allison:

Two things you just made me think of. One is, well, before we started this podcast, I told Ren that I was reading the most up-to-date news stories on this right before we started recording. There’s so much happening that by the time this podcast comes out, there’ll probably be a lot more information around this. So, that’s the first thing. I read an article from a law firm that said clients had been asking them to reduce their rates, because they can find legal answers on ChatGPT. I know, again, this is a very specific industry, a very specific example. However, I wonder about the economic shifts that will happen as well. Economists have long predicted that AI would produce some shifts in the economy; that’s not a surprise. But you alluded to this earlier, Ren: does this mean that some employees will be displaced? It’s probably too soon to tell, of course.

However, it’s definitely going to cause shifts in the way workplace cultures manifest, as well as the behaviors and skills a leader needs to accommodate this economic and workplace change. I think that’s probably something a leader can at least focus on now: anticipating those changes and listening to the economists and the organizational development and leadership development spaces that are very vocal about this, and have been. The other thing I was going to say is, you mentioned something about access too. I was reading another article on the benefits, finding that some people who don’t have access to mental health care, for example, were finding the ability to get a diagnosis, unofficial of course, of what they’re going through, so that they can at least better understand how to treat some of the symptoms they’re experiencing. So, again, it’s like anything. There are pros and cons, and it’s going to be all about how we choose to move forward with it.

Ren:

Well, as we head to the back half of our convo here in the end, I do want to think about how we choose to use this and what’s next. Because some of the things you highlighted there are really interesting. I work with a group of lawyers and have for quite a few years, and this idea of ChatGPT legal advice is something they’ve had to deal with. I think it’s actually currently legal and permissible in certain municipalities, and even states, for Google, for these web tools, to practice law and represent people. It’s interesting, again, when we think about how people are utilizing them, because what a powerful story about someone who doesn’t have access to information that can help them make sense of their experiences. Then again, we know the WebMD phenomenon.

Allison:

Yes.

Ren:

Where you type in, Oh man, my toe hurts, and by the time you’re done, you have Stage 4 cancer of some kind, and you’re like, Holy crap, my life is over. So, I can only imagine a world where someone is going down the wrong rabbit hole with one of these things. So, I guess then it’s, for me, as we pull back and I think about, what does this mean for you as leaders, what does this mean for the world next? Just responsible use. I think to ignore that these things are coming, is irresponsible. To think that these things are the answer, is probably as irresponsible. I think you highlighted, maybe some people might get their jobs changed as we’ve been talking about.

I think it’s only those people who are unaware of the intersections of work and AI that are going to be … or at the forefront of it. I mean, those things are real. There are certain industries that are really going to be hit by AI generally. But currently, for elevated sense-making, for connection between big and small ideas, I think I’d love to work with ChatGPT. We don’t have to be enemies. I think it could make me better. I’m not scared of them yet, but I do think that with our powers combined, we could be an effective team.

Allison:

I couldn’t agree more. It makes me think of polarities, and at CCL, we talk about polarities. The best example I could give is, it’s both/and thinking: you need to inhale and you also need to exhale. So, you need the human-centered leadership. You also need the technology to back it up. So, will this radically change “leadership in the age of AI,” as I’m air quoting here? Maybe, but it’s not going to be any different than the shifts that occur year over year. It will be different in how it manifests, of course. But things change all the time. Leaders need to be adaptable and agile constantly; that’s nothing new. However, I am curious to see what happens to the, air quoting again, “hard skills” that organizations tend to focus on or prioritize, especially in a hiring process. As companies start to adopt these types of technologies, workplaces are naturally going to endure a major culture change again.

Leaders will be the ones who create and sustain that change. So, AI’s not going to do that. It’ll do it to some extent, but leaders will be the ones to sustain it. I’ll be curious to see how we value leaders moving forward, and what we value. But I would say, if you are a leader or a manager, think about how you balance your technical skills and your soft skills. I don’t love that phrase, by the way, soft skills, but what I mean are your human-centered skillsets, so to speak.

I’ll share one more thing that, Ren, if you haven’t seen this, you need to as well. Over the weekend, Roger’s son, his youngest son, showed us a video of Kendrick Lamar, who’s a musician. He has a series of what are called deepfake videos, where his face morphs into different people. It is so seamless, it’s incredible. It raises that question again: Wow, that’s really beautiful artistry. On the flip side, Where could this go wrong? Again, we’re not necessarily talking about deepfakes here, but we are talking about AI, which can do some really tremendous things. So, I do think that understanding the benefits and starting to get curious about where some of the hiccups might occur will be the best thing that you can do as a leader.

Ren:

Yeah. I think maybe the biggest light bulb for me that just turned on, Allison, as you were talking, is the, I think, core concept behind polarities. Of course, big shout out to our Polarity Partners. What a great concept. They always say, maybe not everything’s a problem to solve. You remind me that this is absolutely a challenge to manage. There’s no fix coming for, hey, are we going to cross the finish line for leadership in AI? I don’t know. I think it’ll be like breathing. It’ll be a little bit of this, a little bit of that, and the ebb and flow that goes on in between. Then who knows? Maybe I’ll have it write a new intro for me for the deepfake episode.

Allison:

Yeah, I hope so. I hope so. To your point, if you are a leader, think about what would happen if you err too heavily on the technological side. You will naturally need to pull in some of that human-centered behavior if you overfocus on AI and technology, and vice versa: you can’t overemphasize the human component, because the world is moving toward AI. So, you need to have both, like Ren just beautifully mentioned.

So, with that said, Ren, what’s one piece of advice you would give somebody today right now, knowing that today is February 13th? Again, things could change by the time we hang up from recording this.

Ren:

Don’t fear the artificial intelligence. Do what you can to embrace it. Maybe have ChatGPT be one of your friends that you run your ideas by. I think that’d be a really interesting exercise. You’re like, Allison, I got this problem. What do you think? Then you can still spill the tea with me, and then I could hit up ChatGPT, and then I could be like my cold robotic friend, who doesn’t really have tea to share, but has the internet at its fingertips. So, just embrace it, maybe add it into your rotation, but don’t overindex on it. I think that’s what someone could do today.

Allison:

Great. I would say just be adaptable. To Ren’s point, learn as much as you can about it, so you’re able to speak to it and understand what it does do, and what it’s not going to do, for your business. Who knows what will come up in our conversation the next time we record?

Ren:

I know.

Allison:

I look forward to it. As always, you can find our show notes and links to all of our podcasts at ccl.org. As always, a big thank you to Emily and Ryan who work behind the scenes to make our podcast happen.

Ren:

Not robots, they’re people.

Allison:

Exactly. They’re humans. Ren, a great conversation as always.

Ren:

Yeah, thanks.

Allison:

Don’t forget to connect with us on LinkedIn, folks. Let us know what you’d like us to talk about next, and we’ll look forward to tuning in next time.

Ren:

That’s right. See you next time, folks.

Allison:

Thanks everyone.

Ren:

Follow us on TikTok.
