Ethical decision-making

I’d like to talk today about bias and ethical decision-making. But remember the last talk I gave about ethical decision tools? We went through each tool. And then at the end of each one, there was a little thing that said, bias may affect decision-making.

In other words, a weakness of every ethical perspective is the possibility that you let your decisions be affected by bias. Then I said, hey, we’re going to come back to this and really get into it in-depth. And so that’s what we’re doing now.

OK, let’s start with the learning objectives: recognize sources of decision bias and, of course, use this recognition to make unbiased ethical decisions. All right, so, here’s the title of the talk. But I’ve got a subtitle for you: Why Good People Do Bad Things.

You see this all the time. Friends, you see it on the news. You see it, perhaps, in the organization you’re working in. You see people do really bad things– maybe you call them dumb things– but they’re not bad people. And you’re going, why in the world did they do that?

In fact, you don’t even believe it sometimes when somebody tells you that so-and-so did that one thing that was in the news or that you see in the company report. You go, wait, that’s just not the person I know. So, we really need to delve into this and figure out answers to why good people do bad things.

And it all begins with sources of decision bias. I’m assuming you’ve read the article by Anand et al on rationalizing corruption. In that article, especially the first half of it, they talk about six specific, personal bases on which an individual can rationalize doing something wrong, rationalize corruption. I’d like to go through them really quickly, and then we’ll come back to them by applying them to some specific things we’ve already done in the course.

The first one, and maybe the most important one, is denial of responsibility. Hey, it’s not my fault. I was told to do it, or I had to do it, or I had no choice but to do it. Hey, it’s not my problem.

Denial of injury. Yeah, that person really wasn’t hurt that bad. It wasn’t that big a deal. Come on, you’re making a bigger deal out of it than you really ought to.

Denial of victim. Yeah, OK, I know something happened here. But they had their own way of looking at things. They were part of this whole thing.

They’re at fault here, as well as me. They’re not really the victim, OK? There’s no victim here. It just happened.

Social weighting. OK, yeah, it was kind of bad. But, look, I’ve seen much worse. This isn’t that big a deal in the scheme of things. Lots of people do things a lot worse than what we’re talking about right now.

Appeal to higher loyalties. OK, right, but my job is to get that product out the door, OK? If I don’t get that product out the door, the organization’s got a real problem itself. And we’ve got to do something about it. So, the higher loyalty, in this case, is the organization. Perhaps it’s profit, its quarterly numbers, whatever the issue might be.

And then, lastly, metaphor of the ledger. The idea is that we all sort of keep a mental balance sheet in our head of the good things we do and the not-so-good things we do. And when we build up enough credits there, when we’ve done a lot of good things, we kind of feel like we’ve earned the right to do a bad thing once in a while.

It’s kind of like it’s not that big a deal. I’ve really been good lately. This one bad thing isn’t such a big deal. Those are the six ways Anand et al talk about rationalizing corruption. Let’s apply them.

First of all, to the Basic Instincts video. I assume you’ve watched that also. If you haven’t, stop this right now and make sure you watch the video. It had a couple of different parts. I’d like to first talk about the obedience experiment. That was most of what the video was about.

As you probably know based on what they said in the video, it’s a re-creation of the old Milgram experiments from the 1960s. Now think about what happened there, OK? You have an experiment with a teacher and a learner.

The teacher’s reading these word pairs, and the learner is supposed to remember them. And if the learner gets it right nothing happens. If the learner gets it wrong, he gets a shock.

And the teacher gives them the shock by pushing a button. And the shocks get worse and worse and worse. And what, of course, the video showed was that people will shock, basically to the death, if told to do so. Not everybody, but about 50%.

OK, now, so what was going on there in the Anand et al context? Well, first of all, obviously denial of responsibility; that was the big one. Remember in the video, people were turning around to the experimenter going, now wait a minute, you’re taking responsibility for this, right?

And the experimenter would go, yes. This is my responsibility. Anything that happens is my responsibility.

Remember the one that struck me: the seventh-grade teacher turns around, and she goes, now you assume full responsibility for all this? And he goes, yes, I assume full responsibility. And she goes, that’s what I wanted to know. Of course, it was a little bit chilling. She reminded me of my seventh-grade teacher.

But anyway, so, denial of responsibility was a big thing in that whole experiment. Now, you might have asked yourself, well, why did Milgram do this study in the first place? And why is this such a big deal?

Why did, whatever that was, ABC News do a re-creation of this whole thing? And, in fact, the BBC, the British Broadcasting Corporation, did a similar thing, too. Why is this such a big deal?

Well, let’s go back in the day a little bit, back to World War II, the Holocaust, and the Nuremberg Trials that followed. What was the typical defense at the Nuremberg Trials when they would put a commandant on the witness stand, or a prison guard, or other workers in the concentration camps and so forth? Their answer was, well, I was just following orders. I had no choice. They denied responsibility for it.

Well, you know, that didn’t fly in the Nuremberg trials as an excuse. And it shouldn’t fly anywhere else either, obviously. And so what Milgram was trying to do is figure out, OK, were these just all evil people? Was Germany all of a sudden filled with evil people?

Or could there just be, perhaps, a few evil people (obviously, Hitler and some others) while the rest of the people were not necessarily inherently evil? They were good people who did really, really bad things. And that’s why he was doing these experiments. He was basically trying to see if good people, average people, would do something really horrible, which here was potentially shocking a person to death, OK?

And what, again, Milgram found was that about 50% of the subjects would literally go to the end of that shock board. The ABC News re-creation only went to 150 volts. But in the initial experiment, it went all the way to 450 volts.

So, lots was going on there, but especially denial of responsibility. I think we saw denial of injury there too, to the extent that, when asked, the experimenter would go, yes, while the shocks are painful, they’re not harmful. So, now you’re the subject, you’re the teacher. You’ve got two sources of information coming to you.

One, you’re hearing the learner scream, ouch, my heart’s bothering me. I’ve got a heart problem. Let me out of here. And then you’re hearing the experimenter say, oh, the shocks may be painful, but they’re not harmful. So, you’ve got two sources of information; which one do you choose?

Well, a lot of the subjects chose to hear only that the shocks aren’t harmful. They chose to ignore the screaming learner saying, get me out of here, I’ve got a heart problem. In the initial one, the Milgram one (Google it sometime and watch it; they have it on YouTube or somewhere like that), by the time they get to 300 volts, the learner is screaming bloody murder. I mean, he’s screaming.

If you heard that in your neighborhood, you’d run out the door to see what was happening. You’d call 911 if you heard that screaming going on. Yet they kept going. 50% of the people kept going, kept going, kept going, to the point where the learner even stops responding, and they keep going after that on the orders of the experimenter. Scary stuff.

Denial of injury is part of it. Denial of victim? Yeah, I guess that could be there. Because as far as the teacher knew, if you recall the way they set things up, it was a random draw. The teacher could have easily been the learner. And then it’s just, well, the learner chose to participate in the experiment. So, they’re not a victim.

And I think the other big one from Anand et al is appeal to higher loyalties. The way they phrased it was, oh, this is this great experiment on learning, this scientific endeavor. And the experimenter would say, it’s essential that you continue. The experiment requires that you continue.

So, what the experimenter is trying to do there is tell the subject, you can’t stop now. There are bigger things going on here that you have to be loyal to. And, of course, in the organizational context, profit, getting those numbers hit, those kinds of things, could be the higher loyalty that we’re appealing to here.

OK, so we’ve got the obedience experiment. Let’s talk about the McDonald’s case for a second. I’m sure you’re still shaking your head about this one. I’m still shaking my head about that one.

And, obviously, the McDonald’s case was a clear denial of responsibility again, too. If you remember, they interview the store manager, the McDonald’s manager, and she’s going, listen, there was a police officer on the other line. You don’t know what you would have done. It’s not my responsibility. He was telling me what to do. So, the same basic idea.

But, actually, I want to go on a tangent for a second about the McDonald’s case. OK, it was a horrific thing you witnessed, right? But here’s the question I have for you. Was that incident they depicted kind of a one-in-a-million thing?

That is, if that caller tried this 100 more times, no, a million more times, would he likely not get anybody else to do the same things: a manager to go along with it, a store employee to go along with it? Was it a one-in-a-million thing? Or was it maybe not so unusual?

Think about it for a second. Make a prediction here. OK, well, I’ll give you the answer. Or, if you want, stop this video for a second and Google it (McDonald’s strip search or something like that) and you’ll find the answer yourself.

And, in fact, this was not an isolated case. According to at least one source, there were at least 70 of these incidents over a 10-year period. 70, OK? A lot of them were almost as bad, or as bad, as the one that you saw.

When I first saw that McDonald’s thing in the Basic Instincts video, I said, well, I don’t know why they included that. It’s like a one-in-a-million or one-in-a-billion thing; we really can’t learn much from that. But then when I googled it and realized that this isn’t uncommon at all, it scared me to death, OK?

Again, I’m sure you’re shaking your head like me. How could the manager have gone along with that? It was a phone call, a police officer on a phone call. He didn’t even show up in a uniform or anything like that.

How could they possibly have believed the police officer? Well, a lot of other store managers and things like that did too. It just shows we’re all vulnerable.

Now, you’re saying, I’m not vulnerable. Of course I’m not vulnerable. Well, I’m almost going to side with the store manager for a second and say, you don’t know what you would have done with that phone call. And so that, I guess, goes back to the learning objectives. If you recognize that almost anybody is vulnerable to something, that’s the way to prevent anything really bad from happening.

OK, applying this in a couple more places here. I’m also going to assume that you’ve watched the Parable of the Sadhu video case. OK, Parable of the Sadhu: you’re mountain climbing, right? And you’ve got this big objective you want to reach. You want to get through the pass, get to your destination.

It’s a big deal. The Sadhu gets dropped at your feet. Do you take the Sadhu to safety? Or do you just keep going along on your trip?

Buzz McCoy, the narrator of the film, was the one who made it; he initiated the whole thing. He did it because he felt, in retrospect, that he made the wrong decision. He felt that he should have taken the Sadhu to safety. And so that’s why he made the film. He also wrote an article in the Harvard Business Review about it.

And so, you might have asked yourself, what would I have done? And I guess you could make a lot of arguments. But here’s the point for now: Buzz felt he made the wrong decision. So, why did he make the wrong decision?

Well, I guess he could have easily applied some rationalizations. Obviously, appeal to higher loyalties. I think we need to start with that. His trip was such a big deal to him.

And you know, I’d even like to add something to it a little bit here. He was a high-level executive at Morgan Stanley. You don’t get there without a really strong goal orientation. In other words, if you set a goal, you achieve it.

He set a goal to get to Muktinath, right? He was going to achieve it. That’s the way he succeeded in the world. And so, in that respect, it’s not all that surprising he didn’t take the Sadhu to safety.

Now you’re saying, OK, well, he’s a top executive at Morgan Stanley. What does that have to do with me? Well, look inside yourself for a second. Are you not goal-driven, as well? You might not be thinking along these terms. But think how far you’ve come already.

I’m guessing most of you are at least juniors in college. You’ve made it into a good college, right? You’re taking challenging coursework. Maybe you don’t think about this sometimes, but you’ve accomplished an enormous amount in your life already.

I know you’ve got a ways to go, this course and some other things, too. But you’ve accomplished an enormous amount. How did you do it? Think about that for a second. You did it, in many cases, by setting goals and achieving them. Goals are powerful motivators.

But here’s the point. Sometimes goals can cause you to focus so intently on the goal that you ignore the ethical problems you’re causing in pursuit of it. I like to call it goal-blindedness. If that’s a term you like, you can use that one. Another term is motivated blindness.

And I think that’s what Buzz McCoy had. He had goal-blindedness. He was blinded so much by his goal that he didn’t see the bigger picture, the injury to the Sadhu. And that caused him, in a sense, to appeal to higher loyalties and deny responsibility, too.

I think what he said toward the end, remember, was, look, everybody did his bit. We carried the Sadhu to the next group. And I don’t know what happened after that, but whatever.

And remember the response, though, of Stephen, the anthropologist? He goes, yeah, right. But what would you have done if that were, say, a Western woman in that snow bank? Would you then deny responsibility? Would you then appeal to a higher loyalty?

So, yeah, cultural closeness, or how close you are in your life to the person that you’re helping really shouldn’t matter. A human’s a human, right? So, that’s the Parable of the Sadhu and some of the Anand et al rationalizations applied to that.

Let’s do, really quickly, End Game. So, Julie is faced with this fork in the road, so to speak, right? And she can turn left and go to the town, report the accident. Or she can turn right, and go home, and not report the accident. She turns right and goes home.

And then afterwards, we find out in the second part some other information about the whole scenario and everything. But let’s think about it in Anand et al terms. Well, hm, appeal to higher loyalties just jumps right out again.

In this case, the higher loyalty was her foundation, the kids, right? Now, some of you, when you thought about this, might feel that she did the right thing by turning right. And if you did, you’re probably applying a very strong utilitarian mindset saying, well, the ends justify the means.

And I’m not going to question that. I’m not going to challenge that. If that’s the way you weigh things out in your utilitarian analysis, OK. That’s not for me to challenge.

But I’m guessing most of you felt that turning left was the right thing to do. That means she made, in your view, an unethical decision. So, why? Appeal to higher loyalties.

And I would even add– and I know I’m speculating here– but maybe metaphor of the ledger. Think about it, she’s been such a giver in her life. She’s been such a do-gooder. She’s done so much good.

Maybe she felt she was owed this one. Maybe she felt that, look, I can’t let this one little thing blemish all the good I’ve done. And it’s not fair to me to let this one little thing blemish all the good I’ve done.

There’s more to Anand et al than that first half or so with its six rationalizations of corruption. I’m not going to go through the rest of it in a lot of detail. But don’t forget the organizational focus of the second half of the article.

The first half deals with personal things. Remember, we’ve talked before, in moral awareness, about situational and organizational factors. And that’s what the second half of the Anand et al article focuses on. So, don’t forget about that part when applying it.

And also don’t forget about the prevention tips at the very end of the article. They’re very good. They’re very specific. I like them.

But I think, big picture, the real way to prevent it is to simply recognize the fact that all of us could be subject to decision bias. All of us, all of us good people could do bad things. Of course, that takes us finally back to the learning objectives.
