WEBVTT
1
00:00:02.470 --> 00:00:03.330
How can governance
2
00:00:03.330 --> 00:00:06.023
of artificial intelligence help organizations?
3
00:00:07.080 --> 00:00:08.870
The word governance can come with a lot of baggage
4
00:00:08.870 --> 00:00:10.940
and some negative connotations,
5
00:00:10.940 --> 00:00:13.860
but governance can enable organizations too.
6
00:00:13.860 --> 00:00:14.860
The question is how?
7
00:00:15.720 --> 00:00:17.210
We'll close out the season with a discussion
8
00:00:17.210 --> 00:00:19.100
with Kay Firth-Butterfield.
9
00:00:19.100 --> 00:00:20.650
She's the head of artificial intelligence
10
00:00:20.650 --> 00:00:23.100
and machine learning for the executive committee
11
00:00:23.100 --> 00:00:24.500
of the World Economic Forum.
12
00:00:25.340 --> 00:00:28.050
With Kay, we'll learn not only about her specific background
13
00:00:28.050 --> 00:00:30.750
in the legal profession, but she'll also help us think
14
00:00:30.750 --> 00:00:32.800
about what we've learned overall this season.
15
00:00:34.000 --> 00:00:35.800
Welcome to "Me, Myself, and AI,"
16
00:00:35.800 --> 00:00:38.810
a podcast on artificial intelligence and business.
17
00:00:38.810 --> 00:00:41.970
Each week, we introduce you to someone innovating with AI.
18
00:00:41.970 --> 00:00:44.700
I'm Sam Ransbotham, professor of information systems
19
00:00:44.700 --> 00:00:46.220
at Boston College.
20
00:00:46.220 --> 00:00:48.080
I'm also the guest editor for the AI
21
00:00:48.080 --> 00:00:50.350
and Business Strategy Big Idea Program
22
00:00:50.350 --> 00:00:52.530
at MIT Sloan Management Review.
23
00:00:52.530 --> 00:00:56.100
And I'm Shervin Khodabandeh, senior partner with BCG,
24
00:00:56.100 --> 00:00:59.910
and I co-lead BCG's AI practice in North America.
25
00:00:59.910 --> 00:01:04.910
And together, BCG and MIT SMR have been researching AI
26
00:01:04.920 --> 00:01:08.620
for four years, interviewing hundreds of practitioners,
27
00:01:08.620 --> 00:01:12.390
and surveying thousands of companies on what it takes
28
00:01:12.390 --> 00:01:16.570
to build and deploy and scale AI capabilities
29
00:01:16.570 --> 00:01:19.533
and really transform the way organizations operate.
30
00:01:21.140 --> 00:01:24.380
Well first, Kay, let me just officially say we're thrilled
31
00:01:24.380 --> 00:01:25.550
to have you talk with us today.
32
00:01:25.550 --> 00:01:27.510
Thanks for taking the time, welcome.
33
00:01:27.510 --> 00:01:28.560
Thank you.
34
00:01:28.560 --> 00:01:31.660
Of course. So Kay, you've got a fascinating job, or actually
35
00:01:31.660 --> 00:01:36.360
really jobs, since you've got so many things going on.
36
00:01:36.360 --> 00:01:38.500
So for our listeners, can you introduce yourself
37
00:01:38.500 --> 00:01:40.830
and describe your current roles?
38
00:01:40.830 --> 00:01:41.950
Yes, certainly.
39
00:01:41.950 --> 00:01:45.610
I'm Kay Firth-Butterfield, and I am head of AI
40
00:01:45.610 --> 00:01:48.970
and machine learning at the World Economic Forum.
41
00:01:48.970 --> 00:01:53.660
So what that essentially means is that we work
42
00:01:53.660 --> 00:01:58.090
with multi-stakeholders, so companies, academics,
43
00:01:58.090 --> 00:02:02.310
governments, international organizations, and civil society
44
00:02:02.310 --> 00:02:07.310
to really think through the governance of AI.
45
00:02:07.640 --> 00:02:12.000
So when I say governance, I say it very much with a small g,
46
00:02:12.000 --> 00:02:14.390
we're thinking about everything from norms
47
00:02:14.390 --> 00:02:19.390
through to regulation, but AI, we feel, is less susceptible
48
00:02:20.040 --> 00:02:21.840
to regulation.
49
00:02:21.840 --> 00:02:24.260
Can you tell us how you got there?
50
00:02:24.260 --> 00:02:26.510
Give us a little bit of background about your career to date
51
00:02:26.510 --> 00:02:28.890
and how you ended up in this role?
52
00:02:28.890 --> 00:02:32.280
I am by background a human rights lawyer.
53
00:02:32.280 --> 00:02:36.090
I am a barrister, that's the type of trial lawyer
54
00:02:36.090 --> 00:02:37.720
that wears the wig and gown.
55
00:02:37.720 --> 00:02:42.720
And I got to a point in my career where I was being
56
00:02:43.410 --> 00:02:45.790
considered for a judicial appointment.
57
00:02:45.790 --> 00:02:49.140
In the UK they kindly sort of try out whether
58
00:02:49.140 --> 00:02:52.040
you want to be a judge and whether they think you're really
59
00:02:52.040 --> 00:02:53.100
good at it.
60
00:02:53.100 --> 00:02:56.680
I don't know what their view was, but my view was that
61
00:02:56.680 --> 00:03:00.450
it wasn't the culmination of a career in the law
62
00:03:00.450 --> 00:03:02.560
that I really wanted.
63
00:03:02.560 --> 00:03:07.560
And I had been very interested in the impact of technology
64
00:03:07.810 --> 00:03:10.150
on humans and human rights,
65
00:03:10.150 --> 00:03:14.830
and so it gave me this wonderful opportunity to rethink
66
00:03:14.830 --> 00:03:16.850
where my career would go.
67
00:03:16.850 --> 00:03:20.930
So I was fortunate to be able to come to Austin and teach
68
00:03:20.930 --> 00:03:25.930
AI, law, and international relations, and to pursue my own studies
69
00:03:26.280 --> 00:03:31.280
around law and AI and international relations and AI
70
00:03:31.620 --> 00:03:36.380
and the geopolitical implications of this developing
71
00:03:36.380 --> 00:03:37.790
technology.
72
00:03:37.790 --> 00:03:42.790
And then purely by luck, met a person on a plane
73
00:03:44.610 --> 00:03:47.810
from Heathrow to Austin, it's 10 hours.
74
00:03:47.810 --> 00:03:52.810
He was the chair and CEO of an AI company who was thinking
75
00:03:53.140 --> 00:03:54.820
about AI ethics.
76
00:03:54.820 --> 00:03:59.820
And this was back in 2014 when hardly anybody apart from me
77
00:03:59.930 --> 00:04:03.680
and the dog and some other people were thinking about it.
78
00:04:03.680 --> 00:04:08.680
And so he asked me as we got off the plane if I would like
79
00:04:09.440 --> 00:04:14.160
to be his chief AI ethics officer.
80
00:04:14.160 --> 00:04:17.820
And so that's really how I moved into AI, but obviously
81
00:04:17.820 --> 00:04:22.820
with the social justice, with the ideas of what benefits
82
00:04:24.050 --> 00:04:29.050
AI can bring to society and also cognizant of what we might
83
00:04:29.860 --> 00:04:31.900
have to be worrying about.
84
00:04:31.900 --> 00:04:36.370
And so I have been vice chair of the IEEE's initiative
85
00:04:36.370 --> 00:04:40.610
on Ethically Aligned Design since 2015.
86
00:04:40.610 --> 00:04:44.630
I was part of the Asilomar Conference thinking about
87
00:04:44.630 --> 00:04:49.630
ethical principles for AI back again in 2015.
88
00:04:49.630 --> 00:04:53.730
And so my career ended up with me taking this job
89
00:04:53.730 --> 00:04:57.010
at the forum in 2017.
90
00:04:57.010 --> 00:05:00.400
I say ended up but maybe not, who knows?
91
00:05:00.400 --> 00:05:02.560
Yeah, we won't call it an end just yet.
92
00:05:02.560 --> 00:05:05.470
So what does artificial intelligence mean?
93
00:05:05.470 --> 00:05:09.500
Well, part of the problem and part of its complexity
94
00:05:09.500 --> 00:05:12.800
is that AI means different things to different people.
95
00:05:12.800 --> 00:05:17.800
So AI means one thing to an engineer and another thing
96
00:05:17.870 --> 00:05:21.510
to a person who's using it as a member of the public
97
00:05:21.510 --> 00:05:22.830
or through their phone.
98
00:05:22.830 --> 00:05:25.700
So we're shifting our definition as we go
99
00:05:25.700 --> 00:05:28.220
and we'll continue to as well.
100
00:05:28.220 --> 00:05:30.970
Yeah, there's that old adage that it's not artificial
101
00:05:30.970 --> 00:05:33.600
intelligence once it's done.
102
00:05:33.600 --> 00:05:37.350
And how much of that do you think is education
103
00:05:37.350 --> 00:05:41.710
and is sort of stemming from lack of understanding and lack
104
00:05:41.710 --> 00:05:46.710
of education versus a technical or process complexity
105
00:05:49.660 --> 00:05:53.160
inherent in putting all that governance in place, right?
106
00:05:53.160 --> 00:05:57.200
I mean, I guess part of it is you can't really manage
107
00:05:57.200 --> 00:06:01.410
or govern that which you don't really quite understand.
108
00:06:01.410 --> 00:06:06.410
Is that most of the battle and once everybody understands it
109
00:06:07.020 --> 00:06:10.120
because it's common sense, then they begin to say well,
110
00:06:10.120 --> 00:06:13.010
now how could we govern this like anything else we would
111
00:06:13.010 --> 00:06:15.220
govern because now we understand it?
112
00:06:15.220 --> 00:06:18.400
Yes, well, I think that it's organizational change,
113
00:06:18.400 --> 00:06:22.930
it's education and training for employees, but it's also
114
00:06:22.930 --> 00:06:26.320
thinking very carefully about product design so that
115
00:06:26.320 --> 00:06:29.870
if you are actually developing an algorithmic product,
116
00:06:29.870 --> 00:06:33.610
what's the path of that from the moment that you dream up
117
00:06:33.610 --> 00:06:37.460
the idea to the moment that you release it, either to other
118
00:06:37.460 --> 00:06:42.460
businesses or to customers, and maybe even beyond that.
119
00:06:42.960 --> 00:06:45.310
I couldn't help but pick up on one of the things you said
120
00:06:45.310 --> 00:06:46.910
about governance as being negative.
121
00:06:46.910 --> 00:06:49.180
But one of our studies a few years ago found that healthcare
122
00:06:49.180 --> 00:06:51.990
shared data more than other industries.
123
00:06:51.990 --> 00:06:54.970
And that seems counterintuitive, but when we dug into it
124
00:06:54.970 --> 00:06:57.810
what we found is they knew what they could share.
125
00:06:57.810 --> 00:07:00.760
They had structure about it and so that structure then
126
00:07:00.760 --> 00:07:03.560
enabled them to know what they could do,
127
00:07:03.560 --> 00:07:05.090
know what they couldn't do.
128
00:07:05.090 --> 00:07:07.730
Whereas other places when they talked about data sharing,
129
00:07:07.730 --> 00:07:12.440
they were like, well, we'll have to check with our compliance
130
00:07:12.440 --> 00:07:14.820
department and see what we can do.
131
00:07:14.820 --> 00:07:17.903
And you know, there's much less checking because it's
132
00:07:17.903 --> 00:07:19.660
explicit, and the more explicit we can be,
133
00:07:19.660 --> 00:07:22.560
that's an enabling factor of governance versus
134
00:07:22.560 --> 00:07:25.320
this sort of oppressive factor of governance.
135
00:07:25.320 --> 00:07:27.264
Yes, I think governance has just got itself a bad name
136
00:07:27.264 --> 00:07:31.390
because of, you know, the idea that regulation impedes innovation,
137
00:07:31.390 --> 00:07:34.010
and that's not necessarily so.
138
00:07:34.010 --> 00:07:35.930
I think that at the moment, we're exploring
139
00:07:35.930 --> 00:07:39.983
all these different soft governance ideas.
140
00:07:41.140 --> 00:07:44.910
Largely because to begin with, yes, we will probably see
141
00:07:44.910 --> 00:07:49.750
regulation; the EU has said we will see regulation out of Europe
142
00:07:49.750 --> 00:07:52.470
around things like facial recognition and the use of AI
143
00:07:52.470 --> 00:07:56.090
in human resources, because they're classified as high-risk
144
00:07:56.090 --> 00:08:00.520
cases, but a lot are not necessarily high-risk cases.
145
00:08:00.520 --> 00:08:04.830
What they are are things that businesses want to use,
146
00:08:04.830 --> 00:08:07.000
but they want to use them wisely.
147
00:08:07.000 --> 00:08:10.360
So what we have done is create a lot of toolkits,
148
00:08:10.360 --> 00:08:14.390
for example, and guidelines and workbooks,
149
00:08:14.390 --> 00:08:18.230
so that companies or governments can say, "Oh yes,
150
00:08:18.230 --> 00:08:21.820
this can guide me through this process" of, for example,
151
00:08:21.820 --> 00:08:24.530
procurement of artificial intelligence.
152
00:08:24.530 --> 00:08:27.680
Just to give you an example, we surveyed a number
153
00:08:27.680 --> 00:08:30.760
of our members of boards on their understanding
154
00:08:30.760 --> 00:08:32.610
of artificial intelligence.
155
00:08:32.610 --> 00:08:35.590
They didn't really understand artificial intelligence
156
00:08:35.590 --> 00:08:36.780
terribly well.
157
00:08:36.780 --> 00:08:41.470
And so what we did was develop an online tool for them
158
00:08:41.470 --> 00:08:45.313
to understand artificial intelligence but also then to say,
159
00:08:47.160 --> 00:08:49.850
"Okay, my company is going to be deploying artificial
160
00:08:49.850 --> 00:08:53.690
intelligence, what are my oversight responsibilities?"
161
00:08:53.690 --> 00:08:57.460
and long questionnaires of things that you might want to ask
162
00:08:57.460 --> 00:09:00.160
your board if you're on the audit committee or the risk
163
00:09:00.160 --> 00:09:02.660
committee or you're thinking about strategy.
164
00:09:02.660 --> 00:09:06.010
So really digging into the way that boards should be
165
00:09:06.010 --> 00:09:10.790
thinking across the enterprise about the deployment of AI.
166
00:09:10.790 --> 00:09:13.670
Yeah because I'm guessing most people need that guidance.
167
00:09:13.670 --> 00:09:16.100
Yeah, most people for sure need that guidance,
168
00:09:16.100 --> 00:09:21.100
and I think this is a very well-placed point you're making.
169
00:09:21.160 --> 00:09:26.160
What we don't want to happen is to be so far behind
170
00:09:27.450 --> 00:09:32.450
in understanding and education and governance
171
00:09:32.830 --> 00:09:37.260
of any technology, where then it becomes such a black box
172
00:09:37.260 --> 00:09:41.480
that it takes a huge activation energy for anybody to get there.
173
00:09:41.480 --> 00:09:45.810
And we heard that also from Slawek Kierner from Humana,
174
00:09:45.810 --> 00:09:50.810
we heard that from Arti at H&M: the importance of really
175
00:09:51.630 --> 00:09:55.980
big cross-organizational training, not just for the board
176
00:09:55.980 --> 00:10:00.490
and not just for a handful, but for almost everybody.
177
00:10:00.490 --> 00:10:03.650
You know, I think we heard from Porsche that they actually
178
00:10:03.650 --> 00:10:07.010
did training for their entire technology organization.
179
00:10:07.010 --> 00:10:09.470
This is AI, this is what it could do right,
180
00:10:09.470 --> 00:10:11.880
this is what it could do wrong, this is what you need to learn.
181
00:10:11.880 --> 00:10:16.070
And by the way, this is how it can give you all these
182
00:10:16.070 --> 00:10:21.070
new designs that you as an engineer or a designer can explore
183
00:10:21.950 --> 00:10:25.210
to design the next-generation model.
184
00:10:25.210 --> 00:10:27.080
And this is how it could be your friend.
185
00:10:27.080 --> 00:10:31.490
But I think you're pointing out that it's time for us
186
00:10:31.490 --> 00:10:36.070
to really internalize all of these not as nice-to-haves
187
00:10:36.070 --> 00:10:39.410
but as critical, I would even say almost a first step,
188
00:10:39.410 --> 00:10:41.428
before getting too far ahead.
189
00:10:41.428 --> 00:10:42.960
Yes, absolutely.
190
00:10:42.960 --> 00:10:46.970
And in fact, there's a company in Finland that requires
191
00:10:46.970 --> 00:10:50.610
everybody to learn something about AI even at the very most
192
00:10:50.610 --> 00:10:54.380
basic level, and they have a course for their employees
193
00:10:54.380 --> 00:10:57.640
which is important.
194
00:10:57.640 --> 00:11:02.330
Obviously not everybody can master the math, but you don't
195
00:11:02.330 --> 00:11:04.646
even have to go that far.
196
00:11:04.646 --> 00:11:07.700
Or should. I can't help but build off of your
197
00:11:07.700 --> 00:11:09.260
human rights background.
198
00:11:09.260 --> 00:11:12.470
One of the things that strikes me is there's incredible
199
00:11:12.470 --> 00:11:15.650
advances with artificial intelligence use by organizations,
200
00:11:15.650 --> 00:11:18.450
particularly large organizations, particularly well-funded
201
00:11:18.450 --> 00:11:20.150
large organizations.
202
00:11:20.150 --> 00:11:22.860
How do we as individuals stand a chance here?
203
00:11:22.860 --> 00:11:26.540
Do we each need our own individual AI working for us?
204
00:11:26.540 --> 00:11:29.080
How can we empower people to work in this perhaps
205
00:11:29.080 --> 00:11:31.520
lopsided arrangement?
206
00:11:31.520 --> 00:11:35.680
Yes, I think the imbalances of power are something that
207
00:11:35.680 --> 00:11:40.680
we have to address both as individuals and as companies.
208
00:11:40.850 --> 00:11:43.780
You know, there are some companies with more AI capabilities
209
00:11:43.780 --> 00:11:48.780
than others, as non-profits, and also as a world, because
210
00:11:48.880 --> 00:11:53.880
at the moment the concentration of AI talent, skills,
211
00:11:54.800 --> 00:11:57.870
and jobs is very skewed around the world.
212
00:11:57.870 --> 00:12:02.210
And we really have to think globally about how AI
213
00:12:02.210 --> 00:12:05.380
is deployed on behalf of humans.
214
00:12:05.380 --> 00:12:10.380
And what makes us human and where we want to be maybe
215
00:12:12.200 --> 00:12:16.580
in 15 or 20 years when AI can do a lot of the things
216
00:12:16.580 --> 00:12:20.610
that we are doing currently.
217
00:12:20.610 --> 00:12:25.610
So I think that it's systemic and structural conversations
218
00:12:25.660 --> 00:12:30.660
that we have to have at all those different layers as well.
219
00:12:30.800 --> 00:12:33.110
Right, the systemic and structural issues are big
220
00:12:33.110 --> 00:12:35.920
because, I have to say, I don't think most companies intend
221
00:12:35.920 --> 00:12:38.710
to start AI with an evil bent.
222
00:12:38.710 --> 00:12:41.300
I mean, they're not cackling and rubbing their hands
223
00:12:41.300 --> 00:12:43.900
together and applauding.
224
00:12:43.900 --> 00:12:45.530
I think these things are more insidious
225
00:12:45.530 --> 00:12:46.780
and systemic than that.
226
00:12:46.780 --> 00:12:49.410
So how do we do that?
227
00:12:49.410 --> 00:12:53.090
In my experience of working with a lot of companies,
228
00:12:53.090 --> 00:12:58.090
governments, et cetera, I would say you're absolutely right.
229
00:12:58.800 --> 00:13:02.970
Companies want to go in doing the right thing,
230
00:13:02.970 --> 00:13:06.500
and what we need to be doing is making sure
231
00:13:06.500 --> 00:13:09.470
that we help them do the right thing.
232
00:13:09.470 --> 00:13:13.890
And it's very much that perhaps a lack of understanding
233
00:13:13.890 --> 00:13:18.450
of the technology is going to skew how they use it.
234
00:13:18.450 --> 00:13:22.700
And so those are all areas that we have been trying to focus
235
00:13:22.700 --> 00:13:26.690
on at the forum so that people who go into using AI with
236
00:13:26.690 --> 00:13:31.450
the right mindset actually come out with the right results.
237
00:13:31.450 --> 00:13:36.450
And you know, your company is a little piece of society.
238
00:13:37.390 --> 00:13:40.820
The idea should be that everybody works together because
239
00:13:40.820 --> 00:13:43.250
you're actually going to end up with a better product.
240
00:13:43.250 --> 00:13:46.550
And I think to your point, the better we enable our
241
00:13:46.550 --> 00:13:51.550
customers or the general public to understand AI,
242
00:13:52.670 --> 00:13:56.310
the less scary it will be.
243
00:13:56.310 --> 00:13:59.030
I also fear that there are many companies
244
00:13:59.030 --> 00:14:02.010
that are being told to go out and get AI.
245
00:14:02.010 --> 00:14:05.160
And they actually don't know what it is that they're getting
246
00:14:05.160 --> 00:14:08.660
or really what the benefit is going to be
247
00:14:08.660 --> 00:14:10.530
or what the downsides might be.
248
00:14:10.530 --> 00:14:12.930
So having the board be capable of asking
249
00:14:12.930 --> 00:14:15.960
the right questions is absolutely crucial, but, you know,
250
00:14:15.960 --> 00:14:20.800
we're currently working on a similar toolkit for different
251
00:14:20.800 --> 00:14:25.800
types of C-suite officer so that they too can be empowered
252
00:14:26.320 --> 00:14:27.680
to understand more.
253
00:14:27.680 --> 00:14:32.680
But I also see the need for thinking carefully about AI
254
00:14:34.320 --> 00:14:37.110
as both top-down and bottom-up.
255
00:14:37.110 --> 00:14:40.480
You know, that's why I go back to that survey that you did,
256
00:14:40.480 --> 00:14:44.980
where understanding across the organization
257
00:14:44.980 --> 00:14:47.000
is actually so important.
258
00:14:47.000 --> 00:14:50.430
And I think where you're seeing some of the developments
259
00:14:50.430 --> 00:14:54.490
amongst the companies that have been dealing with this, like
260
00:14:54.490 --> 00:14:58.190
Microsoft, they went for an Aether committee.
261
00:14:58.190 --> 00:15:02.230
They went for really sort of thinking strategically about
262
00:15:02.230 --> 00:15:04.280
how they're using AI.
263
00:15:04.280 --> 00:15:09.280
And so I think that we have the benefits of what they learnt
264
00:15:09.940 --> 00:15:14.570
early on, which we can then begin to bring into the sector,
265
00:15:14.570 --> 00:15:17.080
from board to designer.
266
00:15:17.080 --> 00:15:19.330
And the good part about that is that education component
267
00:15:19.330 --> 00:15:23.380
keeps it from just being ethics theater, kind of the thin
268
00:15:23.380 --> 00:15:26.760
veneer to put the stamp on it and check the box that yes,
269
00:15:26.760 --> 00:15:29.110
we've done the ethics thing.
270
00:15:29.110 --> 00:15:32.170
But I guess what's the role for business in trying to
271
00:15:32.170 --> 00:15:36.820
educate people to have a better human-machine collaboration?
272
00:15:36.820 --> 00:15:39.020
Obviously we've heard a lot about the potential for AI
273
00:15:39.020 --> 00:15:42.630
to affect the workplace and job security, but people are already
274
00:15:42.630 --> 00:15:44.170
incredibly busy at work.
275
00:15:44.170 --> 00:15:47.090
What potential is there for AI to kind of free us from
276
00:15:47.090 --> 00:15:50.840
some of these mundane things and lead to greater innovation?
277
00:15:50.840 --> 00:15:54.040
When we talked with Gina Chung at DHL, she's in the innovation
278
00:15:54.040 --> 00:15:57.610
department, and that's where they're focusing their AI efforts.
279
00:15:57.610 --> 00:16:00.460
Is this a pipe dream or is there a potential here?
280
00:16:00.460 --> 00:16:03.250
No, I think that it's certainly not a pipe dream
281
00:16:03.250 --> 00:16:07.520
and most people have innovation labs, both in companies
282
00:16:07.520 --> 00:16:11.370
and countries, and UNICEF has an innovation lab;
283
00:16:11.370 --> 00:16:14.650
we were talking about children and AI.
284
00:16:14.650 --> 00:16:19.650
So the potential for AI to free us from some of the things
285
00:16:20.780 --> 00:16:25.780
that we see as mundane, the potential for it to help us
286
00:16:28.430 --> 00:16:33.430
to discover new drugs, to work on climate change.
287
00:16:33.640 --> 00:16:38.640
Those are all the reasons that I stay working in this space,
288
00:16:39.230 --> 00:16:42.700
and you might say, "Well, you work on governance,
289
00:16:42.700 --> 00:16:45.967
doesn't that mean that you just see AI as a bad thing?"
290
00:16:45.967 --> 00:16:48.290
And that's not true.
291
00:16:48.290 --> 00:16:53.290
Just as an example, at the moment we have problems
292
00:16:54.230 --> 00:16:58.280
just using Zoom for education because there are many kids
293
00:16:58.280 --> 00:17:01.020
who don't have access to broadband.
294
00:17:01.020 --> 00:17:05.620
So that brings us up against the questions of rural poverty
295
00:17:05.620 --> 00:17:10.620
and the fact that many people move from rural communities
296
00:17:10.850 --> 00:17:14.770
to cities and yet, if we look at the pandemic,
297
00:17:14.770 --> 00:17:17.850
cities tend to be bad for human beings.
298
00:17:17.850 --> 00:17:21.736
So in all the conversations that we should be having,
299
00:17:21.736 --> 00:17:25.530
I'm thinking about the innovations that AI will create,
300
00:17:25.530 --> 00:17:29.810
which allow that sort of cross-function of rural areas
301
00:17:29.810 --> 00:17:32.173
to be as wealthy as cities.
302
00:17:33.150 --> 00:17:37.380
We should be having really deep structural conversations
303
00:17:37.380 --> 00:17:39.490
about what our future looks like.
304
00:17:39.490 --> 00:17:41.990
Does it look like "Blade Runner" cities?
305
00:17:41.990 --> 00:17:44.050
Or does it look like something else?
306
00:17:44.930 --> 00:17:47.500
You were mentioning, or I guess I was suggesting, kids
307
00:17:47.500 --> 00:17:50.370
were one extreme, and yet you had already been talking
308
00:17:50.370 --> 00:17:52.810
about the board level, which seems like another extreme.
309
00:17:52.810 --> 00:17:55.500
It seems like there are a lot of other people between
310
00:17:55.500 --> 00:17:58.600
those two extremes who would need to learn how to work
311
00:17:58.600 --> 00:18:02.100
together alongside AI, and I guess I'm just looking
312
00:18:02.100 --> 00:18:05.740
for something practical: how do businesses get people
313
00:18:05.740 --> 00:18:09.270
to be comfortable with a machine as their teammate
314
00:18:09.270 --> 00:18:12.450
versus a normal worker as their teammate?
315
00:18:12.450 --> 00:18:14.640
Actually, for example, we've seen people get completely
316
00:18:14.640 --> 00:18:16.540
impatient with robots.
317
00:18:16.540 --> 00:18:19.580
You know, if it's not perfect right off the bat then
318
00:18:19.580 --> 00:18:23.410
why am I bothering teaching this machine how to do this?
319
00:18:23.410 --> 00:18:26.160
You'd never be that impatient with another co-worker.
320
00:18:26.160 --> 00:18:29.800
You remember when you were first learning to do a job.
321
00:18:29.800 --> 00:18:33.040
So how do we get that same sort of, I guess, maybe empathy
322
00:18:33.040 --> 00:18:35.080
for the poor little machine?
323
00:18:35.080 --> 00:18:38.160
Yeah, well, as I say, I do think it's an education
324
00:18:38.160 --> 00:18:42.210
and training piece that the company has to put in,
325
00:18:42.210 --> 00:18:47.210
but it's also important because sometimes we put too much trust
326
00:18:47.880 --> 00:18:51.590
in the technology, as in, the computer told us to do it.
327
00:18:51.590 --> 00:18:54.400
You know, that's something that we'd been noticing, for example,
328
00:18:54.400 --> 00:18:57.620
in the criminal sentencing problems that we've been having,
329
00:18:57.620 --> 00:19:01.120
where judges have been over-reliant upon the fact
330
00:19:01.120 --> 00:19:03.400
that the machine's telling them this.
331
00:19:03.400 --> 00:19:08.400
And so it's that education to not over-trust the machine,
332
00:19:09.220 --> 00:19:13.810
and also to trust that the machine is not going to take your job,
333
00:19:13.810 --> 00:19:16.060
is not going to be spying on you,
334
00:19:16.060 --> 00:19:20.800
you know, there are sort of a lot of things that employees
335
00:19:20.800 --> 00:19:24.620
are frightened of and so you've got to make sure
336
00:19:24.620 --> 00:19:28.460
that they have some better understanding of what that robot
337
00:19:28.460 --> 00:19:32.420
or machinery is going to do with them.
338
00:19:32.420 --> 00:19:36.960
And that it's a human-machine interaction as opposed
339
00:19:36.960 --> 00:19:39.630
to one dominating the other.
340
00:19:39.630 --> 00:19:44.630
What's your thinking on how to bring about large-scale
341
00:19:45.120 --> 00:19:48.630
understanding and change, not just at the board level,
342
00:19:48.630 --> 00:19:53.630
but from the fabric of the organization? How important
343
00:19:53.800 --> 00:19:58.800
is it that companies begin to understand the different modes
344
00:19:59.140 --> 00:20:01.740
of interaction between AI and humans
345
00:20:01.740 --> 00:20:04.220
and begin to test some of those things?
346
00:20:04.220 --> 00:20:06.650
Obviously, that's really important.
347
00:20:06.650 --> 00:20:11.170
We do have a project that's actually led by Salesforce
348
00:20:11.170 --> 00:20:13.750
called the Responsible Use of Technology.
349
00:20:13.750 --> 00:20:18.750
And in that, what we're trying to do is to bring
350
00:20:19.640 --> 00:20:23.300
together all the different companies, like BCG,
351
00:20:23.300 --> 00:20:26.940
who are actually thinking about these issues
352
00:20:26.940 --> 00:20:30.120
and come up with some best practices.
353
00:20:30.120 --> 00:20:35.120
So how do you help your employees to really think
354
00:20:35.450 --> 00:20:37.663
about this interaction with AI?
355
00:20:38.510 --> 00:20:43.300
How do you make sure that the company itself is focused
356
00:20:43.300 --> 00:20:47.100
on ethical deployment of technology,
357
00:20:47.100 --> 00:20:52.090
and, where your employees are going to be working
358
00:20:52.090 --> 00:20:56.000
specifically with the technology, that they don't fear it?
359
00:20:56.000 --> 00:21:00.980
I think there's a lot of fear and that is at the moment
360
00:21:00.980 --> 00:21:03.620
probably not useful at all.
361
00:21:03.620 --> 00:21:05.890
You clearly can't be friends with somebody
362
00:21:05.890 --> 00:21:07.600
if you're afraid of them.
363
00:21:07.600 --> 00:21:11.210
Yes, and what we are seeing is that, you know,
364
00:21:11.210 --> 00:21:14.720
when I was talking about AI and ethics in 2014,
365
00:21:14.720 --> 00:21:17.280
very few people were talking about it.
366
00:21:17.280 --> 00:21:22.280
Now everybody, not everybody, but every enlightened person
367
00:21:23.300 --> 00:21:28.050
is talking about it and business is talking about it.
368
00:21:28.050 --> 00:21:30.480
And we're talking about business here.
369
00:21:30.480 --> 00:21:33.950
Businesses are talking about it, governments are talking about it.
370
00:21:33.950 --> 00:21:37.990
Governments are talking about it in the sense that, if there
371
00:21:37.990 --> 00:21:42.990
is something that is unsafe, usually we regulate the unsafe.
372
00:21:43.440 --> 00:21:46.010
So I think actually the time is now to be having
373
00:21:46.010 --> 00:21:47.770
these conversations.
374
00:21:47.770 --> 00:21:49.010
Do we regulate?
375
00:21:49.010 --> 00:21:52.560
Do we depend upon more soft law approaches?
376
00:21:52.560 --> 00:21:57.560
Because what we are setting now in place is the future.
377
00:21:59.300 --> 00:22:02.090
And it's not just our terrestrial future, because if we're
378
00:22:02.090 --> 00:22:05.340
going to go to Mars, we're going to use a lot of AI.
379
00:22:05.340 --> 00:22:08.330
We need to be really having these conversations,
380
00:22:08.330 --> 00:22:11.490
and one of the things that we have been doing is having
381
00:22:11.490 --> 00:22:14.540
a conversation that looks at positive futures.
382
00:22:14.540 --> 00:22:18.180
So you can sort of look across the panoply of sci-fi
383
00:22:18.180 --> 00:22:21.170
and it's almost all dystopian.
384
00:22:21.170 --> 00:22:24.940
And so what we wanted to do is say, "Okay,
385
00:22:24.940 --> 00:22:29.550
we have this potential with AI, what do we want to create?"
386
00:22:29.550 --> 00:22:33.100
And so we brought sci-fi writers and AI scientists
387
00:22:33.100 --> 00:22:36.640
and business people and economists and others together
388
00:22:36.640 --> 00:22:39.750
to really, sort of, have that conversation.
389
00:22:39.750 --> 00:22:42.360
So we're having the conversation about AI ethics,
390
00:22:42.360 --> 00:22:46.370
but the next conversation has to be how do we systematically
391
00:22:46.370 --> 00:22:49.450
want to grow and develop AI for the benefit of the world
392
00:22:49.450 --> 00:22:52.170
and not just sectors of it?
393
00:22:52.170 --> 00:22:56.160
I can recall the flavor of these kinds of conversations
394
00:22:56.160 --> 00:22:57.540
I would have five years ago;
395
00:22:57.540 --> 00:23:00.040
it was very heavily tech focused.
396
00:23:00.040 --> 00:23:05.040
What does that tell you in terms of a profile of, you know,
397
00:23:05.400 --> 00:23:10.400
future leaders of AI? What are the right sorts of traits,
398
00:23:10.870 --> 00:23:14.630
skills, and profiles, do you think?
399
00:23:14.630 --> 00:23:17.540
I think we will see, so I have a humanities background,
400
00:23:17.540 --> 00:23:19.950
I think we'll see more humanities. So, you know,
401
00:23:19.950 --> 00:23:24.950
there's the AI piece that the technologists have to work on.
402
00:23:25.530 --> 00:23:29.500
But what we do know is that there's a Gartner study
403
00:23:29.500 --> 00:23:33.500
that says that by 2022 if we don't deal with the bias,
404
00:23:33.500 --> 00:23:37.840
85% of algorithms will be erroneous because of the bias.
405
00:23:37.840 --> 00:23:42.710
If that's anywhere near true, that's really bad for your R&D
406
00:23:42.710 --> 00:23:44.470
and your company.
407
00:23:44.470 --> 00:23:46.810
So what we know is that we have to create
408
00:23:46.810 --> 00:23:51.810
those multi-stakeholder teams, and also I see the future
409
00:23:52.110 --> 00:23:56.447
of AI, this discussion, as part of ESG.
410
00:23:57.330 --> 00:24:02.330
So I see the AI ethics discussion moving into that more
411
00:24:04.540 --> 00:24:08.270
social realm of the way that companies think about
412
00:24:08.270 --> 00:24:09.920
some of the things that they do.
413
00:24:09.920 --> 00:24:12.620
And that's something that we heard from, for example,
414
00:24:12.620 --> 00:24:16.290
Prakhar at Walmart, that they're thinking big picture
415
00:24:16.290 --> 00:24:19.890
about how these would connect and remove inefficiencies
416
00:24:19.890 --> 00:24:24.120
from the process and that certainly has ESG implications.
417
00:24:24.120 --> 00:24:26.300
What we've seen with some of the other folks we've discussed
418
00:24:26.300 --> 00:24:29.320
artificial intelligence in business with, is that
419
00:24:29.320 --> 00:24:31.790
they've transferred learning from things that they've done
420
00:24:31.790 --> 00:24:34.350
in one organization to another.
421
00:24:34.350 --> 00:24:38.120
They've moved this education component that you've mentioned
422
00:24:38.120 --> 00:24:40.900
before; it hasn't just happened within companies, it's happened
423
00:24:40.900 --> 00:24:44.320
across companies and it's happened across functional areas.
424
00:24:44.320 --> 00:24:45.640
How do we encourage that?
425
00:24:45.640 --> 00:24:49.390
How do we get people to have those diverse experiences?
426
00:24:49.390 --> 00:24:51.930
Yes, I think that that's (a) right
427
00:24:51.930 --> 00:24:54.950
and (b) really important that we do.
428
00:24:54.950 --> 00:24:59.020
So I was actually talking to somebody yesterday who had set
429
00:24:59.020 --> 00:25:03.150
up some really good resources and training around artificial
430
00:25:03.150 --> 00:25:07.260
intelligence in a bank, then moved to government,
431
00:25:07.260 --> 00:25:10.557
and then moved to yet another private sector job
432
00:25:10.557 --> 00:25:12.730
and is doing the same thing.
433
00:25:12.730 --> 00:25:16.450
And many of the trainings that we need to be thinking about
434
00:25:16.450 --> 00:25:19.640
with artificial intelligence are cross-sectoral.
435
00:25:19.640 --> 00:25:24.640
So we did an interesting look at all the ethical principles
436
00:25:25.370 --> 00:25:30.110
that are out there; there are over 190 now, from the Beijing
437
00:25:30.110 --> 00:25:35.110
Principles through to the Asilomar ones, et cetera.
438
00:25:36.000 --> 00:25:37.480
That's different from 2014.
439
00:25:37.480 --> 00:25:39.840
It's very different from 2014.
440
00:25:39.840 --> 00:25:42.910
And one of the things that a lot of people sort of have said
441
00:25:42.910 --> 00:25:45.780
to me in the past is well, whose ethics are you talking
442
00:25:45.780 --> 00:25:46.680
about anyway?
443
00:25:46.680 --> 00:25:51.390
And what we found was actually there were about 10 things
444
00:25:51.390 --> 00:25:55.260
that were ubiquitous to all of those 190 different
445
00:25:55.260 --> 00:25:56.340
ethical principles.
446
00:25:56.340 --> 00:25:59.960
So there are 10 things that we care about as human beings
447
00:25:59.960 --> 00:26:01.940
wherever we are in the world.
448
00:26:01.940 --> 00:26:05.240
And those are 10 things that are actually fairly
449
00:26:05.240 --> 00:26:09.070
cross-sectoral, so they are about safety and robustness.
450
00:26:09.070 --> 00:26:12.597
They're about accountability, transparency,
451
00:26:12.597 --> 00:26:13.780
explainability.
452
00:26:13.780 --> 00:26:16.500
They're about that conversation we had earlier about
453
00:26:16.500 --> 00:26:18.900
human-machine interaction.
454
00:26:18.900 --> 00:26:23.900
Then they're about how AI benefits us as humans.
455
00:26:24.330 --> 00:26:28.660
So I think that ability to be able to take what
456
00:26:28.660 --> 00:26:31.510
you've learned in one sector and move it to another
457
00:26:31.510 --> 00:26:35.770
is important and relatively straightforward.
458
00:26:35.770 --> 00:26:37.290
And also it seems very human.
459
00:26:37.290 --> 00:26:38.340
Yeah.
460
00:26:38.340 --> 00:26:40.530
That's something I think that the machines themselves
461
00:26:40.530 --> 00:26:41.670
are going to struggle with and need
462
00:26:41.670 --> 00:26:43.560
at least our help for a while.
463
00:26:43.560 --> 00:26:45.010
Oh, undoubtedly yes.
464
00:26:45.010 --> 00:26:47.662
And it probably doesn't need saying to this audience,
465
00:26:47.662 --> 00:26:51.590
but it's worth saying that these machines
466
00:26:51.590 --> 00:26:53.523
are not really very clever yet.
467
00:26:53.523 --> 00:26:56.010
Yeah, there's still time, we're still okay.
468
00:26:56.010 --> 00:26:57.280
Thank God for that.
469
00:26:57.280 --> 00:26:59.150
(men and woman laugh)
470
00:26:59.150 --> 00:27:00.890
Okay, thank you for taking the time to talk to us,
471
00:27:00.890 --> 00:27:02.200
we've really enjoyed it.
472
00:27:02.200 --> 00:27:03.440
Yeah, thank you so much Kay.
473
00:27:03.440 --> 00:27:06.080
It's been a pleasure hearing your views
474
00:27:06.080 --> 00:27:08.350
and your leadership on this topic.
475
00:27:08.350 --> 00:27:09.980
Thank you so much to both of you,
476
00:27:09.980 --> 00:27:12.360
it's been a pleasure and a privilege to be with you.
477
00:27:12.360 --> 00:27:14.800
I could have talked on for hours.
478
00:27:14.800 --> 00:27:17.290
But we can't because that is the end of our episode,
479
00:27:17.290 --> 00:27:20.210
and that is the end of our first season.
480
00:27:20.210 --> 00:27:22.600
Thank you for joining us on this podcast.
481
00:27:22.600 --> 00:27:23.700
Thank you very much.
482
00:27:25.737 --> 00:27:28.820
(calm upbeat music)
483
00:27:32.130 --> 00:27:34.910
Thanks for listening to "Me, Myself, and AI."
484
00:27:34.910 --> 00:27:37.090
If you're enjoying the show, take a minute
485
00:27:37.090 --> 00:27:38.630
to write us a review.
486
00:27:38.630 --> 00:27:41.480
If you send us a screenshot, we'll send you a collection
487
00:27:41.480 --> 00:27:45.160
of MIT SMR's best articles on artificial intelligence
488
00:27:45.160 --> 00:27:47.080
free for a limited time.
489
00:27:47.080 --> 00:27:51.853
Send your review screenshot to smrfeedback@mit.edu.
490
00:27:52.705 --> 00:27:55.788
(calm upbeat music)