WEBVTT
1
00:00:00.451 --> 00:00:01.284
(calm music)
2
00:00:01.284 --> 00:00:02.950
Ethical use of technology is
3
00:00:02.950 --> 00:00:05.910
and should be a concern for organizations everywhere,
4
00:00:05.910 --> 00:00:07.800
but it's complicated.
5
00:00:07.800 --> 00:00:10.170
Today we talk with Elizabeth Renieris,
6
00:00:10.170 --> 00:00:11.060
Founding Director of
7
00:00:11.060 --> 00:00:13.830
the Notre Dame IBM Technology Ethics Lab,
8
00:00:13.830 --> 00:00:15.940
about what organizations can do today
9
00:00:15.940 --> 00:00:17.890
without waiting for the perfect answer.
10
00:00:19.900 --> 00:00:21.430
Welcome to Me, Myself, and AI,
11
00:00:21.430 --> 00:00:24.630
a podcast on artificial intelligence in business.
12
00:00:24.630 --> 00:00:28.480
Each episode we introduce you to someone innovating with AI.
13
00:00:28.480 --> 00:00:30.980
I'm Sam Ransbotham, Professor of Information Systems
14
00:00:30.980 --> 00:00:33.000
at Boston College.
15
00:00:33.000 --> 00:00:34.700
I'm also the guest editor for the
16
00:00:34.700 --> 00:00:37.310
AI and Business Strategy Big Idea program
17
00:00:37.310 --> 00:00:40.100
at MIT Sloan Management Review.
18
00:00:40.100 --> 00:00:42.270
And I'm Shervin Khodabandeh,
19
00:00:42.270 --> 00:00:43.980
Senior Partner with BCG,
20
00:00:43.980 --> 00:00:47.790
and I co-lead BCG's AI practice in North America.
21
00:00:47.790 --> 00:00:51.870
Together, MIT SMR and BCG have been researching AI
22
00:00:51.870 --> 00:00:55.166
for five years, interviewing hundreds of practitioners,
23
00:00:55.166 --> 00:00:58.200
and surveying thousands of companies on what it takes
24
00:00:58.200 --> 00:01:02.120
to build, deploy, and scale AI capabilities
25
00:01:02.120 --> 00:01:03.500
across the organization
26
00:01:03.500 --> 00:01:06.053
and really transform the way organizations operate.
27
00:01:08.680 --> 00:01:10.530
Today we're talking with Elizabeth Renieris.
28
00:01:10.530 --> 00:01:12.550
Elizabeth is the Founding Director
29
00:01:12.550 --> 00:01:15.418
of the Notre Dame IBM Technology Ethics Lab
30
00:01:15.418 --> 00:01:18.670
as well as founder and CEO of HACKYLAWYER.
31
00:01:18.670 --> 00:01:20.700
Elizabeth, thanks for taking the time to talk with us today.
32
00:01:20.700 --> 00:01:21.533
Welcome.
33
00:01:21.533 --> 00:01:22.930
Thanks for having me.
34
00:01:22.930 --> 00:01:24.560
Let's start with your current role,
35
00:01:24.560 --> 00:01:27.933
your current new role at the Notre Dame IBM.
36
00:01:27.933 --> 00:01:28.958
I was going to ask which one?
37
00:01:28.958 --> 00:01:30.340
(all laughing)
38
00:01:30.340 --> 00:01:33.450
Well, I was thinking about the ethics lab
39
00:01:33.450 --> 00:01:36.450
but you actually can start with whatever you like.
40
00:01:36.450 --> 00:01:38.530
Sure, so as you mentioned,
41
00:01:38.530 --> 00:01:40.879
I've been recently appointed as the Founding Director
42
00:01:40.879 --> 00:01:43.329
of a new technology ethics lab
43
00:01:43.329 --> 00:01:46.038
at the University of Notre Dame.
44
00:01:46.038 --> 00:01:47.430
It's actually called
45
00:01:47.430 --> 00:01:49.750
the Notre Dame IBM Technology Ethics Lab
46
00:01:49.750 --> 00:01:52.186
as the generous seed funding is actually from IBM.
47
00:01:52.186 --> 00:01:55.170
My appointment is a faculty appointment with
48
00:01:55.170 --> 00:01:58.560
the University of Notre Dame and the intention of the lab
49
00:01:58.560 --> 00:02:00.550
is to complement Notre Dame's existing
50
00:02:00.550 --> 00:02:04.730
Technology Ethics Center which is a very traditional
51
00:02:04.730 --> 00:02:07.123
academic research center focused on technology ethics.
52
00:02:07.123 --> 00:02:10.670
So you can imagine there are many tenured faculty members
53
00:02:10.670 --> 00:02:12.530
affiliated with the center and they produce sort
54
00:02:12.530 --> 00:02:14.150
of traditional academic research,
55
00:02:14.150 --> 00:02:15.970
peer-reviewed journal articles.
56
00:02:15.970 --> 00:02:19.210
The lab in contrast to that is meant to focus
57
00:02:19.210 --> 00:02:21.580
on practitioner-oriented artifacts.
58
00:02:21.580 --> 00:02:23.525
So the things that we want to produce are
59
00:02:23.525 --> 00:02:27.150
for audiences that include companies themselves,
60
00:02:27.150 --> 00:02:29.350
but also for law and policy makers,
61
00:02:29.350 --> 00:02:32.400
also for civil society and other stakeholders.
62
00:02:32.400 --> 00:02:35.130
And we want them to be very tangible and very practical.
63
00:02:35.130 --> 00:02:37.085
So we're looking to produce things like
64
00:02:37.085 --> 00:02:39.746
open source tool kits, and model legislation,
65
00:02:39.746 --> 00:02:43.230
and explainer videos, and model audits,
66
00:02:43.230 --> 00:02:45.240
and a whole array of things that
67
00:02:45.240 --> 00:02:47.570
you wouldn't necessarily find from a traditional
68
00:02:47.570 --> 00:02:49.470
academic research center.
69
00:02:49.470 --> 00:02:52.460
What we really need in this space is we need centers
70
00:02:52.460 --> 00:02:54.130
and institutions that can translate
71
00:02:54.130 --> 00:02:56.360
between academia and practice.
72
00:02:56.360 --> 00:02:58.850
The beauty of housing the lab in the university
73
00:02:58.850 --> 00:03:01.070
of course is having access to the faculty
74
00:03:01.070 --> 00:03:02.560
that's generating the scholarship
75
00:03:02.560 --> 00:03:04.810
and the theoretical foundations for the work.
76
00:03:05.665 --> 00:03:06.498
Can you comment a bit more
77
00:03:06.498 --> 00:03:08.100
on how you guys make that happen?
78
00:03:08.100 --> 00:03:10.750
Because I know there's a lot of primary research
79
00:03:10.750 --> 00:03:13.360
and then you have the faculty's point of view,
80
00:03:13.360 --> 00:03:16.580
and I assume that there's also industry connections
81
00:03:16.580 --> 00:03:21.330
and some of these applications in real life come in to play,
82
00:03:21.330 --> 00:03:22.908
which is really important as you say.
83
00:03:22.908 --> 00:03:26.610
What are some of the ways you guys enable that?
84
00:03:26.610 --> 00:03:28.550
Right now as we're getting up and running,
85
00:03:28.550 --> 00:03:30.840
really what we're focusing on is convening power.
86
00:03:30.840 --> 00:03:33.330
So we're looking to convene groups of people
87
00:03:33.330 --> 00:03:35.170
who aren't necessarily talking to each other
88
00:03:35.170 --> 00:03:36.530
and to do a lot of that translation work.
89
00:03:36.530 --> 00:03:41.270
So right now the intention is to put out
90
00:03:41.270 --> 00:03:43.970
an official call for proposals to the general public
91
00:03:43.970 --> 00:03:46.730
and to be sourcing projects from all the different
92
00:03:46.730 --> 00:03:49.820
stakeholders that I outlined, consisting of teams
93
00:03:49.820 --> 00:03:52.510
of individuals who come from different industries
94
00:03:52.510 --> 00:03:56.500
and sectors and represent different sectors of society.
95
00:03:56.500 --> 00:03:59.300
And to have them focus on projects that actually try
96
00:03:59.300 --> 00:04:00.710
and solve real-world challenges.
97
00:04:00.710 --> 00:04:03.560
So for example, right now during the pandemic,
98
00:04:03.560 --> 00:04:04.860
those challenges might be something
99
00:04:04.860 --> 00:04:07.362
like returning to work or returning to school.
100
00:04:07.362 --> 00:04:08.860
And then of course,
101
00:04:08.860 --> 00:04:10.740
what we want to do as the lab is we want to
102
00:04:10.740 --> 00:04:13.260
take the brilliant work that the faculty
103
00:04:13.260 --> 00:04:15.783
at Notre Dame is doing, and eventually elsewhere,
104
00:04:15.783 --> 00:04:18.413
and leverage that to sort of underpin
105
00:04:18.413 --> 00:04:22.040
and to inform the actual projects that we're sourcing.
106
00:04:22.040 --> 00:04:24.480
And we can hopefully build some kind of narrative arc
107
00:04:24.480 --> 00:04:27.870
around how you start translating that theory into practice.
108
00:04:27.870 --> 00:04:30.550
It seems like you're looking at ethics in AI
109
00:04:30.550 --> 00:04:32.420
from two sides, right?
110
00:04:32.420 --> 00:04:35.563
One is the ethics of the technology itself,
111
00:04:35.563 --> 00:04:39.357
as in, is what the technology is doing ethical
112
00:04:39.357 --> 00:04:41.830
and how do you make sure it is ethical
113
00:04:43.207 --> 00:04:47.607
and the other is, how can technology help the ethics conversation itself?
114
00:04:48.478 --> 00:04:49.970
I think it's absolutely both.
115
00:04:49.970 --> 00:04:52.190
In my mind, you cannot separate
116
00:04:52.190 --> 00:04:53.700
a conversation about technology ethics
117
00:04:53.700 --> 00:04:55.790
from a conversation about values,
118
00:04:55.790 --> 00:04:58.820
both individual values and collective societal values.
119
00:04:58.820 --> 00:05:01.270
So what I find really fascinating about this space
120
00:05:01.270 --> 00:05:03.340
is that you're right, while we're looking at the
121
00:05:03.340 --> 00:05:05.977
ethical challenges presented by specific technologies,
122
00:05:05.977 --> 00:05:08.498
we're also then confronted with having to identify
123
00:05:08.498 --> 00:05:12.156
and prioritize and reconcile competing values
124
00:05:12.156 --> 00:05:14.450
of different people and communities
125
00:05:14.450 --> 00:05:16.530
and stakeholders in the conversation.
126
00:05:16.530 --> 00:05:18.990
And you know, when we have a specific challenge
127
00:05:18.990 --> 00:05:21.150
or a specific technology, it actually really turns
128
00:05:21.150 --> 00:05:24.160
the mirror back on us as a society and forces us
129
00:05:24.160 --> 00:05:26.880
to ask the question of what kind of society do we want to be
130
00:05:26.880 --> 00:05:28.410
or what kind of company do we want to be,
131
00:05:28.410 --> 00:05:30.954
or what kind of you know, individual
132
00:05:30.954 --> 00:05:31.787
or researcher do we want to be
133
00:05:31.787 --> 00:05:33.703
and what are our values and how do those values align
134
00:05:33.703 --> 00:05:35.370
with what it is that we're working on
135
00:05:35.370 --> 00:05:36.700
from a technology standpoint?
136
00:05:36.700 --> 00:05:39.360
So I believe it's absolutely both.
137
00:05:39.360 --> 00:05:41.320
And I think that's also been part of the evolution
138
00:05:41.320 --> 00:05:44.020
of the ethics conversation in the last couple of years
139
00:05:44.020 --> 00:05:46.510
is that while perhaps it started out with the lens very
140
00:05:46.510 --> 00:05:49.300
much on the technology, it's been very much turned around
141
00:05:49.300 --> 00:05:51.670
and focused on, you know, who's building it?
142
00:05:51.670 --> 00:05:53.710
Who's at the table, what's the conversation?
143
00:05:53.710 --> 00:05:54.850
What are the parameters?
144
00:05:54.850 --> 00:05:55.750
What do we count?
145
00:05:55.750 --> 00:05:57.210
What values matter?
146
00:05:57.210 --> 00:05:59.660
And actually from my standpoint, those are the really
147
00:05:59.660 --> 00:06:02.150
important questions that hopefully technology
148
00:06:02.150 --> 00:06:05.390
is an entry point for us to discuss.
149
00:06:05.390 --> 00:06:06.820
Sam, I was just going to ask Elizabeth
150
00:06:06.820 --> 00:06:09.880
to maybe share with us how she ended up here,
151
00:06:09.880 --> 00:06:11.127
like the path that you took, that-
152
00:06:11.127 --> 00:06:12.944
-How much time do you have?
153
00:06:12.944 --> 00:06:13.777
(all laughing)
154
00:06:13.777 --> 00:06:15.050
I'll give you the abbreviated version.
155
00:06:15.050 --> 00:06:18.262
So I was classmates with Mark Zuckerberg at Harvard
156
00:06:18.262 --> 00:06:21.902
and I've been thinking about these issues ever since.
157
00:06:21.902 --> 00:06:23.980
But more seriously,
158
00:06:23.980 --> 00:06:26.600
my sort of professional trajectory was that
159
00:06:26.600 --> 00:06:27.650
after law school,
160
00:06:27.650 --> 00:06:30.020
I worked at the Department of Homeland Security
161
00:06:30.020 --> 00:06:33.241
for a couple of years in their General Counsel's office.
162
00:06:33.241 --> 00:06:34.904
And this was a long time after 9/11,
163
00:06:34.904 --> 00:06:37.010
and I actually am from New York
164
00:06:37.010 --> 00:06:39.340
and have vivid memories of the event.
165
00:06:39.340 --> 00:06:42.110
And I was really struck by how much
166
00:06:42.110 --> 00:06:45.576
of the emergency infrastructure was still in place
167
00:06:45.576 --> 00:06:50.200
more than a decade after it was initially rolled out.
168
00:06:50.200 --> 00:06:54.555
And I subsequently went back to obtain an LLM in London
169
00:06:54.555 --> 00:06:58.400
and accidentally, having arrived in the year 2012,
170
00:06:58.400 --> 00:06:59.970
started working on the first draft
171
00:06:59.970 --> 00:07:02.120
of what became the General Data Protection Regulation
172
00:07:02.120 --> 00:07:03.650
or the GDPR.
173
00:07:03.650 --> 00:07:05.940
And through that process gained a lot of exposure
174
00:07:05.940 --> 00:07:10.070
to the ad tech industry, the FinTech industry,
175
00:07:10.070 --> 00:07:12.375
somewhere along the way read the Bitcoin white paper,
176
00:07:12.375 --> 00:07:14.520
came back to the states
177
00:07:14.520 --> 00:07:16.420
just before the referendum, and was branded
178
00:07:16.420 --> 00:07:17.320
a blockchain lawyer
179
00:07:17.320 --> 00:07:19.428
because I had read the Bitcoin white paper.
180
00:07:19.428 --> 00:07:21.349
(all laughing)
181
00:07:21.349 --> 00:07:23.180
So then I had this interesting dance
182
00:07:23.180 --> 00:07:25.440
in trying to be a data protection and privacy lawyer
183
00:07:25.440 --> 00:07:27.440
and also, you know, split my time
184
00:07:27.440 --> 00:07:29.648
with the sort of blockchain distributed ledger folks.
185
00:07:29.648 --> 00:07:32.480
And I quickly picked up on some of the unsavory,
186
00:07:32.480 --> 00:07:36.210
unethical behavior that I saw in the space
187
00:07:36.210 --> 00:07:37.680
and I was really bothered by it
188
00:07:37.680 --> 00:07:40.320
and it also sort of triggered these memories
189
00:07:40.320 --> 00:07:41.970
of the experience with Mark Zuckerberg
190
00:07:41.970 --> 00:07:44.340
scraping faces of my classmates in university.
191
00:07:44.340 --> 00:07:46.480
And it was just an interesting thing that
192
00:07:46.480 --> 00:07:48.240
I didn't appreciate at the time
193
00:07:48.240 --> 00:07:50.320
but sort of bubbled in the background.
194
00:07:50.320 --> 00:07:51.650
And that led me to actually work
195
00:07:51.650 --> 00:07:53.613
in-house at a couple of companies,
196
00:07:53.613 --> 00:07:56.221
startups based in Silicon Valley and elsewhere.
197
00:07:56.221 --> 00:07:58.940
And there was more of this sort of unsavory behavior.
198
00:07:58.940 --> 00:08:01.500
And I thought if only we could talk about technology
199
00:08:01.500 --> 00:08:04.120
and engage with technology and be excited about it
200
00:08:04.120 --> 00:08:07.190
without all of these terrible downsides.
201
00:08:07.190 --> 00:08:09.240
I think part of the reason I was observing that
202
00:08:09.240 --> 00:08:11.220
is because you didn't have the right people in the room.
203
00:08:11.220 --> 00:08:13.580
So you had technologists that were talking
204
00:08:13.580 --> 00:08:15.910
past and over lawyers and policy makers.
205
00:08:15.910 --> 00:08:19.300
And that was my idea in late 2017
206
00:08:19.300 --> 00:08:21.440
to start my HACKYLAWYER consultancy.
207
00:08:21.440 --> 00:08:23.840
And the idea with that was, you know
208
00:08:23.840 --> 00:08:25.270
I'm fairly technically savvy
209
00:08:25.270 --> 00:08:27.760
but I have this great training, these legal skills,
210
00:08:27.760 --> 00:08:29.160
these public policy skills.
211
00:08:29.160 --> 00:08:31.510
I'd like to be able to translate across those groups
212
00:08:31.510 --> 00:08:33.500
and bring them together, and I
213
00:08:33.500 --> 00:08:35.870
built up a pretty successful consultancy
214
00:08:35.870 --> 00:08:38.120
around that for a couple of years thereafter.
215
00:08:38.960 --> 00:08:41.870
That's a very inspiring story from the beginning
216
00:08:41.870 --> 00:08:43.350
to here.
217
00:08:43.350 --> 00:08:44.870
Kind of want to ask about the long version,
218
00:08:44.870 --> 00:08:46.533
but I don't know that we have time for that.
219
00:08:46.533 --> 00:08:48.120
(laughing)
220
00:08:48.120 --> 00:08:50.205
Let's follow up on a couple of things that you mentioned.
221
00:08:50.205 --> 00:08:53.440
One is, I think you and Shervin both talked
222
00:08:53.440 --> 00:08:56.030
about this briefly, but you know, there's a little bit
223
00:08:56.030 --> 00:08:58.570
of excitement about some of the bad things that happen.
224
00:08:58.570 --> 00:09:03.500
You know when we see these cases of AI bias coming out
225
00:09:03.500 --> 00:09:06.290
and they make headlines, you know, there's also
226
00:09:06.290 --> 00:09:08.800
a silver lining and that lining is pretty thick
227
00:09:08.800 --> 00:09:11.160
that it's really highlighting some of these things
228
00:09:11.160 --> 00:09:13.720
that are already existing or already going on
229
00:09:13.720 --> 00:09:15.597
and these seem like opportunities.
230
00:09:15.597 --> 00:09:17.940
But then at the same time, you also mentioned how
231
00:09:17.940 --> 00:09:20.232
when we react to those, we put things in place
232
00:09:20.232 --> 00:09:22.120
and more than a decade later,
233
00:09:22.120 --> 00:09:25.400
the DHS protections were still in place.
234
00:09:25.400 --> 00:09:28.234
So how do we balance between reacting to these things
235
00:09:28.234 --> 00:09:31.640
that come up and addressing biases,
236
00:09:32.480 --> 00:09:35.280
and not putting in draconian measures
237
00:09:35.280 --> 00:09:37.470
that stifle innovation?
238
00:09:37.470 --> 00:09:39.770
You're right that there are opportunities.
239
00:09:39.770 --> 00:09:42.730
I think the idea is that depending
240
00:09:42.730 --> 00:09:45.330
on the challenge presented, I don't like the frame
241
00:09:45.330 --> 00:09:48.934
of stifling innovation or the tension between, you know,
242
00:09:48.934 --> 00:09:49.990
innovation and other values
243
00:09:49.990 --> 00:09:52.730
like security or privacy or safety.
244
00:09:52.730 --> 00:09:53.620
I think we're seeing this play
245
00:09:53.620 --> 00:09:54.960
out again in the pandemic, right?
246
00:09:54.960 --> 00:09:58.600
Where we are often being pushed a narrative
247
00:09:58.600 --> 00:10:01.570
around technologies that we need to deploy
248
00:10:01.570 --> 00:10:03.100
and technologies that we need to adopt
249
00:10:03.100 --> 00:10:05.330
in order to cope with the pandemic.
250
00:10:05.330 --> 00:10:06.960
And so we saw this in the debate
251
00:10:06.960 --> 00:10:09.670
over exposure notification and contact tracing apps.
252
00:10:09.670 --> 00:10:12.110
We're seeing this right now, very prominently
253
00:10:12.110 --> 00:10:13.310
in the conversation around things
254
00:10:13.310 --> 00:10:15.782
like immunity certificates and vaccine passports.
255
00:10:15.782 --> 00:10:17.756
I think the value of ethics there again,
256
00:10:17.756 --> 00:10:21.816
is that rather than look at the kind of narrow particulars
257
00:10:21.816 --> 00:10:24.390
and tweak around the edges of a specific technology
258
00:10:24.390 --> 00:10:27.680
or implementation, to step back and have that conversation
259
00:10:27.680 --> 00:10:30.380
about values and to have the conversation
260
00:10:30.380 --> 00:10:33.320
about what we will think of this in five or 10 years.
261
00:10:33.320 --> 00:10:35.220
So the silver lining of what happened
262
00:10:35.220 --> 00:10:37.820
after 9/11 was that we've learned a lot of lessons from it.
263
00:10:37.820 --> 00:10:39.310
We've seen how, you know
264
00:10:39.310 --> 00:10:42.433
emergency infrastructure often becomes permanent.
265
00:10:42.433 --> 00:10:45.112
We've seen how those trade-offs in the moment
266
00:10:45.112 --> 00:10:47.840
might not be the right trade-offs in the long run.
267
00:10:47.840 --> 00:10:51.094
So I think if we don't take lessons from those,
268
00:10:51.094 --> 00:10:52.900
and this is where it's really interesting
269
00:10:52.900 --> 00:10:53.733
in technology ethics,
270
00:10:53.733 --> 00:10:56.350
how there's so much intersection with other things
271
00:10:56.350 --> 00:10:58.760
like STS and other fields around history
272
00:10:58.760 --> 00:11:00.790
and anthropology and why it's so critical
273
00:11:00.790 --> 00:11:02.642
to have this really interdisciplinary perspective.
274
00:11:02.642 --> 00:11:05.360
Because all of those things,
275
00:11:05.360 --> 00:11:08.007
again go back to a conversation about values and trade-offs
276
00:11:08.007 --> 00:11:10.100
and the prioritization of all of those.
277
00:11:10.100 --> 00:11:11.300
And some of that also of course
278
00:11:11.300 --> 00:11:12.970
has to do with time horizon,
279
00:11:12.970 --> 00:11:14.440
going back to your question before.
280
00:11:14.440 --> 00:11:16.530
So it's easy to take the short view,
281
00:11:16.530 --> 00:11:18.032
it can be hard to take the long view.
282
00:11:18.032 --> 00:11:20.239
I think if you have sort of an ethical lens
283
00:11:20.239 --> 00:11:22.213
it's important to balance both.
284
00:11:22.213 --> 00:11:24.320
Yeah, and also I think you're raising
285
00:11:24.320 --> 00:11:29.180
an interesting point, which is that with AI particularly,
286
00:11:29.180 --> 00:11:34.180
the consequences of a misstep are very long-term
287
00:11:36.000 --> 00:11:38.590
because the algorithms keep getting embedded
288
00:11:38.590 --> 00:11:40.010
and they multiply
289
00:11:40.010 --> 00:11:44.900
and by the time you find out, it might not be as easy
290
00:11:44.900 --> 00:11:47.710
as just replacing it with a different one
291
00:11:47.710 --> 00:11:50.280
because it has a cascading effect.
292
00:11:50.280 --> 00:11:51.701
On the point about innovation,
293
00:11:51.701 --> 00:11:56.392
AI can play a role in helping us be more ethical.
294
00:11:56.392 --> 00:11:58.071
We've seen examples,
295
00:11:58.071 --> 00:12:01.930
I think one of our guests talked about MasterCard, right?
296
00:12:01.930 --> 00:12:05.838
How they're using AI to understand the unconscious
297
00:12:05.838 --> 00:12:10.337
or unintended biases that their employees might have.
298
00:12:10.337 --> 00:12:13.900
What are your views on that, on AI specifically
299
00:12:13.900 --> 00:12:17.520
as a tool to really give us a better lens
300
00:12:17.520 --> 00:12:20.400
into biases that might exist?
301
00:12:20.400 --> 00:12:22.258
I think the challenge with AI is
302
00:12:22.258 --> 00:12:25.560
that it's so broad and definitions of AI
303
00:12:25.560 --> 00:12:27.810
really abound, and there isn't complete consensus
304
00:12:27.810 --> 00:12:29.518
around what we're even talking about.
305
00:12:29.518 --> 00:12:31.620
And so I think there's the risk that we sort of
306
00:12:31.620 --> 00:12:34.160
use this broad brush to characterize things
307
00:12:34.160 --> 00:12:36.640
that may or may not be beneficial.
308
00:12:36.640 --> 00:12:39.330
And then we run the risk of decontextualizing.
309
00:12:39.330 --> 00:12:41.820
So we can say, you know, we have a better outcome
310
00:12:41.820 --> 00:12:42.940
but relative to what?
311
00:12:42.940 --> 00:12:45.220
Or what were the trade-offs involved?
312
00:12:45.220 --> 00:12:47.240
And I think it's not just AI
313
00:12:47.240 --> 00:12:49.520
but it's the combination of a lot of new
314
00:12:49.520 --> 00:12:52.430
and advanced technologies that together are more
315
00:12:52.430 --> 00:12:53.810
than the sum of their parts, right?
316
00:12:53.810 --> 00:12:55.870
So AI plus network technologies,
317
00:12:55.870 --> 00:12:58.390
plus some of the ones I've mentioned earlier,
318
00:12:58.390 --> 00:13:02.050
I think are that much harder to sort of unwind
319
00:13:02.050 --> 00:13:04.290
or course correct, or, you know,
320
00:13:04.290 --> 00:13:06.070
remedy when things go wrong.
321
00:13:06.070 --> 00:13:08.422
So one of the challenges I see in the space
322
00:13:08.422 --> 00:13:11.020
is that again we can tweak around the edges
323
00:13:11.020 --> 00:13:13.470
and we'll look at a specific implementation
324
00:13:13.470 --> 00:13:14.990
or a specific tech stack
325
00:13:14.990 --> 00:13:17.360
and we won't look at it in the broader context.
326
00:13:17.360 --> 00:13:19.029
It's how does that fit into a system
327
00:13:19.029 --> 00:13:20.940
and what are the feedback loops
328
00:13:20.940 --> 00:13:24.260
and what are the implications for the system as a whole?
329
00:13:24.260 --> 00:13:26.029
And I think that's one of the areas where
330
00:13:26.029 --> 00:13:28.980
the technology ethics conversation is really useful
331
00:13:28.980 --> 00:13:32.400
particularly when you look at things like relational ethics
332
00:13:32.400 --> 00:13:34.450
and things that are a lot more concerned
333
00:13:34.450 --> 00:13:36.104
with systems and relationships
334
00:13:36.104 --> 00:13:39.470
and the interdependencies between them.
335
00:13:39.470 --> 00:13:40.830
I worry that we're a little too soon
336
00:13:40.830 --> 00:13:42.640
to declare victory there
337
00:13:42.640 --> 00:13:45.470
but definitely something to keep an eye on.
338
00:13:45.470 --> 00:13:46.430
Yeah I mean as you say,
339
00:13:46.430 --> 00:13:47.890
the devil's in the detail.
340
00:13:47.890 --> 00:13:49.460
This is the beginning of having a dialogue
341
00:13:49.460 --> 00:13:51.892
and having a conversation on a topic that otherwise,
342
00:13:51.892 --> 00:13:54.380
you know, would not even be on the radar
343
00:13:54.380 --> 00:13:56.283
of many, many people.
344
00:13:56.283 --> 00:14:00.280
What is your advice to executives and technologists
345
00:14:00.280 --> 00:14:04.270
that are right now building technology and algorithms?
346
00:14:04.270 --> 00:14:08.910
Like what do they do in these early stages
347
00:14:08.910 --> 00:14:11.690
of having this dialogue?
348
00:14:11.690 --> 00:14:13.711
Yeah, that's a tough question.
349
00:14:13.711 --> 00:14:16.270
Of course it depends on their role.
350
00:14:16.270 --> 00:14:17.870
So you can see how the incentives are very different
351
00:14:17.870 --> 00:14:20.010
for employees versus, you know,
352
00:14:20.010 --> 00:14:23.330
executives, versus shareholders, or board members.
353
00:14:23.330 --> 00:14:25.380
So thinking about those incentives is important
354
00:14:25.380 --> 00:14:28.010
in terms of framing the way to approach this.
355
00:14:28.010 --> 00:14:30.652
That being said, there are a lot of resources now
356
00:14:30.652 --> 00:14:32.959
and there's a lot available in terms of self-education.
357
00:14:32.959 --> 00:14:35.380
And so I don't really think there's an excuse
358
00:14:35.380 --> 00:14:38.000
at this point to not really understand
359
00:14:38.000 --> 00:14:41.070
the pillars of the conversation, the core texts,
360
00:14:41.070 --> 00:14:42.658
the core materials, the core videos,
361
00:14:42.658 --> 00:14:45.080
some of the principles that we talked about before.
362
00:14:45.080 --> 00:14:47.997
I think there's so much available by way of research
363
00:14:47.997 --> 00:14:50.742
and tools and materials to understand
364
00:14:50.742 --> 00:14:54.790
what's at stake, that to not think about one's work
365
00:14:54.790 --> 00:14:58.010
in that context feels more than negligent at this point.
366
00:14:58.010 --> 00:14:59.178
It almost feels reckless in some ways.
367
00:14:59.178 --> 00:15:02.715
Nevertheless, I think the important thing is to
368
00:15:02.715 --> 00:15:05.534
contextualize your work, to take a step back.
369
00:15:05.534 --> 00:15:07.265
This is really hard for corporations,
370
00:15:07.265 --> 00:15:09.155
especially ones with shareholders.
371
00:15:09.155 --> 00:15:13.270
So we can understand that, we can hold both as true
372
00:15:13.270 --> 00:15:17.415
at the same time, and think about taking it upon yourself.
373
00:15:17.415 --> 00:15:20.710
You know, there are more formal means of education
374
00:15:20.710 --> 00:15:23.310
so one of the things that we are doing at the lab, of course,
375
00:15:23.310 --> 00:15:26.570
is trying to develop a very tangible curriculum
376
00:15:26.570 --> 00:15:28.981
for exactly the stakeholders that you mentioned,
377
00:15:28.981 --> 00:15:31.200
and with the specific idea
378
00:15:31.200 --> 00:15:32.960
to take some of the core scholarship
379
00:15:32.960 --> 00:15:34.880
and translate it into practice,
380
00:15:34.880 --> 00:15:36.590
that would become a useful tool as well.
381
00:15:36.590 --> 00:15:38.220
But at the end of the day, I think it's a matter
382
00:15:38.220 --> 00:15:41.620
of perspective and accepting responsibility for the fact
383
00:15:41.620 --> 00:15:43.740
that no one person can solve this,
384
00:15:43.740 --> 00:15:45.520
at the same time, we can't solve this
385
00:15:45.520 --> 00:15:49.710
unless everyone sort of acknowledges that they play a part.
386
00:15:49.710 --> 00:15:51.730
And that ties into the things your lab
387
00:15:51.730 --> 00:15:52.673
is doing because you know,
388
00:15:52.673 --> 00:15:55.670
I think the idea of everybody learning a lot
389
00:15:55.670 --> 00:15:58.390
about ethics kind of makes sense at one level,
390
00:15:58.390 --> 00:16:00.830
on the other hand, we also know we've seen with privacy
391
00:16:00.830 --> 00:16:04.590
that people are lazy and we are all somewhat lazy,
392
00:16:04.590 --> 00:16:06.550
we'll trade short term for long term.
393
00:16:06.550 --> 00:16:10.060
And it seems like some of what your lab is trying to set up
394
00:16:10.060 --> 00:16:11.595
is making that infrastructure available
395
00:16:11.595 --> 00:16:14.745
to reduce the cost, to make it easier for practitioners
396
00:16:14.745 --> 00:16:17.787
to get access to those sorts of tools.
397
00:16:17.787 --> 00:16:19.320
Yeah and I think education
398
00:16:19.320 --> 00:16:21.080
is not a substitute for regulation.
399
00:16:21.080 --> 00:16:23.090
So I think ultimately
400
00:16:23.090 --> 00:16:25.740
it's not on individuals, it's not on consumers.
401
00:16:25.740 --> 00:16:27.060
My remarks shouldn't be taken
402
00:16:27.060 --> 00:16:29.630
as saying that the responsibility to really, you know
403
00:16:29.630 --> 00:16:33.250
reduce or mitigate harms is on individuals entirely.
404
00:16:33.250 --> 00:16:36.240
I think the point is that we just have to be careful
405
00:16:36.240 --> 00:16:37.880
that we don't wait for regulation.
406
00:16:37.880 --> 00:16:39.630
One of the things that I particularly like
407
00:16:39.630 --> 00:16:41.550
about the technology ethics space
408
00:16:41.550 --> 00:16:44.320
is that it takes away the excuse to not think
409
00:16:44.320 --> 00:16:47.050
about these things before we're forced to, right?
410
00:16:47.050 --> 00:16:48.630
So I think in the past
411
00:16:48.630 --> 00:16:50.250
there's sort of been this luxury in tech
412
00:16:50.250 --> 00:16:53.340
of waiting to be forced into taking decisions
413
00:16:53.340 --> 00:16:56.742
or making trade-offs, or confronting issues.
414
00:16:56.742 --> 00:16:59.298
Now, I would say with tech ethics,
415
00:16:59.298 --> 00:17:01.270
you can't really do that anymore.
416
00:17:01.270 --> 00:17:02.640
I think the zeitgeist has changed,
417
00:17:02.640 --> 00:17:03.950
the market has changed.
418
00:17:03.950 --> 00:17:06.962
Things are so far from perfect, things are far from good
419
00:17:06.962 --> 00:17:09.610
but at least in that regard,
420
00:17:09.610 --> 00:17:10.800
you can't hide from this.
421
00:17:10.800 --> 00:17:12.687
I think in that way
422
00:17:12.687 --> 00:17:14.660
they're at least somewhat better than they were.
423
00:17:14.660 --> 00:17:15.493
I also feel like
424
00:17:15.493 --> 00:17:19.263
part of that is many organizations, to Elizabeth's point,
425
00:17:19.263 --> 00:17:22.010
not only do they not have the dialogue,
426
00:17:22.010 --> 00:17:24.000
even if they did, they don't have
427
00:17:24.000 --> 00:17:28.182
the necessary infrastructure, or investments,
428
00:17:28.182 --> 00:17:31.300
or incentives to actually have those conversations.
429
00:17:31.300 --> 00:17:34.490
And so I think I go back to your earlier point, Elizabeth,
430
00:17:34.490 --> 00:17:36.840
that is like, you know, we have to have the right incentives
431
00:17:36.840 --> 00:17:39.150
and organizations have to, with or without
432
00:17:39.150 --> 00:17:42.255
the regulation, have the investment and the incentives
433
00:17:42.255 --> 00:17:46.440
to actually put in place the tools and resources
434
00:17:46.440 --> 00:17:49.890
to have these conversations and make an impact.
435
00:17:49.890 --> 00:17:51.440
You have to also align the incentives.
436
00:17:51.440 --> 00:17:52.330
Some of these companies,
437
00:17:52.330 --> 00:17:54.417
I think actually want to do the right thing,
438
00:17:54.417 --> 00:17:56.853
but again, they're sort of beholden to quarterly reports
439
00:17:56.853 --> 00:17:59.490
and shareholders and resolutions
440
00:17:59.490 --> 00:18:01.210
and they need the incentives,
441
00:18:01.210 --> 00:18:03.630
they need the backing from the outside to be able to
442
00:18:03.630 --> 00:18:05.170
do what it is that is probably
443
00:18:05.170 --> 00:18:07.100
in their longer-term interest.
444
00:18:07.100 --> 00:18:08.874
You mentioned incentives a few times.
445
00:18:08.874 --> 00:18:11.970
Can we get some specifics for things that we could do
446
00:18:11.970 --> 00:18:15.330
around that to help align those incentives better?
447
00:18:15.330 --> 00:18:16.550
What would do it?
448
00:18:16.550 --> 00:18:18.570
I think the sort of process-oriented
449
00:18:18.570 --> 00:18:19.920
regulations make sense, right?
450
00:18:19.920 --> 00:18:22.410
So what incentive does the company have right now
451
00:18:22.410 --> 00:18:23.800
to audit its algorithms
452
00:18:23.800 --> 00:18:25.973
and then be transparent about the results?
453
00:18:25.973 --> 00:18:28.370
None, they might actually want to know that,
454
00:18:28.370 --> 00:18:30.620
they might actually want an independent third-party audit
455
00:18:30.620 --> 00:18:33.250
that might actually be helpful from a risk standpoint.
456
00:18:33.250 --> 00:18:36.210
If you have a law that says you have to do it,
457
00:18:36.210 --> 00:18:38.010
most companies will probably do it.
458
00:18:38.010 --> 00:18:40.182
So I think those types of, you know,
459
00:18:40.182 --> 00:18:41.540
they're not even nudges,
460
00:18:41.540 --> 00:18:44.540
I mean they're clear interventions, are really useful.
461
00:18:44.540 --> 00:18:46.540
I think the same is true of
462
00:18:46.540 --> 00:18:48.932
things like board expertise and composition.
463
00:18:48.932 --> 00:18:50.415
And we may want to think about
464
00:18:50.415 --> 00:18:53.600
whether it's useful to have super class share structures
465
00:18:53.600 --> 00:18:57.240
in Silicon Valley where basically no one has
466
00:18:57.240 --> 00:18:59.153
any control over the company's destiny apart from,
467
00:18:59.153 --> 00:19:01.267
you know, one or two people.
468
00:19:01.267 --> 00:19:03.530
So I think these are all again,
469
00:19:03.530 --> 00:19:06.391
common interventions in other sectors and other industries.
470
00:19:06.391 --> 00:19:08.690
And the problem is that this sort of
471
00:19:08.690 --> 00:19:11.900
technology exceptionalism was problematic before,
472
00:19:11.900 --> 00:19:13.972
but now when every company is a tech company,
473
00:19:13.972 --> 00:19:15.775
the problem has just metastasized
474
00:19:15.775 --> 00:19:18.880
to a completely different scale.
475
00:19:18.880 --> 00:19:20.620
The analogy I think about is food.
476
00:19:20.620 --> 00:19:23.430
I mean, anything that sells food now we want them
477
00:19:23.430 --> 00:19:25.120
to follow food regulations.
478
00:19:25.120 --> 00:19:27.050
That certainly wasn't the case a hundred years ago
479
00:19:27.050 --> 00:19:29.690
when Upton Sinclair wrote The Jungle.
480
00:19:29.690 --> 00:19:31.560
I mean, it took that to bring that sort
481
00:19:31.560 --> 00:19:34.307
of transparency and scrutiny to food-related processes
482
00:19:34.307 --> 00:19:37.660
but we don't make exceptions now for, oh well
483
00:19:37.660 --> 00:19:40.200
you know, you're just feeding a hundred people.
484
00:19:40.200 --> 00:19:42.233
We're not going to force you
485
00:19:42.233 --> 00:19:43.758
to comply with health regulations.
486
00:19:43.758 --> 00:19:45.130
Exactly.
487
00:19:45.130 --> 00:19:47.482
Yeah, I think that's actually
488
00:19:47.482 --> 00:19:48.315
a very good analogy Sam,
489
00:19:48.315 --> 00:19:51.004
because as I was thinking about what Elizabeth was saying,
490
00:19:51.004 --> 00:19:55.180
my mind also went to ignorance.
491
00:19:55.180 --> 00:19:57.470
I mean, I think many users
492
00:19:57.470 --> 00:20:00.575
of a lot of these technologies, highly, highly
493
00:20:00.575 --> 00:20:02.590
you know, senior people, highly,
494
00:20:02.590 --> 00:20:05.527
highly educated people may not even be aware
495
00:20:05.527 --> 00:20:10.527
of what the outputs are or what the interim outputs are
496
00:20:10.990 --> 00:20:13.525
or how they come about, or like what all
497
00:20:13.525 --> 00:20:17.700
of the hundreds and thousands of features that give rise
498
00:20:17.700 --> 00:20:20.660
to what the algorithm is doing are actually doing.
499
00:20:20.660 --> 00:20:23.530
And so it's a little bit like the ingredients
500
00:20:23.530 --> 00:20:25.310
in food where we had no idea
501
00:20:25.310 --> 00:20:28.193
some things are bad for us and some things would kill us
502
00:20:28.193 --> 00:20:30.670
and some things that we thought were better for us
503
00:20:30.670 --> 00:20:33.310
than the other bad thing are actually worse for us.
504
00:20:33.310 --> 00:20:37.140
So I think all of that is about bringing some light
505
00:20:37.140 --> 00:20:41.805
into it: education as well as regulation and incentives.
506
00:20:41.805 --> 00:20:43.490
The point is we acted
507
00:20:43.490 --> 00:20:45.760
before we had perfect information and knowledge.
508
00:20:45.760 --> 00:20:47.895
And I think there's a tendency in this space to say,
509
00:20:47.895 --> 00:20:49.170
we can't do anything,
510
00:20:49.170 --> 00:20:52.147
we can't intervene until we know exactly
511
00:20:52.147 --> 00:20:53.290
what this tech is,
512
00:20:53.290 --> 00:20:54.930
what the innovation looks like.
513
00:20:54.930 --> 00:20:56.330
You know, we got food wrong, right?
514
00:20:56.330 --> 00:20:58.185
We had the wrong dietary guidelines.
515
00:20:58.185 --> 00:20:59.610
We readjusted them.
516
00:20:59.610 --> 00:21:01.020
We came back to the drawing board
517
00:21:01.020 --> 00:21:03.380
and we recalibrated; the American diet looks different.
518
00:21:03.380 --> 00:21:05.208
I mean, it's still atrocious, but we can rethink it.
519
00:21:05.208 --> 00:21:08.573
But we keep revisiting it, which is your point.
520
00:21:08.573 --> 00:21:09.840
We keep revisiting and we reiterate.
521
00:21:09.840 --> 00:21:11.300
And so that's exactly what we need to do
522
00:21:11.300 --> 00:21:13.300
in this space and say, based on what we know now
523
00:21:13.300 --> 00:21:14.310
and that's science, right?
524
00:21:14.310 --> 00:21:16.553
Fundamentally science is sort of the consensus we have
525
00:21:16.553 --> 00:21:17.740
at a given time.
526
00:21:17.740 --> 00:21:18.800
It doesn't mean it's perfect.
527
00:21:18.800 --> 00:21:20.870
It doesn't mean it won't change, but it means
528
00:21:20.870 --> 00:21:22.377
that we don't get paralyzed, but we act
529
00:21:22.377 --> 00:21:25.110
with the best knowledge that we have and the humility
530
00:21:25.110 --> 00:21:28.080
that we'll probably have to change this or look at it again.
531
00:21:28.080 --> 00:21:30.510
So, you know, the same thing happened with the pandemic
532
00:21:30.510 --> 00:21:33.940
where we have the WHO saying that masks weren't effective
533
00:21:33.940 --> 00:21:35.320
and then changing course.
534
00:21:35.320 --> 00:21:38.950
But we respect that process because there's the humility
535
00:21:38.950 --> 00:21:40.740
and the transparency to say
536
00:21:40.740 --> 00:21:42.905
that this is how we're going to operate collectively
537
00:21:42.905 --> 00:21:46.900
because we can't afford to just do nothing.
538
00:21:46.900 --> 00:21:49.090
And I think that's where we are right now.
539
00:21:49.090 --> 00:21:49.923
Very well said.
540
00:21:49.923 --> 00:21:50.800
Well,
541
00:21:50.800 --> 00:21:53.170
I really like how you illustrate all these benefits
542
00:21:53.170 --> 00:21:56.880
and how you make that a concrete thing for people.
543
00:21:56.880 --> 00:21:59.250
And I hope that the lab takes off and does well
544
00:21:59.250 --> 00:22:01.190
and makes some progress and provides some infrastructure
545
00:22:01.190 --> 00:22:02.950
for people to make it easier for that.
546
00:22:02.950 --> 00:22:04.800
Thank you for taking the time to talk with us today.
547
00:22:04.800 --> 00:22:06.120
Yeah, thank you so much.
548
00:22:06.120 --> 00:22:07.190
Thanks so much for having me.
549
00:22:07.190 --> 00:22:08.788
This was great.
550
00:22:08.788 --> 00:22:11.205
(calm music)
551
00:22:14.337 --> 00:22:15.720
Shervin, Elizabeth had a lot of good points
552
00:22:15.720 --> 00:22:17.841
about getting started now.
553
00:22:17.841 --> 00:22:18.710
What struck you as interesting
554
00:22:18.710 --> 00:22:22.610
or what struck you as a way that companies could start now?
555
00:22:22.610 --> 00:22:25.210
I think the most striking thing she said,
556
00:22:25.210 --> 00:22:27.570
I mean she said a lot of very, very insightful things
557
00:22:27.570 --> 00:22:31.820
but in terms of how to get going, she made it very simple.
558
00:22:31.820 --> 00:22:33.559
She said, look, this is ultimately about values.
559
00:22:33.559 --> 00:22:35.811
And if it's something you care about
560
00:22:35.811 --> 00:22:38.788
and we know many, many organizations and many,
561
00:22:38.788 --> 00:22:42.330
many people and many very senior people
562
00:22:42.330 --> 00:22:44.340
and powerful people do care about it.
563
00:22:44.340 --> 00:22:46.580
If you care about it, then do something about it.
564
00:22:46.580 --> 00:22:48.360
But the striking thing she said is
565
00:22:48.360 --> 00:22:50.390
that you have to have the right people
566
00:22:50.390 --> 00:22:53.216
at the table and you have to start having the conversations.
567
00:22:53.216 --> 00:22:56.400
And as you said, Sam, this is a business problem.
568
00:22:56.400 --> 00:22:57.760
That's a very managerial thing.
569
00:22:57.760 --> 00:22:59.040
Yeah, it's a managerial thing,
570
00:22:59.040 --> 00:23:02.590
it's about allocation of resources to solve a problem.
571
00:23:02.590 --> 00:23:04.693
And it is a fact that some organizations do allocate
572
00:23:04.693 --> 00:23:08.738
resources to responsible AI and governance,
573
00:23:08.738 --> 00:23:13.380
AI governance and ethical AI, and some organizations don't.
574
00:23:13.380 --> 00:23:15.960
And so I think the key lesson here
575
00:23:15.960 --> 00:23:17.310
is that if you care about it
576
00:23:17.310 --> 00:23:19.320
you don't have to wait for all the regulation
577
00:23:19.320 --> 00:23:20.620
to settle down.
578
00:23:20.620 --> 00:23:22.780
I liked her point about revisiting it as well.
579
00:23:22.780 --> 00:23:26.120
And that comes with the idea of not starting perfectly:
580
00:23:26.120 --> 00:23:28.198
just plan to come back to it, plan to revisit it
581
00:23:28.198 --> 00:23:31.620
because these things, even as you said, Shervin
582
00:23:31.620 --> 00:23:34.580
if you got it perfect, technology would change on us.
583
00:23:34.580 --> 00:23:36.010
Exactly and you would never know
584
00:23:36.010 --> 00:23:37.462
you got it perfect.
585
00:23:37.462 --> 00:23:38.295
You never know you got it perfect.
586
00:23:38.295 --> 00:23:41.380
Yeah, the perfection would be lost in immortality, so.
587
00:23:41.380 --> 00:23:42.900
I'm still trying to figure out if coffee
588
00:23:42.900 --> 00:23:45.260
is good for your heart or bad for your heart
589
00:23:45.260 --> 00:23:48.900
because it's gone from good to bad many times.
590
00:23:48.900 --> 00:23:50.460
Well, I mean, and I think that's, you know,
591
00:23:50.460 --> 00:23:52.350
some of what people face with a complex problem.
592
00:23:52.350 --> 00:23:54.480
I mean, if this was an easy problem,
593
00:23:54.480 --> 00:23:56.061
we wouldn't be having this conversation.
594
00:23:56.061 --> 00:23:58.350
If there were simple solutions, you know,
595
00:23:58.350 --> 00:24:00.540
if people are tuning in to say, alright,
596
00:24:00.540 --> 00:24:02.256
here are the four things that I need to do to solve
597
00:24:02.256 --> 00:24:04.828
ethical problems with artificial intelligence,
598
00:24:04.828 --> 00:24:08.130
we're not going to be able to offer that.
599
00:24:08.130 --> 00:24:09.650
We're not quite that Buzzfeed level
600
00:24:09.650 --> 00:24:12.600
of being able to say, here's what we can do
601
00:24:12.600 --> 00:24:13.456
because it's hard.
602
00:24:13.456 --> 00:24:15.400
The other thing that struck me is that,
603
00:24:15.400 --> 00:24:18.566
you know she has a lot of education and passion
604
00:24:18.566 --> 00:24:20.876
in this space that I think is actually quite contagious
605
00:24:20.876 --> 00:24:24.100
because I think that's exactly the mentality
606
00:24:24.100 --> 00:24:26.406
and the attitude that many organizations
607
00:24:26.406 --> 00:24:30.390
can start to be inspired by and adopt
608
00:24:30.390 --> 00:24:32.528
to start moving in the right direction
609
00:24:32.528 --> 00:24:34.820
rather than waiting for government
610
00:24:34.820 --> 00:24:37.030
or regulation to solve this problem.
611
00:24:37.030 --> 00:24:41.507
We can all take a role in becoming more responsible
612
00:24:41.507 --> 00:24:45.650
and more ethical with AI starting now.
613
00:24:45.650 --> 00:24:48.240
We already have the right values
614
00:24:48.240 --> 00:24:50.300
and we already know what's important.
615
00:24:50.300 --> 00:24:51.760
Nothing is really stopping us
616
00:24:51.760 --> 00:24:54.700
from having those dialogues and making those changes.
617
00:24:54.700 --> 00:24:56.030
(calm music)
618
00:24:56.030 --> 00:24:57.630
Thanks for listening to season two
619
00:24:57.630 --> 00:24:59.440
of Me, Myself, and AI.
620
00:24:59.440 --> 00:25:00.410
We'll be back in the Fall
621
00:25:00.410 --> 00:25:02.420
with new episodes for season three.
622
00:25:02.420 --> 00:25:03.340
And in the meantime,
623
00:25:03.340 --> 00:25:06.140
we're dropping a bonus episode on July 13th.
624
00:25:06.140 --> 00:25:08.250
Join us as we interview Dave Johnson,
625
00:25:08.250 --> 00:25:10.410
Chief Data and Artificial Intelligence Officer
626
00:25:10.410 --> 00:25:12.269
at Moderna, about how the company used AI
627
00:25:12.269 --> 00:25:16.034
to accelerate its development of the COVID-19 vaccine.
628
00:25:16.034 --> 00:25:19.130
In the meantime, to continue the conversation with us,
629
00:25:19.130 --> 00:25:21.440
you can find us in a special LinkedIn group created
630
00:25:21.440 --> 00:25:23.100
for listeners just like you.
631
00:25:23.100 --> 00:25:25.270
It's called AI for Leaders.
632
00:25:25.270 --> 00:25:27.170
We'll put a link to it in the show notes
633
00:25:27.170 --> 00:25:30.610
or you can visit MITSMR.com/aiforleaders
634
00:25:30.610 --> 00:25:33.830
to be redirected to the LinkedIn page.
635
00:25:33.830 --> 00:25:35.910
Request to join and as soon as you do,
636
00:25:35.910 --> 00:25:37.890
you'll be able to catch up on back episodes
637
00:25:37.890 --> 00:25:39.490
of Me, Myself, and AI,
638
00:25:39.490 --> 00:25:41.198
talk with the show creators and hosts,
639
00:25:41.198 --> 00:25:42.930
meet some of the guests,
640
00:25:42.930 --> 00:25:45.380
and share other resources that help business leaders
641
00:25:45.380 --> 00:25:47.200
stay on top of all things AI.
642
00:25:47.200 --> 00:25:48.371
Talk to you soon.
643
00:25:48.371 --> 00:25:50.788
(calm music)