WEBVTT
1
00:00:00.000 --> 00:00:01.950
Delighted to be here with Sylvain.
2
00:00:01.950 --> 00:00:04.290
To talk about artificial intelligence.
3
00:00:04.290 --> 00:00:07.380
All right, Sylvain, tell me what are the risks
4
00:00:07.380 --> 00:00:09.871
and opportunities that AI creates?
5
00:00:09.871 --> 00:00:14.871
With AI, we have massive opportunities in many fields.
6
00:00:14.970 --> 00:00:18.330
We can use AI to improve the operations of every company,
7
00:00:18.330 --> 00:00:22.080
be more resilient, be better on the CO2 front, for instance.
8
00:00:22.080 --> 00:00:24.840
We can bring customer engagement to the next level.
9
00:00:24.840 --> 00:00:27.450
We can improve medical treatments, education,
10
00:00:27.450 --> 00:00:29.172
agriculture, many things.
11
00:00:29.172 --> 00:00:32.100
I also see real risks.
12
00:00:32.100 --> 00:00:34.050
And the big risk is what happens
13
00:00:34.050 --> 00:00:38.430
if some of those algos suddenly run amok, do stupid things.
14
00:00:38.430 --> 00:00:40.596
Nobody's in the loop to sort of control them.
15
00:00:40.596 --> 00:00:44.040
So we'll be deploying AI in more and more fields
16
00:00:44.040 --> 00:00:46.170
in our daily lives, and we need to be more
17
00:00:46.170 --> 00:00:49.316
and more responsible in the way we'll be deploying AI.
18
00:00:49.316 --> 00:00:51.120
So how does that idea
19
00:00:51.120 --> 00:00:53.790
of being responsible with artificial intelligence
20
00:00:53.790 --> 00:00:55.978
fit in with the agendas of very busy CEOs?
21
00:00:55.978 --> 00:01:00.660
Very busy CEOs who need to put that topic
22
00:01:00.660 --> 00:01:02.460
front and center for them.
23
00:01:02.460 --> 00:01:06.000
Because responsible AI is not just about compliance
24
00:01:06.000 --> 00:01:09.060
and complying with regulations, it's much more.
25
00:01:09.060 --> 00:01:12.720
Complying with regulation is obviously table stakes.
26
00:01:12.720 --> 00:01:14.880
But basically, companies need to do more.
27
00:01:14.880 --> 00:01:17.580
They need to define their own ways
28
00:01:17.580 --> 00:01:19.920
of making sure they are very responsible.
29
00:01:19.920 --> 00:01:23.250
And we have a survey where most leaders would say,
30
00:01:23.250 --> 00:01:25.455
I do responsible AI because I need trust.
31
00:01:25.455 --> 00:01:28.530
I want to develop my business with customers.
32
00:01:28.530 --> 00:01:29.935
It's not a matter of compliance.
33
00:01:29.935 --> 00:01:33.300
And that's why the role of the CEO is so important,
34
00:01:33.300 --> 00:01:36.074
because many companies will have to make their own choices.
35
00:01:36.074 --> 00:01:40.253
Where do I consider that personalization becomes discrimination?
36
00:01:40.253 --> 00:01:43.500
How do I want people in the loop on critical applications
37
00:01:43.500 --> 00:01:44.790
in my own company?
38
00:01:44.790 --> 00:01:47.034
And it will touch the identity of the company,
39
00:01:47.034 --> 00:01:50.760
hence CEOs need to be active on this one.
40
00:01:50.760 --> 00:01:52.320
Really interesting.
41
00:01:52.320 --> 00:01:55.034
How do you think organizations can approach this idea
42
00:01:55.034 --> 00:01:57.572
of AI governance?
43
00:01:57.572 --> 00:02:00.060
They need to change the lens,
44
00:02:00.060 --> 00:02:01.890
because today, in many organizations,
45
00:02:01.890 --> 00:02:04.350
once you have a responsible AI committee,
46
00:02:04.350 --> 00:02:06.060
people think, OK, we are done.
47
00:02:06.060 --> 00:02:07.110
And in our last survey,
48
00:02:07.110 --> 00:02:09.073
we see that 55% of company leaders
49
00:02:09.073 --> 00:02:11.760
are super complacent with themselves,
50
00:02:11.760 --> 00:02:14.490
and they overestimate how well they are doing.
51
00:02:14.490 --> 00:02:15.870
It's not just about governance.
52
00:02:15.870 --> 00:02:18.720
It's a full function that needs to be built.
53
00:02:18.720 --> 00:02:21.990
You need teams to figure out how to test algorithms,
54
00:02:21.990 --> 00:02:23.475
how to make sure they are good,
55
00:02:23.475 --> 00:02:26.052
how to measure potential deviation over time
56
00:02:26.052 --> 00:02:27.630
around what algos are doing,
57
00:02:27.630 --> 00:02:30.212
who is vetting which tools you are allowed to use or not,
58
00:02:30.212 --> 00:02:32.220
how to engage people,
59
00:02:32.220 --> 00:02:34.590
and how to keep humans in the loop where it's necessary.
60
00:02:34.590 --> 00:02:37.410
So it's not just governance, it's much bigger.
61
00:02:37.410 --> 00:02:38.793
It's a full new function.
62
00:02:38.793 --> 00:02:41.410
And although most companies want to move,
63
00:02:41.410 --> 00:02:43.020
they're not there yet.
64
00:02:43.020 --> 00:02:45.870
Sylvain, thanks very much for stopping into the studio.