WEBVTT

00:00:00.000 --> 00:00:03.680
GenAI is a probability engine. If you're not the probable answer,

00:00:03.680 --> 00:00:06.320
then you're probably not the possible answer either.

00:00:07.680 --> 00:00:09.840
We're joined by Rebecca from The Brandtech Group,

00:00:09.840 --> 00:00:10.600
welcome.

00:00:10.600 --> 00:00:12.000
Thank you so much. Rebecca,

00:00:12.320 --> 00:00:17.160
bias in AI has been a problem, especially for marketing leaders, ever since

00:00:17.160 --> 00:00:18.080
its inception.

00:00:18.600 --> 00:00:21.720
Why is it proving to be such a tough problem to solve even now,

00:00:21.720 --> 00:00:22.560
so many years on?

00:00:23.520 --> 00:00:25.760
I think the challenge is the opacity.

00:00:25.800 --> 00:00:28.800
So, most marketing leaders aren't building AI,

00:00:28.800 --> 00:00:30.000
they're leveraging AI.

00:00:30.720 --> 00:00:34.840
And so the responsibility to find a solution sits, in some way,

00:00:34.840 --> 00:00:36.960
with the people building AI.

00:00:37.560 --> 00:00:40.160
And it's really baked into the very fabric of the industry.

00:00:40.160 --> 00:00:45.120
So, you may not know this, but almost 92% of AI software developers

00:00:45.120 --> 00:00:46.400
identify as male.

00:00:46.600 --> 00:00:49.640
Only 1% of investment went to female founders in this space.

00:00:49.920 --> 00:00:53.360
So I just think probability is not on your side. You know,

00:00:53.360 --> 00:00:57.040
GenAI is a probability engine. If you're not the probable answer,

00:00:57.040 --> 00:01:01.120
then you're probably not the possible answer either, because nobody who's

00:01:01.120 --> 00:01:05.640
building AI is focused on making fringe representation part of the core.

00:01:06.000 --> 00:01:07.080
And that's really the challenge.

00:01:07.320 --> 00:01:11.600
What sort of strategies are you seeing brands or platforms deploy to

00:01:11.600 --> 00:01:13.120
try and mitigate that bias?

00:01:13.120 --> 00:01:16.920
What's working? At the moment, I would say it's largely avoidance.

00:01:16.960 --> 00:01:20.520
So lots of brands we're seeing have a strategy that says we won't generate

00:01:20.520 --> 00:01:24.480
synthetic humans, or we will be very careful about the representation of real people.

00:01:24.480 --> 00:01:26.360
They'll stick to low-risk use cases.

00:01:26.600 --> 00:01:27.800
That is really loosening now.

00:01:27.800 --> 00:01:31.320
I think the evolution of the frontier video models means that you really have

00:01:31.320 --> 00:01:34.760
to find a new position that you're comfortable with ethically for your

00:01:34.760 --> 00:01:36.000
responsible AI practice.

00:01:36.600 --> 00:01:40.200
And then, when it comes to really considering the inclusion of humans,

00:01:40.200 --> 00:01:42.520
you need a technical level of mitigation.

00:01:42.520 --> 00:01:44.440
You need something baked into the technology.

00:01:44.440 --> 00:01:48.720
You can't just put that responsibility at the door of prompters to always remember

00:01:48.720 --> 00:01:52.320
to prompt consciously, to have a full grasp of inclusivity every

00:01:52.320 --> 00:01:53.600
time they generate.

00:01:54.080 --> 00:01:58.040
And what would your advice be to marketing leaders who are trying to scale

00:01:58.040 --> 00:02:01.320
GenAI, but trying to do it responsibly, without

00:02:01.320 --> 00:02:03.000
slowing down innovation?
00:02:03.520 --> 00:02:06.040
Yeah, that's where I think you want to find a

00:02:06.040 --> 00:02:10.440
great technical mitigation and then deploy it at scale inside of your business.

00:02:10.640 --> 00:02:14.200
So the way that we've tried to overcome this challenge is we've

00:02:14.200 --> 00:02:16.080
prototyped a tool called Bias Breaker.

00:02:16.560 --> 00:02:20.320
It's a level of technical mitigation that sits on top of any image or video

00:02:20.320 --> 00:02:21.160
generation model.

00:02:21.360 --> 00:02:24.560
We configured it starting with six dice for the six most

00:02:24.560 --> 00:02:28.920
common forms of inclusivity: gender, age, ethnicity, religion, disability, et cetera.

00:02:29.680 --> 00:02:32.200
And when you enter a simple prompt, we roll those dice,

00:02:32.200 --> 00:02:34.560
we give you up to two forms of inclusivity back,

00:02:34.560 --> 00:02:37.640
woven into your prompt as a more sophisticated prompt that you can then

00:02:37.640 --> 00:02:38.840
use in any of those models.

00:02:38.880 --> 00:02:43.240
And I think being able to bring that level of randomized representation,

00:02:43.240 --> 00:02:47.720
without having to bring your own sort of tokenistic or unconscious bias to the

00:02:47.720 --> 00:02:52.400
process, is a really powerful way to bake the solution into the technology itself.

00:02:52.720 --> 00:02:53.640
Hugely promising.

00:02:53.640 --> 00:02:56.000
Rebecca, thanks so much for joining us here in Cannes.

00:02:56.000 --> 00:02:56.800
My absolute pleasure.
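
NOTE
Editor's addition: a minimal, hypothetical sketch of the "dice roll" prompt
augmentation described above. The dimension names, attribute lists, weaving
step, and the bias_breaker function are all illustrative assumptions; the
interview does not describe Bias Breaker's actual implementation beyond the
mechanics of rolling dice and weaving up to two forms of inclusivity into a
prompt.

import random

# Six "dice": one per inclusivity dimension. The sixth dimension and every
# attribute value below are placeholders, not Brandtech's actual lists.
DICE = {
    "gender": ["a woman", "a man", "non-binary"],
    "age": ["in their 20s", "in their 40s", "in their 70s"],
    "ethnicity": ["of South Asian heritage", "of East Asian heritage", "Black", "Latino", "white"],
    "religion": ["visibly Muslim", "visibly Sikh", "visibly Jewish"],
    "disability": ["a wheelchair user", "a hearing-aid wearer", "a prosthetic-limb user"],
    "body_type": ["plus-size", "petite", "tall"],
}

def bias_breaker(prompt: str, max_dimensions: int = 2) -> str:
    """Roll the dice and weave up to max_dimensions randomly chosen
    inclusivity attributes into a simple image/video prompt."""
    # Interpret "up to two" as choosing one or two dimensions per roll.
    n = random.randint(1, max_dimensions)
    chosen = random.sample(list(DICE), n)
    # Sample one attribute value per chosen dimension.
    attributes = [random.choice(DICE[dim]) for dim in chosen]
    # Naive weaving: append a descriptive clause. A production tool would
    # rewrite the prompt more fluently, e.g. with an LLM pass.
    return f"{prompt}. The person shown is {', '.join(attributes)}."

# Example: "a barista making coffee" might come back as
# "a barista making coffee. The person shown is a woman, in their 70s."
# on one roll, and a different combination on the next.
print(bias_breaker("a barista making coffee"))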