WEBVTT

00:00:00.160 --> 00:00:01.720
Anne, thank you so much for being here.

00:00:01.920 --> 00:00:04.840
What are some of the biggest risks organizations face when it

00:00:04.840 --> 00:00:05.840
comes to using GenAI?

00:00:06.880 --> 00:00:07.720
So GenAI

00:00:07.720 --> 00:00:11.960
systems have to be proficient, meaning they have to bring the

00:00:11.960 --> 00:00:13.960
intended value to the user.

00:00:14.200 --> 00:00:16.360
They have to be safe and equitable.

00:00:16.680 --> 00:00:17.840
They have to be secure.

00:00:18.000 --> 00:00:22.000
And, of course, they have to be compliant with regulation, for

00:00:22.000 --> 00:00:25.200
example, the regulation provided by the EU AI Act.

00:00:25.720 --> 00:00:30.520
And systems that aren't tested across all four of these

00:00:30.520 --> 00:00:34.920
dimensions bring risks to the organizations that

00:00:34.920 --> 00:00:36.360
actually employ them.

00:00:36.760 --> 00:00:39.560
So to expand on that a little bit, how can these companies

00:00:39.560 --> 00:00:42.560
really make sure their systems are both innovative and reliable?

00:00:42.880 --> 00:00:47.280
They should really test and evaluate them across the

00:00:47.280 --> 00:00:52.000
dimensions I mentioned: proficiency, safety and equity,

00:00:52.000 --> 00:00:52.840
and security.

00:00:53.240 --> 00:00:56.880
And we, as BCG X, have built an open-source

00:00:56.880 --> 00:01:01.640
solution, ARTKIT, which really helps in doing that and

00:01:01.640 --> 00:01:06.160
automating it to some extent by combining

00:01:06.160 --> 00:01:11.600
automation with human test-and-evaluation capabilities.

00:01:11.920 --> 00:01:16.840
So what can companies do to build and also maintain customer trust?

00:01:17.160 --> 00:01:22.600
For customer trust, I think that transparency is extremely important.

00:01:22.920 --> 00:01:27.880
So customers need to know: Am I interacting with an AI,

00:01:27.880 --> 00:01:29.720
or with a human?

00:01:29.720 --> 00:01:33.440
And that, by the way, is also something that is required from

00:01:33.440 --> 00:01:36.480
the perspective of the EU AI Act, for example.

00:01:36.520 --> 00:01:39.240
So transparency is key.

00:01:39.440 --> 00:01:43.200
But, of course, at the same time, it comes back to exactly

00:01:43.200 --> 00:01:45.120
the points I mentioned: the systems have to be helpful

00:01:45.480 --> 00:01:48.960
for the customer, they have to be free of hate speech, they

00:01:48.960 --> 00:01:51.800
have to be free of discrimination, and they also

00:01:51.800 --> 00:01:54.800
have to safeguard personal data, for example.

00:01:55.280 --> 00:01:56.320
Thank you so much.

00:01:57.040 --> 00:01:57.520
Thank you.