So everyone across your company is probably excited to experiment with generative AI, and we are seeing CEOs acting fast to protect their companies from the business risks of these experiments. As everyone knows by now, generative AI, when used without effective guidelines and mitigation strategies, puts your company at risk of copyright infringement, leaks of proprietary data, lower credibility from low-quality content, and more.

Many of these risks are relatively easy to mitigate by setting clear guidelines around the use of generative AI and enacting responsible norms, such as setting up review boards for content created with the support of generative AI. On top of that, thankfully, LLM [Large Language Model] providers are also working hard to build protections into the models to mitigate some of the better-known risks, such as copyright infringement, lack of truthfulness, and biased outputs. And of course, as these models get better, guidelines restricting the use of generative AI can be pulled back, but that will take time.

CEOs should start by reducing "shadow AI," enforcing policies that restrict and sanction the use of tools like ChatGPT. Then, they should work with their tech teams to create protections against leaks of sensitive data, such as sanitizing all data used with generative AI during these experiments. And finally, consider setting up a "red team" to deliberately find failure modes and vulnerabilities that generative AI creates for the business, such as unexpected functionality in applications and enhanced capabilities for fraud and phishing schemes.
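
For the data-sanitization step mentioned above, a tech team might start with something as simple as pattern-based redaction applied before any text leaves the company and reaches an external model. The snippet below is a minimal sketch, assuming a Python environment; the pattern names, placeholders, and example key format are illustrative assumptions, not any specific vendor's API, and a production setup would rely on a vetted PII- and secret-scanning tool.

import re

# Illustrative redaction patterns -- assumptions for this sketch only.
# A real deployment would use patterns and tooling reviewed by the security team.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder
    before the text is sent to an external generative AI service."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this email from jane.doe@example.com using key sk-abc123def456ghi789."
    print(sanitize_prompt(prompt))
    # -> Summarize this email from [REDACTED_EMAIL] using key [REDACTED_API_KEY].

The same sanitize_prompt step could sit in front of whichever LLM client the experiments use, so that every prompt passes through it by default rather than relying on each employee to remember to scrub their own input.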