Interviewer: Matt, thank you so much for joining us. So tell us a little bit about the work that Clarifai is doing.

Matt: Sure, thanks for having me. The best way to think about Clarifai is as a full-stack AI platform. We have all the tools in one place for you to take your idea and get it into production with AI. That means tools for data labeling, which is always the first step in customizing the AI to recognize what you care about; managing that data; training models to recognize your data the way you care about; and then evaluating those models to see how accurate they are. And then there's all the production-scale inference you need to take that into production and get feedback from your users interacting with the AI in real time. And just today, actually, we launched a new capability we're calling compute orchestration, which takes all those workloads and lets you run them anywhere: on any cloud, your own cloud, or even on-premise.

Interviewer: So what challenges do companies face when taking AI from prototype to production?

Matt: Yeah, there are quite a few. Some that come to mind: people tend to overestimate how simple it is when they see something cool on social media, you know, somebody hacking around with, say, OpenAI's models. Ever since ChatGPT, that's been a common trend. And you can prompt these models to do interesting things really easily. But then you have to think about whether you're going to trust that in front of your users. Is it reliably going to give you back that same result that happened the one time you posted it on social media, over and over? Or is it going to go off the rails and tell your customers something you don't want them to hear? Are you going to be able to run it cost-effectively? That's a huge concern with these large models, and it's one of the reasons we have compute orchestration: to let you run on efficient compute. And then there's thinking about how you're going to handle day-two operations. Getting to production is one thing, but how are you going to maintain it when the state of the art is evolving at such a rapid rate right now, with all the innovations in AI? So there are lots of challenges going from that first idea all the way through production.

Interviewer: And just to jump off that really quick, how do companies future-proof their projects?

Matt: Yeah, we often see people building the tools to build AI rather than just focusing on customizing the AI for their needs. They're building infrastructure tools, they're building data-labeling tools, they're training models, and it becomes a whole team dedicated just to trying to stay up to speed with the state of the art rather than actually innovating with AI to differentiate the business. So we urge people to think about the long term: do you want to be building AI tools, or do you want to be differentiating from your competitors? That's one big aspect, deciding what you want to build and what you don't. The other is thinking long term about your compute and your data. Compute is crucial, and so is doing it in a cost-effective way. We see a trend of people purchasing more GPUs to bring that compute back on-premise so that cloud costs aren't, you know, ballooning, and I think that's a good strategy for the long term. Being hardware agnostic is also really, really important for the future; there are great new types of AI-specific processors that people should be paying attention to. And then on the data front, data is the lifeblood of AI. The first step is always getting your data organized and thinking about not just the quantity of data you have but the quality of it, because the better the data, the better the AI is ultimately going to be. So it's those big aspects: what you want to build, where you're going to run it, and the data that powers it.

Interviewer: Thank you so much.

Matt: Thank you.