SAM RANSBOTHAM: AI applications involve many different levels of risk. Learn how Stanley Black & Decker considers its AI risk portfolio across its business when we talk with the company's first chief technology officer, Mark Maybury.

Welcome to Me, Myself, and AI, a podcast on artificial intelligence in business. Each episode, we introduce you to someone innovating with AI. I'm Sam Ransbotham, professor of information systems at Boston College. I'm also the guest editor for the AI and Business Strategy Big Ideas program at MIT Sloan Management Review.

SHERVIN KHODABANDEH: And I'm Shervin Khodabandeh, senior partner with BCG, and I colead BCG's AI practice in North America. Together, MIT SMR and BCG have been researching AI for five years, interviewing hundreds of practitioners and surveying thousands of companies on what it takes to build and to deploy and scale AI capabilities across the organization and really transform the way organizations operate.

SAM RANSBOTHAM: Today we're talking with Mark Maybury, Stanley Black & Decker's first chief technology officer. Mark, thanks for joining us. Welcome.

MARK MAYBURY: Thank you very much for having me, Sam.

SAM RANSBOTHAM: Why don't we start with your current role. You're the first chief technology officer at Stanley Black & Decker. What does that mean?

MARK MAYBURY: Well, back in 2017, I was really delighted to be invited by our chief executive officer, Jim Loree, to really lead the extreme innovation enterprise across Stanley Black & Decker. So I get involved in everything from new ventures to accelerating new companies, to fostering innovation within our businesses, and just in general being the champion of extreme innovation across the company.

SAM RANSBOTHAM: You didn't start off as a CTO of Black & Decker. Tell us a bit about how you ended up there.

MARK MAYBURY: If you look at my history -- you know, "How did you get interested in AI?"
-- I mean, AI started literally when I was 13 years old. I vividly remember this; it's one of those poignant memories: [In] 1977, I saw Star Wars, and I remember walking out of that movie being inspired by the conversational robots -- R2-D2, C-3PO -- and the artificial intelligence between the human and the machine. And I didn't know it at the time, but I was fascinated by augmented intelligence and by ambient intelligence. They had these machines that were smart and these robots that were smart.

And then that transitioned into a love of actually understanding the human mind. In college, I studied with a number of neuropsychologists as a Fenwick scholar at Holy Cross, working also with some Boston University faculty, and we built a system to diagnose brain disorders in 1986. It's a long time ago, but that introduced me to Bayesian reasoning and so on.

And then, when I initiated my career, I was trained really globally: I studied in Venezuela as a high school student; as an undergraduate, I spent eight months in Italy learning Italian; and then I went to England, to Cambridge, and I learned English.

SAM RANSBOTHAM: The real English.

MARK MAYBURY: The real English. [Laughs.]

SAM RANSBOTHAM: C-3PO would be proud.

MARK MAYBURY: C-3PO, exactly! ... Indeed, my master's was in speech and language processing. Sorry, you can't make this up. I worked with Karen Spärck Jones, a professor there who was one of the great godmothers of computational linguistics.

But then I transitioned back to becoming an Air Force officer, and right away, I got interested in security: national security, computer security, AI security. I didn't know it at the time, but we were doing knowledge-based software development, and we'd think about "How do we make sure the software is secure?" Fast-forward to 20 years later.
I was asked to lead a federal laboratory, the National Cybersecurity Federally Funded Laboratory at Mitre, supporting NIST [the National Institute of Standards and Technology]. I had come up the ranks as an AI person applying AI to a whole bunch of domains, including computer security -- building insider threat detection modules, building penetration-testing automated agents, doing a lot of machine learning of malware -- working with some really great scientists at Mitre, in the federal government, and beyond, [in] agencies and in commercial companies.

And so that really transformed my mind. For example, I'll never forget working together with some scientists on the first ability to secure medicine pumps, which are the most frequently used device in a hospital. And so that's the kind of foundation of security thinking and risk management that comes through. I got to work with the great Donna Dodson at NIST and other great leaders. And so those really were foundational theoretical and practical underpinnings that shaped my thinking in security.

SAM RANSBOTHAM: But doesn't it drive you crazy, then, that so much of the world has this "build it and then secure it later" approach? I feel like that's pervasive in software in general, but certainly around artificial intelligence applications. It's always the features first and secure it later. Doesn't it drive you insane? How can we change that?

MARK MAYBURY: There are methods and good practices, best practices, for building resilience into systems, and it turns out that resilience can be achieved in a whole variety of ways. For example, we mentioned diversity. That's just one strategy. Another strategy is loose coupling.
The reason pagodas famously last for hundreds and hundreds of years in Japan is because they're built with structures like, for example, central structures that are really strong, but also hanging structures that loosely couple and that can absorb, for example, energy from the earth when you get earthquakes. So think about these design principles applied to a loosely coupled cyber system or a piece of software -- and even, of course, decoupling things so that you disaggregate capabilities, so that if a power system or a software system goes down locally, it doesn't affect everyone globally. Some of these principles need to be applied. They're systems security principles, but they can absolutely be applied in AI.

I mean, it's amazing how effective people can be when they're in an accident. They've got broken bones, they've got maybe damaged organs, and yet they're still alive. They're still functioning. How does that happen? And so nature's a good inspiration for us.

We can't forget, in the end, our company has a purpose: for those who make the world. And that means that we have to be empathetic and understanding of the environments in which these technologies are going to go and make sure that they're intuitive, they're transparent, they're learnable, they're adaptable to those various environments, so that we serve those makers of the world effectively.

SHERVIN KHODABANDEH: Mark, as you're describing innovation, I think your brand is very well recognized and a lot of our audience would know [it], but could you just maybe quickly cover -- what does Stanley Black & Decker do, and how have some of these innovations maybe changed the company for the better?

MARK MAYBURY: Well, it's a great question. One of the delights of coming to this company was learning what it does. So I knew Stanley Black & Decker, like many of your listeners will know, as a company that makes DeWalt tools -- hand tools or power tools or storage devices. Those are the things that you're very familiar with.
But it turns out that we also have a several-billion-dollar industrial business. We robotically insert fasteners into cars, and it turns out that 9 out of every 10 cars or light trucks on the road today are held together by Stanley fasteners. Similarly, I didn't know beforehand, but in 1930 we invented the electronic door -- the sliding door. So next time you walk into a Home Goods or Home Depot or a Lowe's, or even a hospital or a bank, if you look up and you look to the left, you'll notice -- there's a 1-in-2 chance there'll be a Stanley logo, because we manufacture one of every two electronic doors in North America.

And there are other examples, but those are innovations, whether it be protecting 2 million babies with real-time location services in our health care business or producing eco-friendly rivets that lightweight electric vehicles [use]. These are some examples of the kind of innovations that we're continuously developing. Because basically, every second, 10 Stanley tools are sold around the world -- every second. And so whether it's Black & Decker, whether it's DeWalt, whether it's Craftsman, these are household brands that we have the privilege to influence the inventive future of.

SHERVIN KHODABANDEH: You're really everywhere. And every time I sit in my car now, I'm going to remember that, like the strong force that keeps the nucleus together, you are keeping my car together. That's fantastic. Can you give us an example of extreme innovation versus nonextreme?

MARK MAYBURY: Sure. By extreme, we really mean innovation of everything, innovation everywhere, innovation by everyone. We actually, interestingly, within the company delineate between six different levels of innovation. We're just in the past six months becoming much more disciplined across the entire corporation, with a common rubric for how we characterize things. So it's a great question.
Levels one and two, those are incremental improvements, let's say to a product or a service. Once we get to level three, we're talking about something where we're beginning to make some significant change to a product. When we get to level four, we're talking about maybe three or more major new features. It's something that, really, you're going to significantly notice. When we talk about a level five, this is first of a kind, at least for us. It's something that we may have never experienced in our marketplace. Those we oftentimes call breakthrough innovations. And finally, level six -- radical innovations -- those are things that are world firsts.

And to give you a concrete example, we just introduced to the marketplace the first utilization of pouch battery technologies, a successor to the Flexvolt batteries, which is essentially an ability, using pouch technology, to double the power -- a [100%] increase in power in batteries for tools, two times the life cycle, and reductions in the weight and the size of those. So that's an example of an extreme innovation that's going to revolutionize power tools. That's called Powerstack.

Another example we brought forward in Black & Decker [is] Pria. Pria is the first conversational home health care companion. It's an example of using speech and language and conversational technology to support medicine distribution to those who want to age in place, for example, in the home, but also using AI to detect anomalies and alert caretakers to individuals. So those are examples that can be really transformative.

SHERVIN KHODABANDEH: Levels one through six implies there is a portfolio and that there is an intention about how to build and manage and evolve that portfolio. Can you comment a bit [on] how you think about that, and how much [goes into] level one versus level six, and what are some of the trade-offs that you consider?

MARK MAYBURY: That's an excellent question.
Basically, it is really market-driven, and it's even going to be further product- and segment-driven. If you're selling a software service, you're going to want to have that modified almost [in] real time. But certainly within months, you're going to want to be evolving that service so that incremental modification might occur. We have an ability to just upload a new version of our cyber-physical end effector, if you will -- whatever it happens to be.

But to answer your question, oftentimes companies will, over time, if they don't pay attention to their levels one through six -- so from incremental all the way up to radical -- they'll end up with a portfolio that drifts to incrementalism, that's only focused on minor modifications. Those are easy to do. You get an immediate benefit in the marketplace, but you don't get a long-term -- a medium- or long-term -- shift.

And so what we intentionally do is measure, in empirical fashion, how much growth and how much margin and how much consumer satisfaction am I getting out of those level ones all the way up to level sixes? Because any organization is going to naturally be resource constrained in terms of money, in terms of time, in terms of talent. What you need to do is, ideally, optimize. And if the marketplace is rewarding you for, let's say, having new products and services in class four, which have major improvements, but they penalize you for having radical improvements because they just can't ... it's this cognitive dissonance: "What do you mean, home health companion? I don't know what that is. I just want a better tongue depressor." And so in that case, you really need to appreciate what the marketplace is willing to adopt, and we have to think about, if you do have a radical innovation, how are you going to get it into the channel?
And one final thing I'll say, because your question's an excellent one about portfolio, is, we actually go one step further: Not only do we look at what the distribution of the classes is and what the response to those investments is over time, but for any particular individual investment, we actually put it into a portfolio that characterizes the technical risk and the business risk. We actually use technology readiness levels, which come out of NASA and the Air Force -- my previous life -- and are used now in the business community, and then we invented our own: Previously, when I was working for the federal government, we created commercial readiness levels, and I've imported those into Stanley Black & Decker. And now we actually have a portfolio for every single business and for the company as a whole, for the first time ever. And that's something that we're really delighted to finally bring to the company -- an ability to look at our investments as a portfolio -- because only then can we see, are we trying to achieve "unobtainium," because it's technically unachievable, or, equally bad, is there no market for this? You may invent something that's really great, and if the customer doesn't care for it, it's not going to be commercially viable. And so those are important dimensions to look at in portfolio analysis.
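To make that two-axis view concrete, here is a minimal sketch, in Python, of how an innovation portfolio might be screened on technology readiness and commercial readiness and flagged for the two failure modes Maybury names: "unobtainium" and no market. The class, the 1-to-9 scales, the thresholds, and the example entries are illustrative assumptions, not Stanley Black & Decker's actual rubric.

```python
from dataclasses import dataclass

# Illustrative sketch of a two-axis innovation portfolio: each investment is
# scored on a technology readiness level (TRL, the NASA/Air Force 1-9 scale)
# and a commercial readiness level (CRL, assumed here to also run 1-9).
# Thresholds and example entries are hypothetical.

@dataclass
class Investment:
    name: str
    trl: int  # technology readiness: 1 = basic principles, 9 = proven in operation
    crl: int  # commercial readiness: 1 = no identified market, 9 = scaled sales

def flag(inv: Investment, min_trl: int = 4, min_crl: int = 4) -> str:
    """Flag the two failure modes: technically unachievable ("unobtainium")
    or technically fine but with no market pull."""
    if inv.trl < min_trl and inv.crl < min_crl:
        return "high technical and business risk"
    if inv.trl < min_trl:
        return "unobtainium risk: technology not ready"
    if inv.crl < min_crl:
        return "market risk: no clear customer demand"
    return "balanced: ready to invest or scale"

portfolio = [
    Investment("incremental tool feature", trl=8, crl=8),  # level 1-2 style bet
    Investment("radical new category", trl=3, crl=2),      # level 6 style bet
]

for inv in portfolio:
    print(f"{inv.name}: TRL {inv.trl}, CRL {inv.crl} -> {flag(inv)}")
```
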
SHERVIN KHODABANDEH: I'm really happy that you covered risk, because that was going to be my follow-on question. Even that must be a spectrum of risk and a decision: How much risk is the right level of risk, and how much do you wait to know whether the market's really liking something or not? I'm not going to put words in your mouth, but I was just going to infer from that that you were saying that's a lever and a decision that you guys manage based on how the economics of the market and the company are, and when you want to be more risky versus less risky.

MARK MAYBURY: Absolutely. And there are many voices that get an opportunity to influence the market dynamics. If you think of the Five Forces model of Porter, classically you've got your competitors, your suppliers, your customers, and yourself, and all of these competitive forces are active. And so one of the things we try to do is measure, is listen. Our leadership model within our operating model at the company is listen, learn, and lead. That listening and learning part is really, really critical; if you're not listening to the right signals -- if you don't have a customer signal, you don't have a technological disruption signal, if you don't have an economic signal, a manufacturing and supply signal -- you need all those signals. And then, importantly, you need lessons learned; you need good practices.

Early in the idea generation side, are you using design thinking? Are you using diverse teams? Are you gathering insights in an effective way? And then, as you go through to generating opportunities, are you beginning to do competitive analysis, like I just talked about? As you begin to look into these specific business cases, are you trying things out with concept cars or proofs of concept and then getting to ... "Maybe we don't have the solution. Maybe we ought to have some open innovation outside the company." And then, ultimately, in our commercial execution, do they have the right sales teams, the right channels, the right partnerships to go to scale?

And so the challenge is, we can oftentimes -- whether it be manufacturing or products -- we can get into pilot purgatory. We can create something that looks really, really exciting and promising to the marketplace, but it's unmanufacturable, or it's unsustainable, or it's uninteresting or uneconomical. And that's really not good.
You really have to have a holistic intent in mind throughout the process, and then, importantly, a discipline to test and to measure and to fail fast and eventually be ready to scale quickly when something does actually hit, if you will, the sweet spot in the market.

SAM RANSBOTHAM: So there's lots of different things in these levels. Can you tie them to artificial intelligence? Like, is artificial intelligence more risky in market risk? Is it more risky in technical risk? How is that affecting each of your different levels, and what's the intersection of that matrix with artificial intelligence?

MARK MAYBURY: Great question. Our AI really applies across the entire company. We have robotic process automation [RPA], which is off the shelf, low risk, provable, and we automate IT, elements in finance, elements in HR. We have actually almost 160 digital employees today that just do automated processes, and it makes our own [people more effective]. We call it not only AI but sometimes augmented intelligence, not artificial intelligence: How do we augment the human to get them more effective? So to your question, what's risky --

SAM RANSBOTHAM: That seems [like] less risk.

MARK MAYBURY: That's very low risk. RPAs are very, very low risk. However, if I'm going to introduce Pria into the marketplace, or Insight, which is a capability in our Stanley industrial business for IoT measurement and predictive analytics -- for shears [and] door extensions up to very large-scale excavation equipment, and so on -- in that case, there could be a very high risk, because there might be user adoption risk, there's sensor relevance risk, there's making sure your predictions are going to work well. There could be a safety risk as well as an economic risk.
So you want to be really, really careful to make sure that those technologies in those higher-risk areas will work really, really well, because they might be life critical if you're giving advice to a patient or you're giving guidance to an operator of a very big piece of machinery. And so we really have AI across our whole business, including, by the way, in our factories on automation. One of the ways we mitigate risk there is we partner. We work with others so that they actually have de-risked a lot of the technologies. So you'll see mobile robots from third parties, you'll see collaborative robots from parties that we're customizing and putting to scale in our factories and de-risking them. So on that matrix, they're much more distributed across the spectrum of risk.

SAM RANSBOTHAM: One of the things that Shervin and I have talked about a few times is this idea of how artificial intelligence maybe even steers people toward these incremental improvements. Maybe it's [that] the ability for these algorithms to improve an existing process may somehow steer people toward the level one versus the level six. Are you seeing that? Are you able to apply artificial intelligence to these level five, level six types of projects?

MARK MAYBURY: We absolutely have AI across the spectrum. When it comes to AI, the stuff lower on the technical-commercial list tends to be commercially proven. So it tends to have multiple use cases: Others have deployed the technology; it's been battle hardened. But the reality is, there's a whole series of risks. And we actually have just recently published our responsible AI set of policies at the company and made them publicly available. So any other diversified industrial or tech company, or other consulting [or] small-to-medium enterprise, can take a look at what we do. And I'll give you a very simple example, and it gets a bit to your point of "Well, will they gravitate to the easier problems?" Well, not necessarily.
One of the areas of risk is making sure that your AI sensors or classifiers are in fact not biased and/or that they're resilient. And one of the ways you make sure they're resilient and unbiased is you make sure that you have much more diversified data. That means if you have more users or more situations that are using your AI systems and there's active learning going on -- perhaps reinforcement learning while that machine's operating, most likely human supervised, because you want to make sure that you're not releasing anything that could adversely affect an operator or end user -- actually, the more data you get, the better and more effective the system becomes: The more risk you can reduce, and the higher performance you can get. So it's a bit counterintuitive.

So you can actually become a bit more innovative in some sense, or just smarter in the AI case, because you have more exposure -- in the same way that people who go through high school to university to graduate school, because their challenge is increased along those levels, their capacity to learn, to communicate, becomes much more effective as they go through that training. Same thing with a machine: You can give it easier examples, so they might be more incremental, simple challenges to that system, and as the examples get more difficult -- so I go from the consumer, to the prosumer, to the pro -- the intelligence of that system grows, because the pro knows a lot more. She's been out working, constructing, for 20 years, or building things in a factory for a long time, and knows what kinds of learning that machine can leverage and can expose that machine to more sophisticated learning.

For example, for predictive analytics, if I want to predict an outage, if I've only seen one kind of outage, I will only be able to deal with that outage. If I've seen 30 different kinds of outages, I'm much better and much more resilient, because I know both what I know, but equally important -- perhaps more important -- I know what I don't know. And if I see something for the first time, and I've seen 30 different things, and this is a brand-new one, I can say, "This doesn't fit with anything I've seen. I'm ignorant. Hold up -- let's call a human. Tell them it's an anomaly. Let's get the machine to retrain."
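The "know what I don't know" behavior Maybury describes maps onto a classifier with a reject option: predictions far from anything seen in training are escalated to a human and queued for retraining rather than guessed. Below is a minimal sketch under that assumption; the nearest-centroid model, the distance threshold, and the synthetic outage data are hypothetical illustrations, not a description of the Insight product.

```python
import numpy as np

# Illustrative sketch: a nearest-centroid outage classifier with a reject
# option. If a new reading is too far from every outage type seen in
# training, it is treated as "I don't know": escalate to a human operator
# and queue the example for retraining. Threshold and data are hypothetical.

class OutageClassifierWithReject:
    def __init__(self, reject_distance: float = 3.0):
        self.reject_distance = reject_distance
        self.centroids: dict[str, np.ndarray] = {}
        self.retrain_queue: list[np.ndarray] = []

    def fit(self, samples: dict[str, np.ndarray]) -> None:
        # One centroid per known outage type (rows are sensor feature vectors).
        self.centroids = {label: x.mean(axis=0) for label, x in samples.items()}

    def predict(self, x: np.ndarray) -> str:
        distances = {label: float(np.linalg.norm(x - c))
                     for label, c in self.centroids.items()}
        label, dist = min(distances.items(), key=lambda kv: kv[1])
        if dist > self.reject_distance:
            # Novel pattern: don't guess -- flag it and hold it for retraining.
            self.retrain_queue.append(x)
            return "anomaly: escalate to human operator"
        return label

# Hypothetical usage with two known outage types.
rng = np.random.default_rng(0)
model = OutageClassifierWithReject()
model.fit({
    "overheating": rng.normal(loc=0.0, scale=0.5, size=(50, 4)),
    "voltage drop": rng.normal(loc=5.0, scale=0.5, size=(50, 4)),
})
print(model.predict(np.array([0.1, -0.2, 0.3, 0.0])))     # near "overheating"
print(model.predict(np.array([20.0, 20.0, 20.0, 20.0])))  # far from both: reject
```
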
525 00:21:28.910 --> 00:21:31.200 And if I see something for the first time, 526 00:21:31.200 --> 00:21:32.930 and I've seen 30 different things, 527 00:21:32.930 --> 00:21:34.980 and this is a brand-new one, I can say, 528 00:21:34.980 --> 00:21:37.010 "This doesn't fit with anything I've seen. 529 00:21:37.010 --> 00:21:38.160 I'm ignorant. 530 00:21:38.160 --> 00:21:40.190 Hold up -- let's call a human. 531 00:21:40.190 --> 00:21:41.640 Tell them it's an anomaly. 532 00:21:41.640 --> 00:21:43.120 Let's get the machine to retrain." 533 00:21:43.120 --> 00:21:45.870 SAM RANSBOTHAM: Where did these responsible principles 534 00:21:45.870 --> 00:21:46.370 come from? 535 00:21:46.370 --> 00:21:48.162 Is that something you developed internally, 536 00:21:48.162 --> 00:21:50.506 or is that something you've adopted from somewhere else? 537 00:21:50.506 --> 00:21:50.855 538 00:21:50.855 --> 00:21:52.480 MARK MAYBURY: So first, the motivation. 539 00:21:52.480 --> 00:21:54.410 Why do we care about responsible AI? 540 00:21:54.410 --> 00:21:57.160 It starts from some of my 31 years 541 00:21:57.160 --> 00:21:59.680 working in the public sector, understanding 542 00:21:59.680 --> 00:22:02.620 some of the risks of AI, having been on the government 543 00:22:02.620 --> 00:22:06.440 side funding a lot of startups, a lot of large companies 544 00:22:06.440 --> 00:22:10.300 and small companies, building defense applications 545 00:22:10.300 --> 00:22:14.070 and/or health care applications, national applications for AI. 546 00:22:14.070 --> 00:22:17.362 We recognized the fact that there are lots of failures. 547 00:22:17.362 --> 00:22:18.820 The way I think about the failures, 548 00:22:18.820 --> 00:22:22.130 which motivate responsible AI, is they can be your failure 549 00:22:22.130 --> 00:22:25.720 to if you think of the OODA loop -- observe, orient, decide, 550 00:22:25.720 --> 00:22:26.450 and act. 551 00:22:26.450 --> 00:22:29.990 Observation: You can have bad day data, failed perception, 552 00:22:29.990 --> 00:22:32.050 bias, like I was suggesting. 553 00:22:32.050 --> 00:22:34.780 And so machines literally can be convinced 554 00:22:34.780 --> 00:22:38.050 that they're seeing a yield sign when they see a stop sign. 555 00:22:38.050 --> 00:22:39.720 So there actually have been studies done 556 00:22:39.720 --> 00:22:41.110 that have demonstrated this. 557 00:22:41.110 --> 00:22:43.370 SAM RANSBOTHAM: Right, the adversarial. 558 00:22:43.370 --> 00:22:45.090 MARK MAYBURY: Exactly: adversarial AI. 559 00:22:45.090 --> 00:22:47.420 You can also confuse an AI by biasing a selection, 560 00:22:47.420 --> 00:22:49.810 by mislabeling or misattributing things 561 00:22:49.810 --> 00:22:51.620 so they can be oriented in the wrong way. 562 00:22:51.620 --> 00:22:53.960 See, the classifications I talked about before: 563 00:22:53.960 --> 00:22:57.570 You could force them to see a different thing or misclassify. 564 00:22:57.570 --> 00:23:00.590 Similarly, they can decide poorly. 565 00:23:00.590 --> 00:23:02.560 They could have misinformation; there could 566 00:23:02.560 --> 00:23:04.680 be false cues or confusion. 567 00:23:04.680 --> 00:23:07.400 And we've seen this actually in the flash crash, where 568 00:23:07.400 --> 00:23:09.870 AIs were trained to actually do trading 569 00:23:09.870 --> 00:23:11.820 and then their model didn't actually 570 00:23:11.820 --> 00:23:13.860 recognize when things were going bad, 571 00:23:13.860 --> 00:23:15.560 and poor decisions were made. 
And then finally, there can be physical-world actions. We've had a couple of automated vehicles fail because of either failed human oversight of the AI, over-trusting the AI, or under-trusting the AI, and then poor decisions happen. And so that's the motivation.

And then we studied work in Europe and Singapore and the World Economic Forum. In the U.S., there's a whole bunch of work in AI principles, in algorithmic accountability, and White House guidance on regulation of AI. We've been connected into all of these things, as well as connected to the Microsofts and the IBMs and the Googles of the world in terms of what they're doing in terms of responsible AI.

And we, as a diversified industrial, said, "We have these very complicated domain applications in manufacturing, in aviation, in transportation and tools, in health care products or just home products, and so how do we make sure that when we are building AI into those systems, we're doing it in a responsible fashion?" So that means making sure that we're transparent in what the AI knows or doesn't know; making sure that we protect the privacy of the information we're collecting about the environment, perhaps, or the people; making sure that we're equitable in our decisions and unbiased; making sure that the systems are more resilient, that they're more symbiotic, so we get at that augmented intelligence piece we talked about before. All of these are motivations for why, because we're a company that really firmly believes in corporate social responsibility, and in order to achieve that, we have to actually build it into the products that we're producing and the methods and the approaches we're taking, which means making sure that we're stress-testing those, that we're designing them appropriately. So that's the motivation for responsible AI.

SAM RANSBOTHAM: What are you excited about at Stanley Black & Decker? What's new? You mentioned projects you've worked on in the past. Anything exciting you can share that's on the horizon?
MARK MAYBURY: I can't go into great detail, but what I can say right now for your listeners is, we have some extreme innovation going on in the ESG [environmental, social, and governance] area, specifically when we're talking about net zero. We've made these statements public: Our factories will be carbon neutral by 2030. We have 120 factories and distribution centers around the world. ... No government has told us to do that. That's self-imposed. And by the way, if you think, "Oh, that's a future thing; they'll never do it," we're already ahead of target to get to 2030.

But we're also, by 2025 -- pulling in a little bit closer -- going to be plastic-free in our packaging. So we're getting rid of those blister packs that we all have gotten so accustomed to. Why? Because we want to get rid of microplastics in our water, in our oceans, and we feel that it's our responsibility to take the initiative. No government has asked us to do this. It's just that we think it's the right thing to do.

We're very, very actively learning right now about how we get materials that are carbon-free, how we operate our plants and design products that will be carbon-free, how we distribute things in a carbon-neutral way. This requires a complete rethinking, and it requires a lot of AI, actually, because you've got to think about smart design: Which components can I make to be reusable? Which can be recyclable? Which have to be compostable? The thing here is really to think outside the box.

We're a 179-year-old company, so we've been around for a while. And, as an officer of the company, my responsibility is as a steward, really, to make sure that we progress along the same values as Frederick Stanley -- a social entrepreneur, the first mayor of New Britain [in Connecticut], [who] turned his factories to building toys for children when there were no toys during the war -- I mean, just a very community-minded individual.
And that legacy, that purpose, continues on in what we do. And so, yes, we want high-power tools, and, yes, we want lightweight cars, and we want all those innovations, but we want them in a sustainable way.

SAM RANSBOTHAM: Thank you. I think that many of your points -- for example, the different levels at which you think about innovation -- will resonate with listeners. It's been a great conversation. Thanks for joining us.

SHERVIN KHODABANDEH: Mark, thanks. This has been really a great conversation.

MARK MAYBURY: Thank you very much.

SAM RANSBOTHAM: We hope you enjoyed today's episode. Next time, Shervin and I talk with Sanjay Nichani, vice president of artificial intelligence and computer vision at Peloton Interactive. Please join us.

ALLISON RYDER: Thanks for listening to Me, Myself, and AI. We believe, like you, that the conversation about AI implementation doesn't start and stop with this podcast. That's why we've created a group on LinkedIn specifically for leaders like you. It's called AI for Leaders, and if you join us, you can chat with show creators and hosts, ask your own questions, share insights, and gain access to valuable resources about AI implementation from MIT SMR and BCG. You can access it by visiting mitsmr.com/AIforLeaders. We'll put that link in the show notes, and we hope to see you there.