Podcast - Robots, Rights and New Tech: Balancing Innovation and Data Privacy
In the first episode of the "Two Byte Conversations" podcast series, Data Strategy, Security & Privacy attorney Kevin Angle discusses the evolution of robots and related data privacy considerations with his brother Colin Angle, co-founder and former CEO of iRobot Corporation. They explore the implications of increased autonomy and intelligence as robots continue to advance and examine the impact of generative artificial intelligence (AI) and large data sets on the future of robotics innovation.
Listen and subscribe on Amazon.
Listen and subscribe on Apple Podcasts.
Listen and subscribe on SoundCloud.
Listen and subscribe on Spotify.
Watch and subscribe on YouTube.
Kevin Angle: Today we're going to talk about robots. They range from Mars rovers to useful tools that clean our homes and torment our cats. Or maybe I should be referring to them as our future robot overlords. What are robots, really? And how does the rise of data availability impact their development? I'm Kevin Angle, senior counsel in the Data Strategy, Security & Privacy practice here at Holland & Knight. And my guest today is really the person who introduced me to technology and innovation. Most kids don't have pictures of robots hanging outside their bedroom. I did. The robot was named Attila after the Hunnic warlord, and it was because of my guest, Colin Angle, who led the team that built it. He's the co-founder of iRobot Corporation and was its longtime CEO. He's a leading figure in robotics, and he just so happens to be my brother. Welcome, Colin.
Colin Angle: Hello.
Kevin Angle: Hello. Thank you so much for joining the podcast. So I want to start with what might seem like a simple question but may actually be more complicated than it appears, and that is the foundational question: What is a robot?
Colin Angle: Well, sure. I mean, "robot" is a word, and it probably has many different definitions depending on where you've come from. For me, I started off defining "robot" as a machine that perceives its environment, thinks about what it has perceived and then physically acts based on that data. That served me really well for quite some time. But with computational assets and sensors becoming cheaper and cheaper and cheaper, more and more things could actually fit that rather mechanical definition. Are garage doors robots? You know, things that don't necessarily feel like robots are suddenly pretty clearly fitting into that definition. So I had to go and rethink, and probably my best definition of what a robot is today is, well, it's a machine that you feel compelled to name. Because I think that part of the essence of being a robot is a thing-ness deserving of a name, and, again, that is imperfect as well. Some people name their cars, but that's what I've got for you.
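To make that sense-think-act definition concrete, here is a minimal Python sketch of the loop Colin describes. Every sensor and motor name is a hypothetical stand-in, not anything from a real product:

```python
import random
import time

# Hypothetical hardware stubs for illustration; a real robot would read
# actual sensors and drive actual motors.
def read_bump_sensor() -> bool:
    return random.random() < 0.10

def read_cliff_sensor() -> bool:
    return random.random() < 0.02

def send_motor_command(action: str) -> None:
    print(f"motor: {action}")

def sense() -> dict:
    # Perceive the environment.
    return {"bumped": read_bump_sensor(), "cliff": read_cliff_sensor()}

def think(perception: dict) -> str:
    # Decide what to do based on what was perceived.
    if perception["cliff"]:
        return "back_up"
    if perception["bumped"]:
        return "turn_away"
    return "drive_forward"

def act(action: str) -> None:
    # Physically act on the decision.
    send_motor_command(action)

if __name__ == "__main__":
    for _ in range(20):  # a few iterations of the loop for demonstration
        act(think(sense()))
        time.sleep(0.05)  # roughly a 20 Hz control loop
```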
Kevin Angle: Part of the reason I was asking this question is that people are calling stuff robots now that, you know, when you were building stuff in the 1990s, I wouldn't have thought of as a robot. And that's like your chatbot or ChatGPT, or, I was looking at another one of these sorts of algorithms where the chatbot had a particular name and you were interacting with a named chatbot. I mean, is that?
Colin Angle: So there's a whole — you know, the word "robot" definitely took on a sort of orthogonal meaning in the land of autonomous virtual agents of some kind. The need for physicality was stripped out as people wanted to describe some kind of chunk of code that could operate autonomously, and, you know, I just feel that's wrong. I'm not going to change it. Because in my world, the physicality part of a robot is tied to its essence.
Kevin Angle: I think for our purposes today, we'll talk about physical robots rather than your chatbots, etc. An issue that I know you've thought a lot about is practical robots versus what you might call the robots of science fiction. That picture I mentioned at the beginning, the one I had hanging outside my bedroom, was Attila, and it looks like a bug. Why was that?
Colin Angle: You know, Attila was more on the side of research robots, so I wouldn't necessarily give it the moniker "practical," but it did exist in the real world. And it existed because, at the time, there were very few physical machines that could perceive their environment, think on what they perceived and move through the world. My academic journey took me through an amazing place called the Artificial Insect Lab at MIT. We called ourselves the "AI Lab," where "AI" stood for "artificial insects." The idea was that we embraced insects because, well, they obviously don't have giant supercomputer brains, and yet they're wildly successful. So if we were trying to create robots that could actually succeed in our world, why not start with something that doesn't require a belief in human cognition delivered on a silicon chip? That line of thinking brought us to building a particular class of robots, and we made my first walking robot, the predecessor to the Attila robot you had a picture of, which was able to walk in a very intelligent way across very complicated terrain using a grand total of 256 bytes, not kilobytes, but bytes, of RAM and an eight-bit microprocessor, based on a very different approach to thinking about what intelligence was. And that led to the field of robotics suddenly becoming a lot more interesting from the perspective of solving real-world problems. And, you know, Attila was a bit of an unfortunate successor, where I tried to see how far I could push this concept and, you know, created a wildly overcomplicated but really cool robot that was great in photo shoots and much less great at solving core human problems.
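The "very different approach" Colin alludes to is the behavior-based, subsumption-style control associated with the MIT lab he describes, in which a stack of small prioritized behaviors replaces a central world model. Here is a hedged Python sketch of that layering; the behaviors, state flags and commands are all invented for illustration:

```python
import random

def climb(state: dict):
    # Highest-priority layer: fires only when a leg hits an obstacle,
    # suppressing ("subsuming") the layers below it.
    if state["leg_blocked"]:
        return "lift_leg_higher"
    return None  # defer to lower layers

def walk(state: dict):
    # Mid-priority layer: advance the gait, but only when balanced.
    if state["balanced"]:
        return "step_forward"
    return None

def stand(state: dict):
    # Lowest-priority layer: default to holding a stable stance.
    return "hold_stance"

# Layers ordered highest priority first; the first layer that returns a
# command wins. Note there is no map and almost no stored state, which
# is how a machine with only a few hundred bytes of RAM can cross terrain.
LAYERS = [climb, walk, stand]

def control_step(state: dict) -> str:
    for behavior in LAYERS:
        command = behavior(state)
        if command is not None:
            return command

if __name__ == "__main__":
    for _ in range(5):
        state = {"leg_blocked": random.random() < 0.3,
                 "balanced": random.random() < 0.8}
        print(control_step(state))
```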
Kevin Angle: It was cool in show-and-tell too.
Colin Angle: And in show-and-tell, yeah, I mean, no doubt it was high on the cool-factor scale. But, you know, as we think about practical robots versus science fiction robots, it gets at whether you're solving a problem where the value of the robot that you're creating is significantly higher than the cost of the parts that go into building it. And if you can do that, well, then you can build a robot that can be replicated and become something that has real impact and scale.
Kevin Angle: I'm speaking for you, but that seems to have been along the same lines as the approach you took at iRobot, where you built simpler devices that actually could accomplish things in the real world.
Colin Angle: Well, I mean, we were an entrepreneurial startup business, which was basically completely unfundable for its first eight years of existence. And in that crucible of economic survival, the question of "OK, how do we do this?" led to the development of a very pragmatic approach, as opposed to an academic or a fantastical approach, to building robots. And we did everything from robots that would go into oil wells and stimulate the production of hydrocarbons to robot toys, where we could put a little bit of intelligence and actuation into a toy and create a more engaging baby doll. Certainly Roomba ultimately was the thing that emerged from that crucible with a very, very high value-to-cost ratio, and it led to the growth of iRobot from, you know, something that felt a little bit like a science experiment into a very exciting and viable business.
Kevin Angle: Roomba, for those listeners who don't know, is the circular robotic vacuum cleaner that was very good at tormenting cats, as I mentioned, as well as cleaning your room.
Colin Angle: Yes.
Kevin Angle: So Colin, I have a real big-picture question that I think is fun and ultimately quite pertinent in the long term. Should robots have rights? Are we there yet? Will we ever be there?
Colin Angle: OK, well, let's just say, for the record: today, no. But we certainly have seen Hollywood portrayals of robots, Number Five from "Short Circuit," or whole dramas where the evil, bad thing was the robot facing being turned off.
Kevin Angle: You're dating yourself with "Short Circuit," by the way. I have to point that out.
Colin Angle: Commander Data, now that I'm dating myself again. But how about this? The YouTube video of Boston Dynamics' Atlas robot being pushed around with a broom, evoking cries of robot abuse because people didn't like to see someone trying to knock over this robot with a broom. As soon as you get robots that feel alive to people, suddenly your question becomes more viscerally real. As a person who builds robots, where does that put me? You know, imagine building a robot, and suddenly the world decides it is now immoral for me to turn off the robot. So what does that make me if I'm the creator of such a thing? And if you don't like that answer, maybe you need to go back and say it's OK to turn off the robot.
Kevin Angle: Yeah. I mean, you made this point earlier about "What is a robot?" It's something you name, right? We're personifying. We're creating these robot people, and we're imbuing them with human characteristics in ways that could be challenging in the future.
Colin Angle: I mean, teddy bears get thrown out.
Kevin Angle: One thing that has changed, certainly in the past few years, has been the availability of large data sets and, in particular, large data storage and cloud computing. How does that change, if it does, the paradigm for creating robots?
Colin Angle: Great question, and not a simple answer. I think that there's a step change in what we can do with data storage and cloud computing that is happening right now. That would be generative AI and the opportunities that exist to embed that type of approach to intelligence into machines. But before we get there, I think there's value in looking at a little bit of the journey. When Roomba was first launched, it didn't know where it was, it didn't really understand what it was doing, and it relied on heuristics to move around your home and clean where it could get to. And that worked pretty well, but it was a long way away from what people really wanted Roomba to do. As Roomba was able to learn more about its environment and then start understanding what rooms were and what objects were, the robot could be better directed: clean under the dining room table, clean the kitchen, clean the bathroom. It could do those different missions differently, and ultimately do them similarly to how you would want them done. The irony was that solving the general problem of "clean my house" is actually the easiest thing to do technically, yet to a consumer it probably requires the largest amount of trust, whereas the first step in creating a trusted relationship between a consumer and the robot is to start small and grow up. So say you get your Roomba and you say, "Hey, clean under the kitchen table. I just ate," and it does that well; then maybe you let it clean the rest of the kitchen, and maybe you then let it clean more of your home. That would be a more logical progression to build trust in the capability of the robot. But technically, it's a much harder problem than where we started. So over the last few years, we started to have robots that can do just that: robots that understand enough about the home, enough about how you refer to your home and what you want to have happen, to logically travel a trust-building journey and have a good relationship with the owner of the robot and the home. And that's really cool. Now, along the way, suddenly the robot knows a lot about your home, and what it knows could be used to do other interesting things in your home, to make your home ultimately sufficiently aware of its state that it could have its own mission of trying to take care of the occupants who live inside it. And that creates a larger and larger opportunity for what the role of robots ultimately is.
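As an illustration of the progression Colin describes, here is a minimal Python sketch in which a robot with named regions can run a small, targeted mission instead of whole-home heuristic coverage. The region names, coordinates and map layout are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Region:
    # A named, axis-aligned bounding box in map coordinates (meters).
    name: str
    x_min: float
    y_min: float
    x_max: float
    y_max: float

# A toy labeled map of the kind a mapping robot might build.
HOME_MAP = {
    "kitchen": Region("kitchen", 0.0, 0.0, 4.0, 3.0),
    "kitchen_table": Region("kitchen_table", 1.0, 1.0, 2.5, 2.0),
    "bathroom": Region("bathroom", 4.0, 0.0, 6.0, 2.0),
}

def plan_mission(target: str | None) -> list[Region]:
    # The first Roomba's only option was "cover everything"; with a
    # labeled map, a request can resolve to one bounded region.
    if target is None or target not in HOME_MAP:
        return list(HOME_MAP.values())  # fall back to whole-home coverage
    return [HOME_MAP[target]]

# Trust-building progression: start small, then widen the mission.
print([r.name for r in plan_mission("kitchen_table")])  # just the table
print([r.name for r in plan_mission("kitchen")])        # the whole room
print([r.name for r in plan_mission(None)])             # the whole home
```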
Kevin Angle: So that's really interesting because, as you know, I'm ultimately a privacy attorney, right? So I think a lot about the, you know, privacy of people's homes and data sharing. And part of the tension I want to maybe tease out a little bit is: that's really cool, right? And to the extent the home can be confined to itself and learning about its owner, or the robot learns about its owner, you know, you can contain that. But does that information need to be communicated to third parties to become functional?
Colin Angle: You know, to the extent that I just described, the answer would be no. And, you know, just to dive down into the privacy side, which was a very interesting part of our business, I think it's worth telling at least a small anecdote. iRobot really led with privacy. We adopted GDPR early, and we applied its principles globally even though we didn't have to. My motto was that the only information we store off-board the robot would be, first off, encrypted (transmitted encrypted and stored encrypted, with rights to delete) but also fundamentally uninteresting. Meaning that, to the extent we stored a map of your home, we weren't storing images. If you went and defeated all of the appropriate security measures, you would discover that I had a rectangular-shaped region called "kitchen," and inside that there was a rectangular-shaped region called "table." By only storing the information that the robot needed in order to do its job, we could stay out of trouble and make sure that we appropriately deserved the trust of the people we were serving. Now, at the same time, some of our competitors were doing much more than that. They were, you know, streaming video in the clear up to the cloud, had no architectural approach to either privacy or security and were publicly called out in reputable media channels for their lack of safeguards. And ultimately the customer didn't care. So, you know, this whole idea of "one day privacy will become important, just not today" was definitely the world we lived in, and probably still live in. And it's unclear when that's going to change. People get very, very upset for a very short amount of time when it comes to privacy issues. And, you know, I still defend and believe we were approaching privacy the right way, but there's absolutely a cost to privacy, and in a competitive world, doing the right thing does put you at a disadvantage, and the consumer doesn't seem to care.
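A hedged illustration of the data-minimization posture Colin describes: store only labeled rectangles, never images, and encrypt the record before it leaves the robot. The record layout here is invented; the encryption uses the real Python `cryptography` package (`pip install cryptography`):

```python
import json
from cryptography.fernet import Fernet

# The only thing stored off-board in this sketch: named rectangles.
# Even a full breach of the ciphertext reveals nothing visual.
map_record = {
    "regions": [
        {"label": "kitchen", "rect": [0.0, 0.0, 4.0, 3.0]},
        {"label": "table", "rect": [1.0, 1.0, 2.5, 2.0], "parent": "kitchen"},
    ]
}

key = Fernet.generate_key()  # in practice, a per-device key in secure storage
cipher = Fernet(key)

# Encrypted in transit and at rest: only this ciphertext is ever uploaded.
ciphertext = cipher.encrypt(json.dumps(map_record).encode("utf-8"))

# A "right to delete" then amounts to discarding the ciphertext or the key.
restored = json.loads(cipher.decrypt(ciphertext))
print(restored["regions"][1]["label"])  # -> table
```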
Kevin Angle: I wanted to follow up on one other thing you said, which was, you know, about the robot recognizing the dining table or the kitchen table, and the human being able to say, you know, "Hello, robot, clean under my kitchen table." One thing I've been paying attention to recently — you know, obviously, the next big thing that all lawyers are talking about these days is generative AI and large language models and visual language models and all those great things. And one of the cool things I saw recently was people connecting robots to LLMs and to VLMs and being able to say, potentially, "Clean under my table," and have the robot connected to enough, you know, data and algorithms and so forth that it can understand the words, understand the environment that it's looking at and do that. Do you think this is the next big thing for robots? Is this interesting?
Colin Angle: Oh, absolutely. You know, in my prior commentary I sort of stopped prior to generative AI, and now I'll jump over that line. I think there are two big areas where generative AI, LLMs, etc. are going to have impact. The first is in user interface. A very small percentage of the missions that Roomba would run were triggered by voice. That is because, traditionally, you know, using smart speakers, you have to get the syntax correct for the robot to know what to do. And Roomba probably has the largest skill ever developed for smart speakers, as we tried to imagine all of the different ways someone could say, "Clean under the kitchen table." The challenge was that even with a single-digit failure rate, you would frustrate people and they wouldn't use it. Now, with large language models, suddenly there's at least the promise of that failure rate going to something close to zero. People want to talk to the robot, and so the ability to use voice to control robots, or more generally smart devices, is suddenly, I think, on the cusp of fundamentally changing. There are certain things where the phone makes sense, but most of what we would like to do in interacting with our home is suddenly doable with voice, and that's different. And that means that the home, or robots in the home, can be more easily tasked to do more interesting things. I think that's a game changer for at least consumer robots. And the second is, well beyond the interface, the promise of AI models to allow the control and intelligence of the robot to take on new dimensions, which also ushers in the promise of a new category of robots that are much more aware of their surroundings, able to do more sophisticated actions and able to have much more sophisticated interactions with people. And full disclosure: I am starting a new business at the intersection of consumer robotics and generative AI. So I'm all in, believing that the new opportunities that exist here are exciting and will create new industry.
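To sketch the interface shift Colin is pointing at, here is a hypothetical Python example in which a free-form utterance is handed to a language model and mapped to a structured robot command, rather than matched against fixed smart-speaker syntax. The prompt, command schema and `call_llm` function are all invented stand-ins for whatever model API you use:

```python
import json

PROMPT_TEMPLATE = """Map the user's request to a JSON robot command.
Known regions: kitchen, kitchen_table, bathroom.
Respond with only JSON: {{"action": "clean", "region": "<region>"}}
User: {utterance}"""

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would call a hosted or local model here.
    # We return a canned, plausible response so the sketch runs as-is.
    return '{"action": "clean", "region": "kitchen_table"}'

def parse_command(utterance: str) -> dict:
    raw = call_llm(PROMPT_TEMPLATE.format(utterance=utterance))
    command = json.loads(raw)
    # Validate before acting: never hand unchecked model output to motors.
    if command.get("region") not in {"kitchen", "kitchen_table", "bathroom"}:
        raise ValueError(f"unknown region: {command!r}")
    return command

# The same intent, phrased freely, with no fixed syntax required.
print(parse_command("hey, I just ate, get the crumbs under the table"))
```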
Kevin Angle: So this might not be the excitement that you're imagining, but here's one of the things that makes me excited as a lawyer: when we're talking about algorithms, we fundamentally have the problem of understanding what the algorithm is really doing, right? You're basically throwing lots of data at an algorithm, and there's a lot of difficulty in really understanding how that all works. One of the cool things, as a lawyer, is that if you really are using, you know, large language models, there's the potential that you could ask the robot, "Why did you do this?" and the robot can tell you. The confidence in the accuracy of the answer, I'm not sure how high that is, but it is at least interesting that the robot has the potential to communicate back to you and explain some of its reasons in words.
Colin Angle: Sure. You know, I think that the rate of change and development in the raw intelligence of these models is unbelievable, where weeks matter as far as capability development goes, and, you know, trying to figure out "What are the risks?" and "What should we be doing?" and "How do we take advantage?" and "What should we be worried about?" are all very real questions. And what you're saying, you know, "Why did you make that decision?" is one of the things that we're going to want to try to understand as we try to figure out how this new industry is going to develop. You know, are there watermarks that should somehow be embedded, so that a decision made by an AI is known to have been made by a particular AI, for the sake of decision quality? It's a very interesting world, and if there was any malicious intent, simply asking the AI, "Why did you do that?" is not going to be the thing that gets you the answer you're looking for. But, yeah, it is at least a step in a good direction.
Kevin Angle: Yeah, it at least could potentially make it more understandable.
Colin Angle: Again, I think about these AIs using the metaphor, even if you don't like it, that the AI is a person. If you ask a person, "Why did you make that decision?" the type of answer you get from the person is in the same ballpark as the quality of answer you might get from one of these machines. And you're not going to like that metaphor, because I think people are lousy at trying to explain why they did things.
Kevin Angle: And this is why you get lawyers involved here, Colin, because they can properly cross-examine the AI to get the truth.
Colin Angle: And you know, there may be more to what you just said than you realize.
Kevin Angle: Well, let me ask you one last question. And this is for the lawyers who are listening to this. How can lawyers foster innovation? And for the record, I am not accepting the answer of "fewer laws and regulations" unless you feel very strongly about it, because that's, you know, possibly too simple and quite probably even wrong. But, you know, from your perspective, what can lawyers do to foster innovation?
Colin Angle: So it starts with what role you think lawyers should have in fostering innovation. On the one hand, I brought counsel, you know, in-house to iRobot to build an IP portfolio affordably, such that iRobot would have a right to practice. And I think that another area where lawyers can foster innovation is helping companies manage the risks around using some of these new tools, like large language models, because it's complicated. The data issues, and some of the benefits, of using ChatGPT to help generate code for your business can be very complicated for engineers. The easiest thing to do is say, "Don't use it," because the IP ownership of the code that is created, or the unwitting contribution into the cloud of code that is the IP of the company, are certainly, on the surface, bad things. But lawyers can help balance those bad things against the good things associated with the increase in productivity, to ensure that your engineering team can compete successfully on a global scale. You know, you shouldn't rely on the engineer to make those calls, and I don't believe the simple "don't do it" response from corporate legal is the right answer either. And so having lawyers who really appreciate the trade-off between the risks and the benefits of using some of these new technologies is an area where, I think, lawyers could do a lot of good, or bad if they get it wrong.
Kevin Angle: All right. Well, thank you so much, Colin. This was really interesting, and I appreciate your time.
Colin Angle: Sure, my pleasure.