AI hot takes and debates: Autonomy

Jerod:

Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.

Jerod:

Now, onto the show.

Daniel:

Welcome to another episode of the Practical AI Podcast. This week, it's just Chris and I. We're joining you for what we call a fully connected episode, which is one without a guest, where Chris and I just talk about a certain topic, explore something that's of interest to us or that we've seen in the news, or help kind of deep dive into a learning-related resource. I'm Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin.

Daniel:

How you doing Chris?

Chris:

Hey Daniel, I'm doing fine. Looking forward to today. We have something fun planned.

Daniel:

This should be fun. Yeah, so for our listeners, we've been in the background of the show really talking about a lot of things about the future of the show. Some things in terms of, you know, rebranding and updating our album art, which you'll see soon, new intros and outros, but also new kind of focus areas for content on the show, really being intentional about some of the things that we're exploring. But if you've been listening to us for a while, you also know that Chris and I like to have fun together and explore things in interesting ways, just the two of us. Some of our team here came up with the idea: well, what if we took some shows and had a kind of AI hot takes and debates type of show?

Daniel:

This is the first iteration of whatever we'll end up calling this. AI hot takes and debates, that's a good one. But basically the idea here is that there's a topic that's part of the wider conversation around AI, one that people are divided on. And we will take some of the arguments for one side and the other side. And it really doesn't matter what either Chris or I actually think on the subject. Maybe some of that will come out in the discussions.

Daniel:

But really, Chris, you take one side and express some of those arguments for that side; I take the other side and express some of those arguments, and we discuss it together, hopefully exposing the audience to both sides of a topic. Also, it sounds like fun. What do you think, Chris?

Chris:

I think it sounds like a lot of fun. And of course, as we were coming up with the first topic, you might say I threw a hand grenade into the mix.

Daniel:

Good one, good one.

Chris:

There you go. And it's a topic of particular interest to both of us, especially to me. But it's important that I say we're gonna talk a little bit about autonomy in warfare and in some areas outside of warfare. But I really wanna point out, I really wanna emphasize that, you know, Daniel and I don't necessarily take a hard side on either side of the topic. It's kind of an assignment to take one side or another.

Chris:

But I also wanna emphasize, because I work for Lockheed Martin, who's a defense contractor, that I'm definitely not in any way representing a Lockheed Martin perspective. This is just a fun debate issue that we're having today. So I don't often have to do disclaimers, but given the topic, I felt that was important today.

Daniel:

Yeah. Well, I was gonna say autonomy means a lot of things.

Chris:

It does.

Daniel:

And it can be applied in a lot of industries. And so I think some of the arguments that we're gonna talk about here are equally applicable to autonomy in, you know, self-driving cars, airplanes, surveillance systems, manufacturing, whatever it is; there are similar concerns to those that we're talking about. Obviously, if you have autonomous weapon systems, there's a particular kind of life-and-death element to it, and it intersects with conflict around the world. In the YouTube videos I've watched of interesting debates, I've learned over time that you should frame this sort of thing not as a question but as a statement, with one side taking the affirmative and one side taking the negative. And so the statement, if you will, is: autonomy within weapons systems or military applications is an overall positive thing for the development of the world and safety and resolving conflict, all of those things.

Daniel:

Because Chris is maybe a little bit closer to this, I'm going to take the affirmative side of that, which is, hey, autonomy could actually provide some benefit and more safety and less loss of human life, a more ethical sort of application. And Chris is gonna take the opposite side of that, arguing that we shouldn't have autonomy in these sorts of systems. Does that make sense, Chris? Did I explain that in a reasonable way? I'm not much of a debater.

Daniel:

So

Chris:

No. That sounds fine. We will go with that. I wanted to mention to the audience, it was actually someone who kinda got this into my head a little bit, through a post from one of our former guests on the show. Back at the beginning of 2024, on February 20, we had an episode called Leading the Charge on AI and National Security, and our guest on that show was retired US Air Force General Jack Shanahan, who was the head of Project Maven and the DOD's Joint AI Center.

Chris:

He actually founded it. And I follow him a lot on LinkedIn, and he had highlighted two papers. One was kind of a pro autonomous weapon systems paper written by Professor Kevin Jon Heller, who is professor of international law and security at the University of Copenhagen's Center for Military Studies. He's also a special adviser on war crimes at the International Criminal Court. And he wrote a paper called The Concept of "The Human" in the Critique of Autonomous Weapons.

Chris:

And subsequent to that, after he wrote that, two other individuals, Elke Schwarz, who is professor of political theory at Queen Mary University of London, and Dr. Neil Renic, who is also an expert in this area and a researcher at the Center for Military Studies in the Department of Political Science at the University of Copenhagen, wrote a counter paper on this.

Chris:

And that counter paper is called On the Pitfalls of Technophilic Reason: A Commentary on Kevin Jon Heller's "The Concept of 'The Human' in the Critique of Autonomous Weapons". That was a very recent paper, May 23 of this year, 2025, and the original paper by Heller was from December 15, 2023. It just seemed like an interesting, topical thing to jump into. And like I said, we're not gonna hold ourselves bound by the strictness of their topics, but mainly just have fun with it.

Daniel:

Yeah. Yeah. Have fun with it. And we should say, at least our plan for these types of conversations between Chris and me is that one of us would express core arguments of one of the sides of these debates, but then maybe open it up and just have some casual conversation about each of those points, kind of how we think about it, the validity of it, that sort of thing. Maybe that's just because I selfishly don't wanna come off as a bad debater and not have a good rebuttal for my opponent.

Daniel:

But also Chris and I are friends, so that makes it...

Chris:

And it's all for fun in the end.

Daniel:

Cool. So again, the kind of hot take or the debate of today is that autonomy is an overall positive in terms of application to autonomous weapons systems or in military applications. But I think as you'll see, some of these arguments might be extrapolated to other autonomous systems like self-driving cars or planes or maybe surveillance systems or automation in manufacturing, etcetera. So I think we can start with the first of these, and I'm happy to throw out the first kind of claim here. And really I'm basing this claim on, again, I'm kind of on this affirmative side, taking part of that article by Heller and really highlighting its arguments.

Daniel:

So my claim is that autonomy within conflict and weapon systems is, or could be, positive because real human soldiers, or humans that are part of any process, are biased, emotional, and error prone, right? And if an autonomous system is able to outperform humans in adhering, for example, to international humanitarian law, then that actually minimizes the harm of these systems, right? As opposed to being solely reliant on humans who are, again, biased, emotional, and error prone. So part of the article that motivated this talks about how decision making, particularly in high-impact scenarios like a combat scenario, is distorted by these sorts of cognitive and social biases, negative emotions, and the actual limitations of humans in terms of how much information they can process at any given time. And one of the quotes that I saw from there is that very few human soldiers in a firefight actually contemplate the implications of taking a life. Extrapolating that to other things, maybe humans flying planes or humans driving cars or humans doing manufacturing activities.

Daniel:

They aren't contemplating the implications of every action that they're taking. In some ways, they're just trying to survive in those environments or get through the day or deal with their emotions. So that's kind of the first claim. What's your thought, Chris?

Chris:

Well, I think the opposing viewpoint on that is gonna be really that we humans value the ethical and moral judgments that we make. We put a lot of stock in that, and our ability as humans to make them is really important. So, you know, the argument against the idea that kind of the fog of war takes over for the individual, that they're not thinking about higher thought, is that on the other side, we obviously do want humans in combat to have the ability to make moral judgments, and that does happen. You know, where you have someone thinking, that's a child out there that has the weapon, and no matter what happens, I'm just not comfortable doing that.

Chris:

And that's the kind of moral judgment that we as humans really value. And we don't necessarily, you know, trust autonomy to be able to make such distinctions in the near future. And so, you know, the notion of taking a life-and-death decision out of a human's hands is something that we really struggle with. And it creates ideas like the accountability gaps that go along with that, and the need to ensure that, as horrendous as war is, there's some sort of ethical core that is at least available to the common soldier making decisions in the heat of the moment.

Daniel:

Yeah. This is a really interesting one, Chris, because, I mean, I know you and I have talked about this, and you're also a pilot. Thinking about airplanes, I've been watching a series of shows with my wife; I think it's been produced for a very long time, but it's on streaming now, and I think it's called Mayday: Air Disasters. Each episode highlights a different commercial airline disaster that has happened historically and kind of what happened. They go through the investigation, and some of the clips are kind of funny.

Daniel:

Not in terms of what happened because, obviously, they're tragic, but in terms of how it's produced. It's very, very literal, and some of the acting is maybe not top notch. But what I've learned through that show is I have been surprised at the amount of information that pilots, let's say, have to process, and given a certain state of fatigue or even just unclear leadership, like who's in charge in a cockpit, that can lead to just very irrational decisions. And so I do understand the argument, whether it's in terms of weapon systems or flying airplanes or driving cars: certainly people aren't always making those rational decisions in terms of how they process information.

Chris:

I totally agree. But it's kinda funny, as we talk about that and about the flaws that humans have in these processes, and we certainly do have them, there's also kind of the notion of what it means to process that much information and put it into context in the emergency of the moment. And pilots do that quite a lot, you know, as things happen. There's that uniqueness of the human brain being applied to that, and we don't yet, you know, feel that autonomy can take that all the way.

Chris:

And certainly, if you poll the flying public right now, you know, in terms of airliners and other places where they would put their own lives into the hands of autonomy, most people still, and I don't have a particular poll in mind, but having seen a bunch of them over the last couple of years, would argue that they are not comfortable not having a human in that. And once again, that is the notion of having someone that cares for them, that is in control, that you have trust with and a tremendous amount of expertise, being able to ensure that a good outcome occurs. And I think that's an important thing to accept and recognize, that people are not there yet, not in general. Now I will note, as someone who is in the military-industrial-complex world, that there's a lot more autonomy on the military side than the civilian side.

Chris:

But I think we need to get to a point where our autonomy can match the expectations that we already have of our human operators.

Sponsors:

Okay, friends. Build the future of multi-agent software with Agency, AGNTCY. The Agency is an open source collective building the Internet of agents. It is a collaboration layer where AI agents can discover, connect and work across frameworks. For developers, this means standardized agent discovery tools, seamless protocols for inter-agent communication and modular components to compose and scale multi-agent workflows.

Sponsors:

Join CrewAI, LangChain, LlamaIndex, Browserbase, Cisco and dozens more. The Agency is dropping code, specs and services. No strings attached. You can now build with other engineers who care about high quality multi-agent software. Visit agency.org and add your support.

Sponsors:

That's agntcy.org.

Daniel:

Okay, Chris. Continuing on in relation to the arguments for autonomy, meaning AI or computer systems totally in control of things like weapon systems or other things in a military context, or other autonomy. So the next argument that I'll put forth in terms of pro autonomy, or the affirmative here, is, since we've been talking a lot about kind of human morality or intentions infused in that, the claim that responsibility essentially remains clear; there's no unclear responsibility here. So some people might say, oh, if you have an airline crash and it was the computer's fault, right, then where do you put the blame?

Daniel:

And here I would say, in expressing this part of the argument, that responsibility is still clear; it just remains with the designers or the commanders of these systems, just as with other kinds of automated systems. And if you think of automated weapon systems, they don't disrupt these kinds of legal and moral frameworks, because it's the designers of these systems that actually impose those frameworks, or should impose them, within the systems. In this case, the argument would be that there's no significant difference between human soldiers and autonomous weapons in terms of criminal responsibility, because responsibility lies with those who design, program, or authorize deployment of these machines, not with the machine itself. These are just instruments of the will of their developers and those responsible for employing them. So that's the argument.

Daniel:

What are your thoughts?

Chris:

And I think the counterargument on that is gonna be that, you know, reducing civilian casualties alone through automation is not enough. Even recognizing that automation can follow rule sets like the laws of war and other applicable laws, we don't want to lose the notion of moral decision making and the emotions that go with that. We have empathy. We have the recognition of moments where, under a particular rule or law, an action might be authorized, but there might be a kind of human moment of restraint, because you recognize a distinction in a situation that is complex. The world is not simple. It's not black or white.

Chris:

And these kinds of qualities of recognizing the situation, that there are lots of ancillary concerns that aren't necessarily covered under strict rules of engagement, those matter, and we care about those. And, you know, if you put in a model that's trained on reacting to specific tactical situations and is following the rules, so it's entirely legal, it still isn't able to recognize those complexities, it isn't able to have that empathy, and you can't replicate that with present technology or near-future technology in a way that takes the human out of the equation. So it kinda depends on how you're seeing that, in terms of whether you want to take those human qualities entirely out of the equation and rely on autonomy. And I think that's one of the great questions of our time.

Daniel:

Yeah. I have a follow-up question here, Chris. Maybe it's a proposition. I don't know.

Daniel:

But you mentioned how many rules or things a human intervening in a situation has to weigh; they make kind of decisions in gray areas, where things are maybe fuzzy. I'm wondering, because here we're framing this very binary, right, automation or not automation. One of the things I found just practically, outside of this realm, in putting in automation with customers of ours that are in manufacturing or healthcare or wherever, is that there could be an automation and you can analyze the inputs to that automation to understand if things are similar to things you've seen in the past or different from those you've seen in the past. So for example, maybe you're processing clinical documents for patients, right? And there's a set of scenarios that you've seen quite often in the past and you've tested for, and you have a certain level of confidence in how your AI system will process these clinical documents, maybe make some decisions and output a notification or an alert or a draft of an email or whatever those automations are.

Daniel:

Then you could receive a set of things on the input that is very dissimilar to what you've seen in the past. Oh, this is a unique situation for this patient in healthcare or something. You kind of don't know what the AI system is gonna do, right? And often we would actually counsel our customers and our partners: well, that's an area where maybe you wanna flag that and just not have the AI system automate it; those are maybe the things that should go to the human.

Daniel:

So I just wanted to call out that kind of scenario, because we're talking in a very binary way, like, is it automated or is it not? There's kind of this area where it's kind of automated and kind of not. And you're looking at the situational awareness of what's coming into the system in light of the distribution of data that it's seen in the past.
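For readers who want to see the shape of this routing idea in code, here is a minimal sketch, assuming an embedding-based similarity check against previously seen inputs; the embed() stand-in, the 0.8 threshold, and the example documents are all hypothetical, not anything from an actual deployment:

```python
import numpy as np

def embed(document: str) -> np.ndarray:
    """Stand-in for a real text embedding model; any embedder would do here."""
    rng = np.random.default_rng(abs(hash(document)) % (2**32))
    return rng.normal(size=384)

def familiarity(new_doc: str, reference: np.ndarray) -> float:
    """Max cosine similarity between the new input and previously seen inputs."""
    v = embed(new_doc)
    v = v / np.linalg.norm(v)
    refs = reference / np.linalg.norm(reference, axis=1, keepdims=True)
    return float(np.max(refs @ v))

def route(new_doc: str, reference: np.ndarray, threshold: float = 0.8) -> str:
    """Automate familiar cases; escalate unfamiliar ones to a person."""
    if familiarity(new_doc, reference) >= threshold:
        return "automated_pipeline"
    return "human_review"

# Build the reference set from inputs the automation has already been tested on,
# then route each incoming document before taking any automated action.
history = np.stack([embed(d) for d in ["discharge summary A", "lab report B"]])
print(route("unusual combined presentation C", history))
```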

Chris:

You're posing a really interesting question. And I guess I would frame it as: you're looking at these different types of outcomes, and based on the data, within these structures there's this notion about where the human fits in relationship to that. I mean, you kind of raised that about the autonomy raising those issues up to the human. And I think that's a big part of figuring out, on the path forward, what that relationship looks like. Because I think it varies quite widely across use cases.

Chris:

And the industries you're looking at. I know in my industry, there's the notion of kind of a human in the loop versus a human on the loop. So there is autonomy where humans and the autonomous systems are working as partners in a very direct way. And some people might say the human perspective on that is that the autonomy kinda makes the human who's in the loop kind of superhuman, by giving them a lot more ability to process the right information at the right time to make the right decisions. And then contrast that against the notion of a human on the loop, which is kind of what you mentioned, where it raises something up that might be an exception or a special case or a specific thing in a larger quantity of data that you're trying to bring to attention as a key finding.

Chris:

And they suit different use cases, but it's not always very clear what those are when you're trying to design a system that does that. So I think there's a lot of room for both. I think that right now we tend to look at human in the loop more often. But I will say, just as an observation about the industry that I'm in, that in this particular case the nature of warfare is changing rapidly and the speed of war is increasing rapidly. And we're challenged with what a human can do if things are speeding up all the time.
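As a rough illustration of the in-the-loop versus on-the-loop distinction Chris draws here, this is a minimal sketch; the Action type, function names, and 0.9 exception threshold are hypothetical and not from any particular system:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    description: str
    confidence: float

def human_in_the_loop(propose: Callable[[], Action],
                      approve: Callable[[Action], bool]) -> Optional[Action]:
    """Human IN the loop: every proposed action waits for explicit approval."""
    action = propose()
    return action if approve(action) else None

def human_on_the_loop(propose: Callable[[], Action],
                      notify: Callable[[Action], None],
                      exception_threshold: float = 0.9) -> Action:
    """Human ON the loop: the system acts on its own and only surfaces
    low-confidence exceptions for a person to review."""
    action = propose()
    if action.confidence < exception_threshold:
        notify(action)  # raise the exception / key finding to the overseer
    return action

if __name__ == "__main__":
    proposed = lambda: Action("flag shipment for inspection", confidence=0.62)
    # In-the-loop: a person approves before anything happens.
    print(human_in_the_loop(proposed, approve=lambda a: True))
    # On-the-loop: the system acts and flags the low-confidence case for review.
    print(human_on_the_loop(proposed, notify=lambda a: print("review:", a)))
```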

Chris:

So, yeah, you've raised some really great questions there in terms of how to solve for some of these, especially considering what we've talked about, you know, the ethics versus deferring to the autonomy.

Daniel:

Yeah. I'm glad that you brought up speed and efficiency here. While you were talking, that's one thing I was considering. And maybe I'll give just a couple examples. It could be that in, let's say, an aircraft situation, I'll go back to that.

Daniel:

Maybe it's just because I've been watching that show with my wife. But in that situation, maybe something comes up and you realize the automated system should alert the human operators. Well, if the human operators haven't been tuned in to what's going on with the system, in this case the aircraft, that alert could happen, and they could have to gain so much context and so much information to understand why the alert is happening, what systems are malfunctioning, etcetera, that by the time they catch up, the plane's crashed or something like that, right? And so in that sense, the speed and efficiency piece is really relevant.

Daniel:

I was also thinking, in a maybe less life-or-death type of situation, if you're using these vibe coding tools and you're creating X web app or something like that, you're creating this new system, this new project that you're working on, you could have those automated systems working for lots of time, creating lots of things, and then, oh, all of a sudden, there's this error that the vibe coding tools can't resolve. Well, then you step into that and you're like, I don't know anything about what's going on here. What files have been created? How is this configured? What is really the context of this error?

Daniel:

And it could be more inefficient for you to actually debug and deduce that than it would have been to just write all that code yourself; the overall efficiency would be less. Now that may not always be the case, and different scenarios happen, but that element of speed and efficiency really strikes me as a key piece of what we're talking about.

Chris:

Yeah. It's funny. Your example really resonates with me, because I think I've run into that as I've tried vibe coding myself compared to more conventional programming. And it kind of depends.

Chris:

There's a lot of context: you're not only running into a problem, but you have to decide, to some degree, and there may be extraneous reasons to pick certain structures, certain tools, and certain things. And if you're vibe coding, the model may have a lot of freedom to choose how it's structuring what you're building and the tools that it's using, and that may have an impact. I think that's where I run into vibe coding challenges myself: giving the model the autonomy to make choices that I really think I should be making myself, whether I'm right or not, choices that may have to do with performance or constraints or future team skills or whatever else we may have. So a great point you're making there.

Daniel:

Okay, Chris. The third affirmative claim that I'll express from the side of, you know, pro autonomy in things like weapon systems, conflict, and maybe other areas of autonomy, is that autonomy could actually reduce harm, casualties, and suffering, because autonomy actually results in greater precision or consistency. So banning these sorts of systems on principle may cause more harm than good, essentially. And you could kind of play this out. A couple of things that are said in the paper by Heller that was inspiring this conversation are, you know, that it's only a matter of time before these systems maybe comply with international humanitarian law better than human soldiers, or that these systems, and I think this would be true maybe at least in certain cases now, have, or have the potential to be, more precise than even the most precise non-autonomous systems in terms of targeting or performance or operation in some sort of way.

Daniel:

So the argument here is that a kind of outright ban would actually result in more suffering rather than less suffering. So how does that strike you?

Chris:

Well, before I make a counterargument against it, it's interesting, something that you noted there. I'll just throw out a two-second example. You know, the Russian invasion of Ukraine has been going on now for several years. And as reported in the news, there's quite a bit of small autonomous drone warfare going on. And one of the things that we've seen there, a combination of both human and autonomous, and I think this will kinda lead into my side a little, is that you may have drone operators flying, but often when they make the strike, the final bit of the strike has been made autonomous, where the actual hit itself in the last couple of seconds will be autonomy driven, to your point about precision and such.

Chris:

So you definitely have a human in the loop through most of that operation, and then that human has to decide to go fully autonomous for the precision side. But, you know, on the counter side of that, it's important to note that as you are bringing in more autonomous capabilities, especially when you're moving the human out of a direct loop to be just kind of human on the loop, where they're overseeing the autonomy rather than directly directing the weapon itself, there's a dehumanizing aspect to that. There is this notion of: I have a tool, and as a human, I don't even need to drive my tool to the bitter end to strike my objective.

Chris:

I can just let the machine take it home. And I think there is something that is potentially revolting to a lot of people about that notion, the sense of devaluing a human life: that even though that may be your adversary, and your mission is to go eliminate that adversary, it's still a human being, and you're still just outsourcing to your machine to go take care of your current problem, which is your target. And that is a big struggle in this ethical conversation that we're having: even if it is your enemy and your adversary that you're trying to address in the manner that you have to do it, what does it mean to outsource it? Does that make it even more dehumanizing and inhumane to do it that way? So I think both sides have really interesting arguments.

Chris:

And I hope listeners who've been listening so far are seeing that there's not a right or wrong answer on either side of these issues. They both have pros and cons so far.

Daniel:

Yeah. Well, there's probably some people out there that would take a binary point of view on this.

Chris:

I'm sure there are.

Daniel:

Generally, I... actually, you know, I don't know if people saw the movie Conclave, but the guy in his sort of opening homily in that talks about certainty as a kind of evil and potentially harmful thing. I think there's definitely at least some nuance to this from my perspective, and that needs to be discussed. And it's good to see people engaging on both sides of this, whatever level of certainty you might have. But yeah, one of the things that came to my mind as we were talking about this, Chris, is that there's one side of it that I think is very relevant, which is this dehumanizing element, like you're saying. What are the actual implications if I don't have to stare my enemy in the face?

Daniel:

I don't have to sort of see it happen. It happens one step removed, which I think is a very valid concern actually; that is quite concerning from my standpoint. Also, I think the point I was thinking of while you were talking was from a kind of command and control side, or a leadership side, like those that are actually making some of those decisions. I think you could imagine, and probably find, a lot of scenarios where leadership has made a decision that they expect to be carried out by their troops, or maybe in the enterprise or industry scenario, their employees, right? And either by way of miscommunication or by way of outright refusal or malicious intent, that plan is not carried out, with the result that something suffers, whether that be human suffering or commercial suffering in terms of loss and that sort of thing.

Daniel:

So I think that there could be this element of: if you are in control of autonomous systems, and those autonomous systems actually carry out your intent, and the leadership that you're putting in, or the command and control, is well intentioned towards the greater good, then you can be a little bit more sure that those things are carried out. So yeah, I get the sense that this is part of the argument too in the paper.

Chris:

Yeah, I agree. And command and control is hugely relevant and important to these systems, and not just in the military context, but in aviation and in all of the different places, you know; it can be a factory, that kind of thing. That's one of the crux issues that puts all of these into the context in which they're being applied. And another thing to throw in there as well is some of the legal frameworks that we have, both in the country that we're in, which is the United States of America, and elsewhere throughout the international community. There's a lot of constraining that we're trying to do. For instance, within the United States, we've talked a little bit about subscribing to the international laws of war, which all modern countries engaged in these activities should be doing, because ethics is kind of built into a lot of that stuff to keep us on the right track when we are engaged in conflict.

Chris:

But in addition to that, in the US, the Department of Defense has Directive 3000.09, which is called Autonomy in Weapon Systems. And it's far too detailed; we're not gonna go into it on the show here. But if this is a topic of interest to listeners out there, you can Google that and read the directive and kinda see where things are in their present state. And there's a lot of discussion, as we talk about autonomy in these contexts, about whether we need to update some of those to represent our current choices and our ethical frameworks, how we're seeing autonomy in our lives today versus at the point in time that it was written, and that's changing rapidly right now.

Chris:

So DoD Directive 3000.09, Autonomy in Weapon Systems, is another place to add some context to this conversation that we've been having. And then there's something else to note as we get into other industries, things like law enforcement, where conflict can be brought into this by definition, you know, where you're having a police action or something. What is appropriate there when you're talking about more of a domestic situation? And where do you want to go? There's so much that has yet to be figured out.

Chris:

And I really hope that listeners, whether or not this is anywhere close to the industries they're in, realize it still affects their lives. If you look at the disturbances and government actions that are balancing each other, regardless of what your political inclinations are, that we're seeing out there, think about what's happening and what the responses are and what's appropriate, and where do these new technologies fit into a world that is rapidly, rapidly evolving? So I hope people are really thinking deeply about this and joining in on the decision making. It's a good moment in history to be an activist.

Daniel:

Yeah. Yeah. And if I'm just sort of summarizing: there's probably only a small portion of our audience, or I don't know how small it is, that is actually involved in kind of autonomous weapons sorts of things. But I think for the wider group that are building AI-driven products, AI integrations into other industries, there are some really interesting key takeaways from this back-and-forth discussion. And I'm just, in my mind, summarizing some of those, Chris.

Daniel:

One, from that initial point, is that when you're creating AI integrations into things and measuring performance, to some degree you want to think about, well, what is the actual human-level performance at this? And that is usually, if not always, not 100% accuracy or 100% performance, right? Humans make mistakes. And so when you're thinking about those implementations, whether it be in machine translation or other AI automations or knowledge retrieval, ask how a human would actually perform in that scenario and maybe do some of those comparisons. Absolutely.
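As a small illustration of the comparison Daniel suggests here, this sketch scores an automated system against a human baseline on the same labeled examples rather than against an imagined 100%; all labels and values below are made up for illustration:

```python
def accuracy(predictions: list[str], gold: list[str]) -> float:
    """Fraction of items where the prediction matches the reference label."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Hypothetical evaluation set: reference labels, model outputs, and the labels a
# human reviewer produced for the same items.
gold_labels   = ["approve", "escalate", "approve", "deny", "approve"]
model_outputs = ["approve", "escalate", "deny",    "deny", "approve"]
human_outputs = ["approve", "approve",  "approve", "deny", "approve"]

print(f"model accuracy: {accuracy(model_outputs, gold_labels):.0%}")  # 80%
print(f"human baseline: {accuracy(human_outputs, gold_labels):.0%}")  # 80%
```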

Daniel:

Yeah, I think a second point would be that there is some responsibility, if you're creating automations, that lies with the designers and the builders of these systems. And so you should take that responsibility and take ownership of it. And then I think finally, from that last piece of conversation, I love how you said it, Chris, that we should not dehumanize those that we're serving. That doesn't mean we can't use autonomy in any number of scenarios, but we should value human life and value those that are going to be using our systems, and actually not try to distance ourselves, but have that empathy, which has the added side benefit that you're going to create better products for people, you're gonna create better integrations that they want to use, and you're gonna enhance their agency, hopefully, with AI. I don't know, those are a few interesting summary points maybe from our discussion.

Chris:

Absolutely. And there was one thing I wanted to note: this has been a really interesting conversation from my standpoint, in that at the beginning of it, you assigned me the role of kind of the caution toward autonomy as opposed to being all in, and being in the industry I'm in, I am actually fairly pro autonomy in a lot of ways. Partially because, working in the industry, I've developed a sense of confidence not only in the technology, but in the people doing it, because they're all very human themselves. But I found it was really instructive for me to remind myself to play the part of the caution side, just to remind myself about all these points that I really care about as a human. So I think that actually worked out much better than if I had been the one taking the, you know, kind of pro, all-in-on-autonomy position.

Chris:

So I wanted to thank you. It's been a very, very good discussion on this back and forth. And it's really making me think, especially as you would say something that I normally would find myself saying, and then I'm thinking, okay, well, it's time for me to think about that other point there. So I appreciate that. Yeah. Good thoughtful conversation.

Daniel:

Yeah, this was great. Hopefully it is a good learning point for people out there. I'll remind people as well that one of the things that we're doing now is listing out some good webinars for you all, our listeners, to join the conversation on certain topics. So if you go to practicalai.fm/webinars, those are out there. But also we would love for you to engage with us on social media, on LinkedIn, Bluesky, or X, and share with us some of your thoughts on this topic.

Daniel:

We would love to hear your side of any of these arguments or your perspective on these things. So thanks for listening to this, our first try at an AI hot takes and debates. Don't

Chris:

forgive us.

Daniel:

Yes. Exactly.

Chris:

Our experimentation here.

Daniel:

Let's do it again, Chris. This was fun. Sounds good.

Jerod:

Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Jerod:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats, and to you for listening. That's all for now, but you'll hear from us again next week.
