Inside America’s AI Action Plan
Welcome to the Practical AI podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.
Jerod:Now, onto the show.
Daniel:Welcome to another fully connected episode of the Practical AI podcast. I'm Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost Chris Benson, who is a principal AI research engineer at Lockheed Martin. In these episodes where it's just the two of us, we try to analyze some of the things that have come out in recent AI news and maybe touch on a few practical elements that will help you level up your machine learning and AI game. And I'm really excited today, Chris, to talk about something that is very intriguing: somewhat confusing for some people, exciting for some people, maybe producing a little bit of uncertainty... just a lot of mixed feelings, maybe, around this.
Daniel:We're talking about America's AI Action Plan, which was just released by the White House here in The US. So yeah, this just came out not too long ago. Obviously, we're just kind of reading through the initial release and some reactions, but I'd say there have been some mixed feelings: it has generally rolled back some things that have been happening in The US around AI and regulation, and maybe introduced some other things. There are elements of the action plan that I find really inspiring and encouraging. So yeah, it's maybe an interesting mixed bag for many folks.
Daniel:But yeah, I'm sure it's come up in your circles and in discussions, maybe even with family or around town that America has an AI action plan. What are your initial thoughts?
Chris:Yeah, since the White House released it, I've definitely had people asking what I thought of it. I know both you and I read it a while back when it was released, and we're just getting to a point where we have a chance to talk about it on the show. And leading into it, I'm, of course, going to do my usual disclaimer: because we're talking about stuff that can tangentially touch government affairs and military, I am only speaking for myself, as always, and never ever speaking for any other organization, and most especially not my employer, Lockheed Martin. So I want to be very clear about that, because I do want to keep my job.
Chris:So this is just Chris Benson on the podcast. Good to know. Yeah. So it's an interesting mixture of a lot of things, as you're alluding to. And like you, there are some things I was inspired about.
Chris:There's a fair number of things in there that are inspiring. But there's also a lot of stuff that rolls back other aspects that I think a lot of people would agree, in a nonpartisan context, would be advisable, some of the protections in certain areas. And my single biggest gripe is that it is a starkly partisan document in many respects. It feels like it was written by some folks who had some good ideas in there, but were being very careful to make sure it would fit in with the current political climate. As I read it, I felt like I kept bumping into that filter the authors would have had on it.
Chris:That's my first impression: a highly filtered document with good and bad in it.
Daniel:Yeah. Yeah. Well, I'm interested... I mean, kind of to set the stage a little bit: we've talked about other things that have rolled out from the government, directly from the White House, on the show. I'm just looking back at 2019, if you remember those days.
Daniel:In 2019, there was a White House executive order on AI. This was during the Biden administration. And we talked about that on the show. It's interesting in that case. So if we roll back the clock a little bit: for quite a while, there has been this effort at NIST, the National Institute of Standards and Technology, around AI and the AI Risk Management Framework. That has some hints of regulation.
Daniel:It's a government thing, but it's not official on any kind of regulatory front. Then there was this White House executive order, which I think I might have said Biden, but I think this one that I referenced from 2019 was a Trump executive order. Now I'm getting all the sequence confused, but that's episode, what, '34. It's back a while. It's back a while.
Daniel:Think I was thinking, because I think we did talk about another executive order. We did. Anyway, there's been this kind of rolling series of executive orders back and forth on both sides, like you say, of the partisan line. What, from your perspective, maybe before we dive into the details, some of the executive orders that have come into place do have some type of regulatory implication in the sense that I remember one of them that we talked about, for example, around model builders and the requirement of certain sizes of models to be regulated in a certain way. In terms of this AI action plan, what is your understanding, if you do have one or don't have one, what does it mean that this is an AI action plan?
Daniel:It's not an executive order. Partially in my understanding, it's meant to not regulate. And so maybe it's just a document. But really, guess, what is the implication of this more broadly, do you think, in terms of its impact?
Chris:Yeah. I mean, it definitely reflects this current administration's approach, well beyond AI, to a larger viewpoint. This one is definitely all about removing regulation. I think there are places where regulation could be beneficial and places where regulation might get in the way; you have to pick carefully where you want to put it. One of the other things I've noticed is that there's a lot of verbiage in it that, on the surface, almost anyone would say, yeah, I'm for that.
But we also know, if you look at the politics outside of this specific document and the specific AI field, that what free speech means depends on who you're talking to. If you put a Democrat and a Republican in a room and talk about what free speech is, you may get some different answers on what that is and what's implied in it. And so it's one of those situations where we are left to interpret what the intent of some of these words means along the way. And it's almost impossible to do that without taking in the larger political environment, which sadly, I think, detracts from the document; in a lot of ways, I think it could be better. I'll also note that there's a whole bunch of recommendations in it.
And no funding for any of those recommendations is noted. So it's interesting in the sense of where you go from here. Whether it's a good idea or a bad idea, from my perspective, you're still left with: well, great. Maybe I like that idea. How are we gonna do that now?
Daniel:Yeah. I think as opposed to some other executive orders that had some direct implications right away, this is, again, an action plan. I guess that's what I was meaning in terms of impact: what impact is this going to have other than being a document? For people out there, one implication that could be thought of is universities applying for grants, or companies applying for SBIRs or other types of government interactions.
Daniel:I think in the current climate of those things getting approved, if it's AI related, it probably needs to kind of tie into this action plan for it to have a greater chance of success. Tying into that action plan kind of has this trickle down effect to all of these grants and innovation that happen across university and across small business and etcetera, etcetera. So there is maybe this trickle down effect, but it's not a sort of direct, we're going to fund this or that and there's this program, but maybe that is more the impact in terms of the climate of what is funded and how that affects businesses and universities or research institutions that get government funding.
Chris:Yep, I totally agree. It's kind of funny. We've talked about some of the ambiguity as we've gone through this, the good and the bad of removing regulation, the ambiguity of the free speech component of it. One of the things that I'll say I like the intent of is the promotion of open source and open weight AI, which are both explicitly pointed out at several points in the document. And I know that we have long talked about that on the show.
So I think that's one of those inspirational moments, though there's not a lot of how we're gonna get there, I must say.
Daniel:Yeah. Actually, we've talked about a little bit of the impact and set some of the stage, and I love this point that you're bringing up about the open access models, which of course I'm very passionate about. But maybe it would be helpful also for our listeners.
Daniel:There's a bit of a structure to this document. If you're listening to this in your car and you don't have it in front of you, of course, we'll link it in the show notes. But there's these sections and they're split up into pillars of the AI action plan. So kind of these key pillars. The first pillar is accelerate AI innovation.
Daniel:The second pillar is build American AI infrastructure. The third pillar is lead in international AI diplomacy and security. So there's multiple things under this first pillar. So the first pillar being accelerate AI adoption. And you've highlighted a couple of those things that might be more on the kind of controversial side related to removing regulations, specifically maybe even rolling back some of those things related to the NIST AI risk management framework, protecting free speech in AI models.
Daniel:Looking at this AI systems must be built from the ground up with freedom of speech and expression in mind and be free of top down ideological bias. There's that discussion of the free speech piece under the innovation. Then they go to this discussion, like you talk about, about promoting open source and open weight AI, which of course is one of those elements that I'm really excited about. Some of the recommended policy actions they talk about is ensuring access to large scale computing for startups and academics, partnering with leading technology companies to increase research and fostering this sort of environment that creates a supportive environment for open models because they obviously see this as kind of a part of the future, which I would of course agree with. With OpenAI becoming open again and releasing open models for the first time a couple weeks ago, I think that certainly is a trend that we're seeing.
Daniel:But yeah, I love that piece. Of course, again, it's this mixed bag of things that as you go through each of these pillars, you kind of have to parse through. What else stands out to you, Chris, in this kind of accelerate AI innovation pillar? There's a lot of interesting things there about world class data sets, the science of AI, interpretability, adoption in government, adoption within the Department of Defense, lots of things in this innovation pillar. What else stands out to you?
Chris:One of the first things that I noticed: once again, on the surface, addressing dataset quality is great at that outermost layer of the onion. But as soon as you dig into it in the document (and people listening to us may be on both sides of this politically), they immediately go for a political objective directly under that, which is basically the removal of DEI concerns. For those of you who may not be familiar with the DEI acronym, it's diversity, equity, and inclusion, and that is a big focus of policy across all fields within the Trump administration. Had they not inserted some of the politics into it, I would have been encouraged to see that under there. But with each of these things, you kinda have to inspect it and see how much politics is in it.
And in fairness, not every point that they're making, under either this first pillar of accelerate AI innovation or the subsequent pillars that we'll talk about, includes explicit politics. But I know in my first pass, the first thing was trying to filter through some of those issues and actually get to the meat of it, and then realizing that many of the things that are suggested are already being done. So it's not new. But for those things that are not being done, how do you get there? And what's the funding to make things happen that aren't already happening?
Sponsors:Well friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're bringing the models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI powered app or launching a brand around the tools you built.
Sponsors:Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce, from names like Mattel and Gymshark to founders just like you. With literally hundreds of ready to use templates, powerful built in marketing tools, and AI that writes product descriptions and headlines for you and even polishes your photography, Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.
Sponsors:Check us out merch.changelog.com. That's our storefront, and it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.
Sponsors:Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.
Daniel:Well, Chris, just kind of tying up this first pillar of the AI Action Plan, accelerate AI innovation: I think there are a couple of potential implications people could take away. We obviously can't read the whole thing here on the podcast; we've just hit a few of the highlights of this first pillar. But I think there are going to be these sorts of debates over what defines bias in an AI model. And again, I'm just coming at it from, I guess, a technology standpoint. There could be bias related to maybe DEI types of things in one way or another, for example gender or whatever that might be.
But there's also bias corresponding to very real world implications around technology and security. For example, the bias of one model to be more susceptible to prompt injections than another model. That is a bias, but it has nothing to do with these other categories; it's something you can actually measure, as the sketch below illustrates. I think bias is going to be, in one sense, a term of art in our world, and in another sense, a politicized thing in our country.
Daniel:I think there's going to be a little bit of tension there. I think also with the kind of rollback of some of this regulation, it's going to put more pressure, I think, on businesses to strike the balance between innovation and the safeguards that they need to put in place. Because without explicitly being forced to, you're going to see companies, I think, at a very high profile level make critical mistakes in their AI applications and suffer some pretty hard brand consequences because of this. That in itself, just the commercial pressure, is going to force companies to think about their self regulation. Companies are going to have to figure out this balance for themselves, I think, as it's not flowing down guidance wise from standards bodies in the government.
Chris:Agreed.
Daniel:Yeah. Well, that gets to pillar one, accelerate AI innovation. Pillar two: build American AI infrastructure. Sounds exciting. What stood out for you here?
Chris:It does. One of the things (I almost forgot to mention it in pillar one, and it applies to pillar two as well) is a subtlety I'm noticing when they talk about building out AI infrastructure: there is already quite a bit of AI infrastructure in The US. We have a lot of major players with big investments in the commercial sector that all have government and military and intelligence links and things like that. And so this applies across the board. It applies to commercial.
It applies to government, etcetera. One of the things that really stood out to me in these suggestions is that there's a subtlety of picking winners and losers here. For most of these suggestions, there is already something in effect that does these different things. It may not be optimized, and that's open for debate. It may be a different organization from how they're envisioning it in the plan.
But there is a subtle picking of winners and losers through the entire document, in terms of approach and who is responsible, sometimes explicitly, and in some cases who they're likely laying it with. And that's an observation that particularly applies to the infrastructure section, pillar two: I could almost envision the lobbying from certain large companies that contributed to the infrastructure section as I was reading through the document. I'll leave those companies unnamed; our listeners can probably come to some of those conclusions themselves. So there are definitely some interesting influences at play behind the scenes in terms of the specific choices and verbiage being used.
Daniel:Yeah. And just to give people an idea of some of the things that are mentioned in the build American AI infrastructure pillar of the plan: creating streamlined permitting for data centers, semiconductor manufacturing, etcetera; development of the grid; restoring semiconductor manufacturing, kind of onshoring that; along with a number of security related things, so bolstering the cybersecurity of critical infrastructure, creating secure by design AI technologies and applications, and a mature federal capacity for AI incident response. I find this interesting: at the same time, there's a rollback of regulation in the public sector around things like what would have flowed into regulation from either the NIST AI Risk Management Framework or previous executive orders or that sort of thing.
There is this pressure from the government side where they're talking about trying to ensure somehow (I don't know exactly how; that's kind of vague in the document) that AI systems that the government relies on, particularly for national security, are protected against spurious or malicious inputs, to use some of the language from the document; promoting resilient and secure AI development; and having the DoD lead, in collaboration with NIST, this sort of refining of generative AI frameworks, roadmaps, and toolkits. It sounds like a lot of what has happened in the NIST AI Risk Management Framework, but maybe led from a different direction. So I'm seeing two things at once: what was rolling more to the commercial side is now being geared more towards government implementation of AI, and it's also being led from a different direction, more from the DoD kind of national security perspective versus, like, a NIST?
Chris:Yeah, there is a lot of kind of government-ish flavor, in all things military and otherwise, that's prevalent through it. And, you know, once again, as we're looking through the section, the subtlety of the outcomes that are possible here is pretty big. And I'll give you one example of that. Even though the third pillar refers to AI diplomacy and security, in the second pillar there are a couple of points where they talk about semiconductor manufacturing, in terms of streamlining the permitting of it, and there's one that's called restore American semiconductor manufacturing. And that has profound implications on the commercial space, and not only on our foreign policy, but on our national security, for listeners of this who may not be very familiar with foreign policy and military concerns. A big strategic reason why The United States is kind of the leader in defense-of-Taiwan concerns:
Taiwan's responsible for its own defense, but we have some obligations there that often come out in the news, and a lot of that has to do with the fact that the global semiconductor manufacturing industry is very well entrenched there, and it's very complicated to replicate that elsewhere. That is not a trivial thing to do. And so whether or not we are successful in doing what this document is advocating, it would have implications either way on things outside of AI altogether. And to your point, leading into that, there's definitely a shift: there's some stuff that's assigned to NIST, but there's also stuff that might have traditionally been with NIST or other similar, often more commercially focused organizations, moving into more of a DoD sphere of influence.
So, yeah, there's definitely an overall policy shift to be felt with the document going forward.
Daniel:Yeah. I'm kind of fascinated a little bit, in this AI infrastructure pillar, by one idea. The last thing they mention is this sort of federal capacity for AI incident response. I was just trying to parse through a little bit of this: what is an AI incident? Some of this terminology, I'm sure, is intentionally vague in certain places, but at least in terms of this AI incident piece, they say prudent planning is required to ensure that if systems fail, the impacts to critical services or infrastructure are minimized and response is imminent.
And they want to promote this kind of incorporation of AI incident response actions into incident response doctrine and best practices. So actually, other than the fact that I was a little unclear on what they mean by AI incident response, this was another element in this pillar that I liked from a certain perspective, in the sense that I really believe there are a lot of people developing AI applications and automations and that sort of thing without a lot of best practice cybersecurity knowledge. Then over here, there are the cybersecurity folks who aren't sleeping at night because they know all this is going on, while the executives or the board are pushing this AI stuff forward.
They don't want to bring the cybersecurity folks in, because that's going to slow things down, naysayers or whatever. But I do really think that there's a benefit to having this kind of cross functional collaboration between cybersecurity and AI development, and actually baking some of the known AI threats into cybersecurity playbooks and some of the cybersecurity best practices into AI application development. I think if you look back, you could make the parallel that when data science was big and hyped, there were data scientists building things that DevOps and infrastructure people couldn't support, and there wasn't a lot of cross functional collaboration on those things. We kind of worked through a lot of that. I think a similar thing is happening right now between AI development and the cybersecurity industry.
Daniel:I do like that kind of crossover in a lot of ways that they talk about under this AI incident response piece.
Chris:It is. And while we're talking about that particular point, as we were discussing the power shift that you're seeing, I note that in their third recommended policy action, they shift who is in charge of that. They say led by DoD, which is the Department of Defense; DHS, which is the Department of Homeland Security; and the ODNI, which is the Office of the Director of National Intelligence; in coordination with OSTP, which is the Office of Science and Technology Policy; the NSC, which is the National Security Council; OMB, which is the Office of Management and Budget; and the Office of the National Cyber Director. So they're clearly putting defense and intelligence at the front end of that, and kind of backloading it with more technology orientation. And that's very different from how we've seen it in the past. They have essentially flip flopped how that has traditionally been addressed, to your point, and there's potentially good and bad to that approach.
It will definitely create a different set of policies when you have a different set of lead agencies, with their priorities, addressing that. So how that ends up rolling out in terms of the discrete future executive orders and policy statements that come out, and how that funding is allocated, will be interesting to see in the years ahead. I will also say it will be interesting to see how this rolls over into the next administration down the road, whether that be a Democrat or Republican administration.
Daniel:Alright, Chris. We are on to the third pillar: lead in international AI diplomacy and security. That is the third pillar of America's AI Action Plan from the White House. So, lead in international AI diplomacy and security.
There are a few things under this pillar. There's discussion of exporting American AI to allies and partners. There's an explicit call out of countering Chinese influence, and strengthening AI compute export control enforcement, which I think maybe is geared towards some of that... yeah, how does NVIDIA stuff get over to China? Indeed. That's right.
Yeah. Actually, there are two. So there's strengthen AI compute export control enforcement, which was what I was talking about. Then there's another point, which is a double click: plug loopholes in existing semiconductor manufacturing export controls, and align protection measures globally. Ensure that the US government is at the forefront; invest in biosecurity, and maybe we can talk about that one here in a sec, why it might be interesting. But yeah, any first reactions to this set of things?
Chris:Well, while we have certainly seen current domestic politics in the previous pillars, this is, without a doubt, the most explicitly political pillar that we have seen in the document, with its turn to foreign policy. Pretty much as you read through each of the various points it makes and the underlying suggested recommendations, you definitely find it reflecting current policy. So, once again, this is very much political, and very much a turning toward national security concerns. It really doubles down on current administration policies toward China. And in that world, China is considered to be a peer or near-peer adversary, and different administrations have addressed that with different sets of policies.
And right now, the current administration's policy is fairly aggressive in that way, and we're seeing that if you look at the news right now. You mentioned NVIDIA and the concern in terms of export control. And China recently, in the last few days, has instructed its AI companies (I paraphrase): thou shalt seek GPU technologies domestically rather than going to The United States. So we're seeing it turning around like that.
So as we tighten controls (and you can make arguments pro and con there), we're certainly seeing China turn around with its own set of policies and approaches. Only history will tell us whether or not either side is making the best policy choices currently. But yeah, this document definitely doubles down on the current American foreign policy approach.
Daniel:It's interesting. I don't know if, when we started this podcast, I would have thought that GPUs would be this thing used as a kind of export and support for political allies, which is definitely what they've become, even before this AI Action Plan, right? This is sort of, in a sense, a new arms race. Absolutely. You kind of have, in a sense, a whole market, almost like an arms market, but with semiconductors and GPU cards, which is very interesting.
Daniel:Again, something we probably couldn't have expected some number of years ago, but is kind of a reality.
Chris:I mean, I've heard many people from across many companies and many industries talk about this as sort of a cold war-esque approach, where in the modern age it's all around AI and the technologies that are associated with and support that capability. Regardless of which side of the aisle someone is on, AI is, depending on who you're talking to, often referred to as the most important concern within the American military establishment. And that's not surprising; we're seeing that across others as well. So it really comes down to how you're going to effect your national intent, if you will, in addressing that.
But yes, GPUs, models, all these different concerns have become, you know, kind of global economic and military pawns for national policy, or international policy, around the world now.
Daniel:Yeah. Yeah. It's interesting. And, you know, I'm in Lafayette, Indiana. We've got a big semiconductor plant, SK Hynix, I believe it is, coming in north of town.
And certainly you've seen a lot of what Intel has done over the past years with plants out in Ohio. It will be interesting to see, because, as you mentioned, this sort of onshoring of semiconductor manufacturing is really a complicated problem to solve. I can't speak to it with full expertise, but any supply chain, especially at that level, is extremely complicated. And so just this idea of onshoring is definitely difficult.
I think that's partially also why they mention working with allies and that sort of thing, because any solution to onshoring and protecting the supply chain of GPUs, for example, is going to necessarily involve multiple nations. So yeah, it'll be interesting to see how that plays out. I do see certain themes that cut across a lot of the pillars. You've mentioned a couple that are maybe more politicized, but one cross cutting thing that's interesting is the talk of AI improving the lives of Americans by complementing their work, not replacing it, which again is vague, but is certainly a theme that we have promoted on this show.
So that's kind of encouraging. Everyone might interpret it differently: what does it mean to complement work? But I just wanted to call that out as one cross cutting thing.
Chris:There's an irony there, in that what we're actually seeing in the job market right now is that it's really tough to come into the job market, especially if you are a junior worker in the space. That's because the models that are out there, both gen AI and others, are in a lot of cases replacing junior positions altogether, and those positions are not being backfilled. That's made the job market really tough for junior level people in particular, but it extends across the entire space. And so there's a little bit of a question of which way you're gonna go. I mean, a company out there that's just focused on the bottom line can avoid paying a bunch of compensation by putting some models in place.
We are seeing that happening in real life. And yet, if you're going to say we don't want AI to replace workers, we want it to supplement them, then you're going to have to provide some form of incentive to make that happen. That incentive could take a number of different forms, but likely there would be some sort of regulation in terms of what can occur. And yet this administration is very focused on rolling back regulation for AI adoption. So there's a place where you have to find some sort of balance, and that isn't even recognized in this document.
Each of the points is standing alone. They're not recognizing that there's conflict within the document, and definitely not addressing how they might approach that conflict to yield real life outcomes that are better across the board.
Daniel:Yeah. Well, on that note, getting to the end here and summarizing: there are some questions that I think anyone who reads this document is left to wrestle with, and that we're not able to completely answer here. But for listeners, these are really valid questions that you can be thinking about in your own context. The first I would highlight is: how are you going to balance innovation with the safety element in your context, in your organization? Because ultimately, even right now, that's driven by your own choices to wrestle with it and its implications, not by anything coming down from regulation.
It seems like that won't be coming. So that innovation and safety element is something you're going to have to continue to balance and think about. I think other pieces of this are obviously related to AI dominance and geopolitical risk and all of those things; you might enjoy thinking about those. But the major thing that I'm left with here is that it's on us, practitioners and leaders in companies and organizations, to really think about this balance of innovation and safety.
Daniel:We can lead out with that in good ways that are consistent with this AI action plan and will carry through with benefit, whether it's this administration or another administration, in terms of best practices. So yeah, that's my main thought leaving this. Any closing thoughts from your end, Chris?
Chris:I subscribe to what you just said. I think that's very well said; that represents where I come from as well. I'm hoping that over time, regardless of the specific policies of a given administration, maybe we can arrive at some policies that are a little bit less political, something that everybody on both sides of the aisle could feel good about. I think it may take a little time to get there, but I would be remiss if I didn't express that hope for that eventuality.
Chris:So, yes, we'll see what comes next, and we'll see how this all plays out in real life with real budgets and real policies to come.
Daniel:Well said, Chris. And quoting from America's AI Action Plan to leave our listeners: simply put, we need to build, baby, build. So, you know, practitioners out there, we've got some work to do. Thanks for chatting today, Chris. It's been fun.
Chris:Good. Yep. Absolutely. Thanks a lot, Daniel.
Jerod:All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
Jerod:Check them out at predictionguard dot com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.