Confident, strategic AI leadership

Jerod:

Welcome to the Practical AI podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Jerod:

Now, onto the show.

Daniel:

Welcome to another episode of the Practical AI podcast. This is Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Chris:

I'm doing very well today, Daniel. How's it going?

Daniel:

Yeah. No complaints. It's been an interesting week of AI things with, I guess, OpenAI being open again. I'm sure we'll talk about that on this show at some point. But maybe there's a connection there to responsible practices.

Daniel:

But I'm really excited today because I've seen a lot of things popping up on our friend Demetrios' channel over at the MLOps community, from Allegra Guinan, who is cofounder and CTO at Lumiera. Welcome, Allegra. Great to have you here.

Allegra:

Thank you so much. Great to be here.

Daniel:

Yeah. Well, I kind of alluded to responsible AI things, which is certainly something I'm sure we'll get into. But it may be useful just to hear a little bit of your background and kind of how you got into what you're doing now, which I understand is advising business leaders around AI principles and responsible AI practices and strategies and that sort of thing. Correct me if I'm wrong, but yeah, would love to hear a little bit of that background and kind of how you arrived at doing what you're doing right now.

Allegra:

Yeah, for sure. I have sort of an unconventional background and path towards CTO and co-founder. It's definitely not something I ever thought I would land at, but happy I did. I got into the tech scene over ten years ago now. I was living in San Francisco.

Allegra:

I'm actually based in Lisbon now, similar cities in some ways, but I was in the heart of the tech scene by default. I had actually studied in the arts and it was a different path than I expected. I started working at a startup that was doing 3D visualization that was quite ahead of its time. I was working on the data platform team there, so I worked a lot with backend engineers and figuring out what our data architecture would look like, essentially, for this 3D visualization e-commerce company working in the interior design space. That was obviously very new to me as my first tech job, and I had started in more of an ops role and then moved quickly into product management.

Allegra:

I was building out internal tools, which is why I was working so closely with those teams and focusing then a lot on search optimization, figuring out which words would yield which results, couch, sofa, you get the same sort of thing in the end. Seems very basic now especially, but back then it was a lot of work. I got really excited working with data teams and it kicked off this interest and passion in a way for technical projects. Then I moved over into the FinTech space. I was working at Chime, which maybe you've seen them in the news recently.

Allegra:

I was also working on the data team there and really saw data at scale. I was part of the technical program management team, so I was managing technical portfolios with a lot of different programs, usually longer-term, multi-year data programs, and figuring out how to get real time data to those who needed it within the organization. That often meant ML engineers. That was my first touch point in the ML space, working a lot on fraud and security within the finance space, which was also extremely interesting.

Allegra:

I got further into this data engineering world and I built up this dev advocacy within me, and I really loved working with those teams. And so then I shifted again to Cloudflare, so I was working as a technical program manager there. It was a multinational or is a multinational organization, of course, so a bit bigger scale again than I was working on previously. I got to then get deeper into a lot of AI initiatives across the enterprise. That was my path.

Allegra:

I met my co-founder at Lumiera, Emma, who's the CEO, around four years ago on a beach in Lisbon. And we didn't realize we had this similar interest in the AI space and in technology. And then many years after we already became friends, we realized that. She had already had an organization around consultancy and business strategy. We came together and formed what is now Lumiera about a year and a half ago.

Allegra:

Really, it has grown over time, especially as the space is shifting so much with technology and AI. But as you mentioned, we are really trying to guide leaders in making more responsible decisions around this new era of technology, which is how we got the name: Lumi for light, era for future, so a brighter future. It's something that I'm really excited to be a part of and to be leading in. And I'm glad that my voice is sort of starting to come across the ecosystem in various places.

Daniel:

That's great. When you were talking, I was thinking back to a conversation Chris and I had, I don't know if it was a couple episodes ago, but we were talking about the services industry more broadly. And it does seem like there's such a need for good, responsible strategy and insight from the services standpoint around AI. But on the other end of that, AI is also eating up some of the services industry, whether it's marketing agencies or prototyping new projects and that sort of thing. So what is it like, I guess, being part of that services industry in this age of AI?

Daniel:

And how do you see the role of advisors and service providers shifting? What are your thoughts around that, in terms of the best value that services companies can provide during this time when certain areas of what they provided before are getting gobbled up by some of the things that AI is providing?

Allegra:

Yeah, there's a lot in there. So I'll start with sort of the challenge of being in this space, and that is that we're not offering a silver bullet. We're not offering a tech product that will solve all of your problems. We are not just handing this over to you as an answer. We're really developing leadership.

Allegra:

So our core product is an executive education program that spans eight weeks. And we are covering all of the challenges that we're seeing right now in AI that are not necessarily technical. It's a very human centered approach to technology. Of course, it's easier and people want something that's fast and they just want the answers. What should I build?

Allegra:

What should I buy? How do I do this? And then can I pay you to do it for me? We're really offering a counter narrative to that and instead asking you as a leader, you have already built yourself up, you're leading this organization, however big it may be. What can you do now to make the right choices instead of offloading that onto somebody else?

Allegra:

It's really testing folks and it's putting them up against a wall sometimes to be a better leader. That's a personal choice. Not everybody wants to invest time in being the best version of themselves, or maybe they're tapped out and they don't want to do this anymore. That could also be the case. But we're working with the ones that are trying to position themselves as really leaders in this age that we're in and to continue to lead and to bring their entire organization around.

Allegra:

Because I'm sure you're seeing there are a lot of failed projects, a lot of missed returns, expectations that were not met with AI in the past couple of years. Most of that is a human issue or a leadership issue. It's not a technical issue. The tech is there. We are missing this communication and translation layer, and that falls on leadership in our minds, which is what we're trying to address.

Chris:

It's really refreshing to hear that you're trying to change the narrative. Because Daniel and I are constantly bombarded, you know, with different companies out there telling us that they have the solution, it's AI, and it will solve everything. And that's so common. So hearing it being grounded in leadership, and I guess kind of going toward a question here, I would imagine it is very hard for leaders or aspiring leaders to process the never ending rapid change that's occurring in this space. Because, you know, over time, having seen decades of business and such, this is a much faster cadence than it has ever been.

Chris:

We've seen change over those times, but now, literally every week, there's a collection of new things to consider that are hitting you, wherever you're at. So how do you deal with that? I would imagine that your leaders are coming in with some level of anxiety, some level of uncertainty, and maybe even some fear of making decisions that are going to bite them on the rear end not very far down the road. How do you get through things like that? How do you get through that kind of fear and anxiety?

Allegra:

Yeah, I mean, you called out some of the main challenges that we're seeing. Through many conversations with many leaders across the board, we're hearing the same thing. So one is this noise exhaustion and information overload of trying to keep up. Another is fear of missing out and getting left behind. So you either end up in this position where you're sort of paralyzed and you're not sure what move to take.

Allegra:

And so you're getting left behind in a sense, or you fear that you are, or you're moving really quickly, you're making a lot of decisions, but they're not necessarily the right ones. They're not grounded in anything. It's just based off of this reaction to what you're seeing around you. That could be a very narrow echo chamber of information that you're being exposed to. Maybe you only check Twitter for your updates, or you only check one sort of newsletter to get your information and you're not creating this landscape of multiple sources of information to decide which move to make.

Allegra:

So yes, there is this stress, anxiety, and exhaustion that we're seeing, which is also what we're trying to address. We do that by not focusing on every latest model or whatever the latest hype is. We're focusing on what your challenges are as a leader in your organization. And that is not gonna change every single week, in theory, maybe it is for some folks. But if you're a mature organization and you have strategic goals, probably you know what they are or you should know.

Allegra:

And then you can start to address what the technology is that would help you achieve your goals related to those challenges. And so it doesn't matter if there are 10 new models that came out last week. You don't need to know what all of them are right now as a senior leader or as an executive. You need to understand what numbers you're trying to shift, what kind of transformation you're trying to move forward within your organization, within your workforce, and then you can find iteratively the best solution technically for that. We're trying to help people build that mindset and that scaffolding to understand the ecosystem.

Allegra:

We do have a section in our program. We have it split up into three foundations. The first is on confidence, which I can go back to in a second, but the second is around action. That's understanding risk, and it's understanding the industry and having an industry radar. It's about setting your vision for AI, not just for your organization, but for yourself: what is your personal stance as a leader?

Allegra:

What do you care about? Is it security, privacy, transparency? What are those principles that really resonate with you that you can then use to make your decisions? Once you have an understanding of how to evaluate risk, once you understand what's out there in a general sense of capabilities, not every single minute detail, but a general understanding, then you can start to think about what opportunities you have in front of you as far as use cases. We really do focus on that rather than trying to put a lot more information in terms of technicalities in front of you.

Allegra:

Then just going back to the confidence portion, so we have that as our first foundation because we want people to develop this mindset as leaders of, Okay, I already have a lot of strengths to move forward with. I've already built myself up and my organization up. Understanding and knowing every new model drop is not going to be the differentiator here. It's how I communicate with my workforce, how I keep people engaged, how I can manage everything that's transforming in front of us and keep people excited to be here and to be a part of what we're building, whatever that is. You need to have that resilience as a leader and that confidence in yourself and to be informed before you can start taking action, before it even makes sense to start reading all of the latest news, because it won't mean anything to you unless you have that personal understanding.

Sponsors:

Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're wrangling models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI powered app or launching a brand around the tools you built.

Sponsors:

Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce, from names like Mattel and Gymshark to founders just like you. With literally hundreds of ready to use templates, powerful built-in marketing tools, and AI that writes product descriptions and headlines for you, and even polishes your product photography, Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.

Sponsors:

Check us out at merch.changelog.com. That's our storefront, and it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.

Sponsors:

Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.

Daniel:

Yeah. Allegra, it's really encouraging to hear your perspective. I can second Chris there. Of course, when we're working with customers, when we're talking to people on the podcast, when we interact in our companies, all the time we're hearing, like, Oh, this new model came out and now OpenAI has open models, should I switch? And yeah, I think having this sort of internal peace that even if no one ever released another model, you have more than enough to be very transformative in your organization.

Daniel:

Don't worry about it. There's a long way to go there. Yeah. So I think that that's really interesting. I love also the perspective on leadership because one of the other things that I think we're seeing a little bit, and I would love to get your perspective on this, is kind of the executives in a company kind of dictating, like, we are now going to transform with AI, right?

Daniel:

And everyone in the company really not understanding what that means practically. Or leaders like, Oh, I'm a manager in an engineering team and I want all of my developers to be more efficient. So I dictate to them, All of you need to be using these AI tools. And really, no one ends up using them. Everybody kind of has the workflows that they're used to.

Daniel:

And so there's really not that kind of trickle down transformation that happens. Wondering about your perspective on that. Maybe even for me as a leader of a team in my company, I really want to understand that element better, because I want to both lead by example, but also understand, to your point, how to lead my team forward well in a way that is embracing the right AI technology and being transformed. But I know that I also can't just walk in one day and be like, Everybody use more AI, and then I go sit at my desk. Whether I'm using more AI or not, it really doesn't matter.

Allegra:

Yeah. I mean, this is one of the critical mistakes that we're seeing now. There have been some recent news stories coming out of leaders having to roll back their AI first organizational approach because they weren't expecting the backlash that they got. They assumed everybody was thinking about AI the same way as they were, which is not the case. Everybody's coming to this from a different level of literacy, from a different perspective.

Allegra:

Everybody has a past relationship with how they view this technology as it relates to complexity, if it's actually more useful or not. You can give people any tool you want. You can give them a stipend, like free money, go try whatever you want. But unless you help them understand why or what it would help them solve and really bring up that level of AI literacy across your organization, it won't make a difference because people won't understand what you're trying to do here. Unless you also communicate that clearly as a leader, you're going to come across as very prescriptive.

Allegra:

I think, especially in the engineering space, we all know that that's not ideal. We don't like when people are super prescriptive and just tell us what to do. We like to explore and to do research and to get there on our own. What's interesting about AI is that this is really coming from the bottom up in a lot of ways. Three times more employees within organizations are using AI than their leaders think.

Allegra:

That was from a recent report this year. So it's not that people are not ready or can't be engaged, but it's meeting them where they are and having a conversation that's very honest. So what are you using AI for? It shouldn't be stigmatized. If you want to encourage usage, then help people understand that it's okay to share where they're using AI and why they chose to do it that way.

Allegra:

Have open sessions where you're sharing with one another. Establish this AI champion culture and a fail forward culture as well. You have to invest time in experimentation and research and know that it's not all going to be perfect and to make people feel like that's okay and that they can try these things and share openly. Because if you don't do that, then it doesn't work. People won't use it if they're not part of it and if they're not involved.

Allegra:

They already are using it. That's the thing. Most people are using something to the side of their work, whether you put it in front of them purposefully or not. They're using ChatGPT or Claude, or they're coding with something on the side, or they have an AI driven IDE. Something is happening in the organization, whether or not you built a program or initiative around it.

Allegra:

So it's better to do that in a very open way and an honest way where everybody is involved. One of my favorite things that I worked on at Cloudflare was piloting different assistants, AI coding assistants. It was a large group of engineers from various teams and a lot of it was qualitative. This was my approach coming in, and I don't know if they liked it. I think they did because the results were good in the end, but it's a lot of just understanding what people like and having a lot of channels of communication for, Did you try this thing out?

Allegra:

How did it go? We're going to give you a fully supported space and time to invest in trying this. Then we're going to do it with something else. We're going to compare them very honestly, because we're not just going to choose a solution that seems the best on the market right now. We're going to choose the best solution for you, for this specific group of engineers that are part of this organization.

Allegra:

I think being quite humble as a leader in that sense too, that you don't know the best thing for everybody, you're at the top, you don't have your hands in every single initiative, you shouldn't anyways, that's my point of view. You have to trust that the people that you hired have an opinion that's worth hearing and then give them space to share that.

Chris:

You mentioned something just now that I was really wanting to dive into. You mentioned the word trust coming in there, and that's complicated. There's trust in multiple directions. There's not only the trust that the leader or leaders must have in the teams that they are overseeing, but there's also the trust of those being overseen, the ones doing the work, the engineers, trusting in the motives of their leaders. And that raises some interesting things, several of which you've mentioned.

Chris:

You know, as you pointed out, there's this reality that employees are using AI in areas where maybe they have even been told explicitly not to, or at least they're finding a place to kind of bring it in whether it's noticed or not. And you also have this top down, thou shalt go forward and use AI, with employees worried about, you know, what does this mean for my job, job security, is this AI eventually going to replace me? There's so much involved in this, probably more so than I've observed in the past. Before the AI wave we had the cloud computing wave, and before that we've had other waves, and there the trust questions were about the technology and privacy and such.

Chris:

But now there's this question of trust within your own organization, layered on top of those factors. How do you address that with leaders as you're getting into it, and how do you get them there if they don't recognize it upfront? How do you get them to recognize and take action on that kind of new dynamic that's now in the workplace?

Allegra:

Yeah, trust is so critical here at every level, at the technical level, at the human level, across the board. I think the way people notice this, unfortunately, is when things don't pan out or there's some sort of internal rebellion, as we're seeing with this backlash that I mentioned, or they're not seeing, again, the returns that they expected because people didn't adopt in the way that they anticipated. It's because there wasn't that relationship building and that trust, because the people that they're trying to involve, the workforce, were not a part of those decisions. If you have a vision as a leader and you want to be AI first across everything, but you're not communicating that, you didn't set any standards, you didn't publish any policies that help people understand what's okay and what's not okay, then that doesn't elicit trust in the environment. Again, it just comes back to something that feels very top down without involvement, which won't lead to any results.

Allegra:

Then there's trust in what you're actually building. This is something that I also try to advocate a lot for because right now engineers are the ones really pushing this forward in a lot of organizations. There are groups that are just trying things out. Again, whether or not it was dictated that it should be done, that's just sort of what's happening. And so what's being built might not necessarily have trust built in by default because that wasn't something that was thought of at first.

Allegra:

Maybe you're just trying to build something cool, you got access to something, a new model came out and you just want to throw something together. That can sometimes escalate to being used by the company, or some leader wants to see it in production, even though it wasn't really tested thoroughly. How can you expect somebody internally, if you're building for, let's say, another internal user, to trust what you've built if you don't communicate why it was done or how, and you can't really explain where the outputs are coming from, and there's no documentation around it? We've abandoned the product approach and thinking, and anything around documentation or thorough testing, when it comes to AI. It's just throwing things out there, and some things go into production and are used, and sometimes they work, but a lot of times they don't. And so it's hard to build trust when you're moving that way without a lot of intention and without a lot of clarity that you can express to other people.

Allegra:

Something that we see failing a lot: even when something works really well technically and it's perfectly executed, if you can't explain that to, say, somebody in risk or compliance, it's not going to get very far, and they won't be able to roll it out and it won't be trusted, even if you, the individual that built it, feel like it's good. So again, it's about this transparency and open communication as you're going, and why you can't really abandon documentation, and you can't abandon the reasons that you built things, or having observability or logging. It's not enough to just make something that seems really cool. You have to actually back it up and be able to explain it to the others around you.

Daniel:

Yeah, some of what you said there is definitely applicable internally and externally, but certainly a lot internally in terms of the documentation, how reliable something is, the testing, all of that sort of thing. Part of what I was thinking in my mind while you were talking is, internally here, we like to talk about certain ways in which we would like to build things. And one of those things that we talk about is we would like to build things that kind of restore trust in human institutions rather than further erode that via AI and automation. And I'm wondering from an external standpoint, I mean, one side of this is internal: how you integrate AI features, test them, deploy them, etcetera. It kind of gets to another level when, let's say, you're releasing your voice assistant publicly to the world, or you're rolling this out to your external customers and you say, Hey, this is our new AI feature.

Daniel:

And that could go a lot of different ways, some of which, like I say, could erode trust further with your customers. Hopefully it's not already low, but maybe it could erode some of that trust that you've built up over time, but maybe it doesn't have to be that way. What have you found to be some of those key principles that leaders could keep in mind, especially as they're releasing things to their users or their customers or to the public, that can help the public or their customers understand that this has trust built in, I think is how you phrased it?

Allegra:

Yeah. What's interesting is that the gap and disparity between what's going on internally with AI and what's going on externally is so wide. The experiences are so different from what people are building for their own teams and then what they end up putting in production. Maybe we can come back to that, but it's just something I see very obviously in the space. But I think one thing that's super important here is the user research.

Allegra:

Right now, everybody is putting AI into their products everywhere. If you have a bunch of vendors that you work with as an enterprise, you'll see now that all of them are offering AI and they're all offering the same AI features. Maybe you asked for them and maybe you didn't. And so I think understanding still your user base, they might not need something to change or the thing that they do want to change, you might not need to use AI for it in the way that you think. So really asking and understanding your users before you start deploying that kind of experience.

Allegra:

Because then if they didn't ask for it and they didn't actually need it, then why would they trust it and why would they start to be happy that it's out there? Unless it makes the experience so much better, but a lot of times it doesn't because this is still quite nascent for a lot of organizations. So that's one thing. And then the other again is the transparency. As a user, for example, you can go into financial services.

Allegra:

I think that's a really important industry and we think about the financial services a lot in terms of risk. But let's say that I am now a user of some sort of FinTech app or something around my finances and you've put AI in there and I see something in front of me that I don't understand, and I ask, How did you get to that decision? Or even if I'm very technical, What model did you use to get here? And you don't have an answer for me, that will erode trust. That's something you need to think about, especially as people are becoming more literate, but also sometimes at a shallow level.

Allegra:

They can ask a question. They might not fully understand what they're asking, but they might ask you something. And you need to be able to respond to that. Again, the documentation, do you have system cards in place? Do you understand what your guardrails are?

Allegra:

Are they documented somewhere? Do you have tracking? Do you have system prompt versioning? Can you actually back up what you've done so that when somebody does ask you a question and they're looking for that confidence in you, they're looking for you to bring back the trust, and it's an opportunity for you. If you don't have an answer in that moment, you will erode trust with your external users.
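As a rough sketch of the traceability Allegra is describing, here is what minimal system prompt versioning and interaction logging could look like in Python. The record fields, the names PromptVersion and log_interaction, and all of the example values are hypothetical, not any particular product's format; the point is only that every output can be tied back to the exact prompt version, model, and guardrail checks that produced it, so there is something concrete to point to when a customer or compliance team asks how an answer came about.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class PromptVersion:
    """A versioned system prompt, so outputs can be traced to the exact instructions in use."""
    version: str
    text: str

    @property
    def digest(self) -> str:
        # Content hash makes it easy to verify which prompt text actually shipped.
        return hashlib.sha256(self.text.encode()).hexdigest()[:12]

@dataclass
class InteractionRecord:
    """One logged interaction: what was asked, what came back, and which checks ran."""
    prompt_version: str
    prompt_digest: str
    model: str
    user_input: str
    output: str
    guardrails_passed: list = field(default_factory=list)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_interaction(record: InteractionRecord, path: str = "interactions.jsonl") -> None:
    # Append-only JSONL log; in practice this would feed an observability or audit store.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Usage: tie a response to the prompt version and guardrail results that produced it.
prompt = PromptVersion(version="2025-08-01.3", text="You are a cautious financial assistant...")
log_interaction(InteractionRecord(
    prompt_version=prompt.version,
    prompt_digest=prompt.digest,
    model="example-model",  # hypothetical model name
    user_input="Why was my transfer flagged?",
    output="The transfer matched our unusual-activity rules...",
    guardrails_passed=["pii_filter", "toxicity_check"],
))
```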

Allegra:

So I would say for leaders thinking about that, to be able to ask those questions first and make sure that they have everything in order before they start deploying. And then when you do have something that is driven by AI, being explicit about it. If people are uncomfortable, maybe it's because they don't know enough, and it's your job, your responsibility as a leader in that space, to maybe educate them within your product and the experience around what you're offering. So if they're put off by it, understanding why, understanding your users, that has not changed. But again, somehow it's gotten lost, where suddenly we don't really care what users ask for or what their feedback is.

Allegra:

I think we really need to go back to that.

Chris:

So I'm wondering, as we've been talking a lot about adoption and the trust issues that go with that, one of the things that I've been thinking about, and I'm curious how you're approaching it, is how you position the notion of responsible AI to the different organizations that you're working with as you're going through this educational process. Because there's not a single golden path down that road. There are a number of organizations that have weighed in with different types of policy guidance and such on that, everything from multiple government organizations to non government organizations and nonprofits. And so as a company is looking to try to make their AI strategy work, and they're starting to address these various things we've been talking about, how do you guide them into framing that whole effort?

Chris:

Because it's a little bit different. There's not just a go do this and they're done. How do you approach that?

Allegra:

Yeah. I mean, first, there's no standardized definition of responsible AI. There are sets of principles that a lot of people agree on, and they overlap when you see some of these policies that have been put out, but there isn't a single one across the industry that everybody has aligned on. That's one thing. The second is that a lot of leaders don't care about being responsible.

Allegra:

That's just the honest truth. They care about their bottom line and they care about financials and they want to see numbers move and that's it. And so a lot of the framing does come back to that. And luckily, being more transparent, being more accountable, having robust systems, all of these lead to better results and more money at the end and better trust with your customers. So luckily it lines up that way, but it is a different story to tell.

Allegra:

I think leaders need to start understanding that having these answers for your customers when they come asking, or being compliant and not facing major fines when you are not compliant, especially here in the EU, for example, and being able to lay it out in a very financial way that makes sense, is the way to go in here. And then again, we're of course trying to shift the mindset of leaders to understand what their own principles are and what they actually care about. And a lot of times, organizations already have these. They might already be security first. They might already care about privacy.

Allegra:

So you can use that lens. So if you already care about privacy, are you thinking about access management and secured access when you're building your AI systems? A lot of people are not right now, that's a gap that we're seeing where they have all of these governance structures in place, but then they built an AI system that completely erodes all of that and finds its way around, and they didn't think about that before. And so you can use their own values and their own framing of how they're running their business and tie it back to how to be more responsible in practice. And we are seeing that this is changing a bit in terms of how companies are presenting themselves.

Allegra:

Going back to financial services, for example, I was looking through the top banks that are leading in the AI space right now. What we've seen mostly in the last couple of years is a shift in what they're presenting externally in terms of explainability and their responsible practices and their leadership. They have a lot more people talking externally about how they're handling AI, sort of getting more insights into how they're approaching. And so I do think that the tides are shifting because people are realizing that you do need to at least come off as if you care about it a bit, and maybe along the way you will actually start to care and make some differences. So that's how I think about that.

Allegra:

But for myself, I also approach this from the engineering side. So because of my background and because I've built my entire career alongside engineers, a lot of what you want to do as a good engineer to build good products aligns with these responsible practices as well. When you have testing in place, when you do have security in place and observability and you understand what you've built and you do, again, have these options for versioning and you have your MLOps figured out, you will have a better outcome when you're building an AI system. And all of those things also benefit on the responsible side that then you can have other teams looking into to understand how you got to that point. And again, you're bringing in the multidisciplinary trust that is so necessary for this.

Daniel:

As you're discussing things with leaders around responsible practices, how they should lead out with AI strategy, that sort of thing, one of the questions that's come up in my mind as you've been talking is: what is the appropriate level of literacy around these subjects, on the technical side, that a leader does want to have? Because I get into so many discussions, because my company is kind of intersecting with the world of security, talking to CSOs or CIOs or whatever. And they'll say things like, Yeah, we have our own model. It's running internally.

Daniel:

None of our data leaks. And you sort of probe into that a little bit and you're like, No, actually what you just have is an API key to a model endpoint that's not running in your infrastructure and all of your data is living at rest in someone else's infrastructure. There's just like such a wide gap between what they apparently think they have and what they actually have. And I understand us as AI people have probably not helped that because we've sort of obfuscated some of that terminology and made things maybe seem like they kind of are what they aren't. And so I kind of feel sympathy for a lot of people that we have made it extra hard for people to gain this literacy maybe.
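To make that distinction concrete, here is a minimal sketch, assuming an OpenAI-compatible chat completions endpoint; the URLs, environment variable names, and model name are placeholders. The calling code looks identical either way, which is part of why the confusion is so easy: in the first case the prompt, and any data embedded in it, leaves your infrastructure and sits at rest with a third party, while in the second the request stays inside your own network, pointed at inference you actually run.

```python
import os
import requests

def chat(base_url: str, api_key: str, prompt: str) -> str:
    """Send a prompt to an OpenAI-compatible chat completions endpoint and return the reply."""
    resp = requests.post(
        f"{base_url}/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "example-model",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# "We have our own model": often this is really a hosted endpoint, so the prompt
# (and any customer data in it) is processed and stored outside your infrastructure.
hosted_reply = chat(
    "https://api.vendor.example",                      # third-party SaaS endpoint
    os.environ.get("VENDOR_API_KEY", "demo-key"),
    "Summarize this contract...",
)

# Genuinely self-hosted: same code shape, but the request never leaves your own
# network, because the endpoint is inference you operate (e.g. in your VPC).
local_reply = chat(
    "http://models.internal.example:8080",             # internal inference service
    os.environ.get("INTERNAL_API_KEY", "demo-key"),
    "Summarize this contract...",
)
```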

Daniel:

But around things like model hosting, open versus closed, fine-tuning, all of these things are very confusing for people. What is your recommendation as you're going through this material with leaders? If there are leaders listening in our audience, where should they be expected to get to, technical literacy wise, to be an effective leader around AI things?

Allegra:

Yeah. This is definitely a challenge even for me because I'm in this every single day and I listen to these terms all day long and I enjoy hearing about them. Yes. I'm listening to these kinds of podcasts. So to distill down what is actually critical for people to understand is even hard for me because I'm like, Oh, I want you to know everything that I know.

Allegra:

That's obviously not possible. I think an easy way to approach this is what you just called out, as well as what's happening within the walls of your organization. That's a really good place to start. If you do have a system, you have an API call to some model, open or closed or whatever, who built that? Do you understand what's going on in your own infrastructure?

Allegra:

Who is designing this? Do you have an architect? Do you have a technical leader? Do you have a lead engineer? Do you have a CTO?

Allegra:

Somebody should be responsible for understanding what has been built and why. If you don't understand that and you don't have a person and you don't have a relationship with that person, that's your first problem. So I think starting there and knowing, Okay, what do we actually have going on here that's in production live right now? Let's walk through what those terms are so I can understand now how to navigate this space right now in front of me, rather than every single potential capability out there that other organizations are using, because that might not be super helpful for you right now. So I would start with that.

Allegra:

Another one is abstracting it and focusing more on what you're trying to solve. Again, like what we talked about. So it doesn't really matter if a 100 new things come out, as you mentioned, you probably already have the capabilities out there to solve what you want to. So asking good questions and understanding like, Okay, I care about security. That means I don't want this to happen.

Allegra:

Is that happening? When I make this kind of call, is the data leaving? You don't need to understand everything in that moment, but you do need to understand what kinds of questions to ask, which is also why the principles are so helpful. Because if you care about things like transparency or you care about supporting open source, we can go into that as well. Like you mentioned, then you can focus on a specific set of terms or concepts to really understand.

Allegra:

But even before that, people don't understand what AI is. They don't understand what traditional ML is. A lot of organizations are still running legacy ML systems and modern AI, and then they started adding Gen AI, and they don't have an understanding of what those concepts even are. Agentic is thrown around a lot, and that definition varies a lot too. So I think there are some terms where, if you are hearing them out there, try to understand them a bit.

Allegra:

But starting with what's actually in front of you and impacting your business, I would say, is the most critical.

Chris:

I'd like to kind of follow up on that a little bit. I think that's great guidance. I've been thinking about it kind of in my own space as you've been talking through it. And, you know, one of the challenges, even if the person who's receiving the guidance is kinda looking within their own walls, is that we're still moving so fast. Like, agentic is the word of 2025. I mean, it was building up in the latter part of last year.

Chris:

And it's full on now, and that's moving so fast right now. How do you get people to focus on that kind of useful thing, even if they're gonna stay within their own walls and therefore kind of limit the scope of what they're addressing, to your point earlier? How do you get them to take it from that point of limited scope to the point of, like, finding points of productivity that are realistic and achievable within reasonable levels of time and resources? Because I see people struggling with that all the time, even within limited scopes, when figuring out how to make those choices and make it real.

Chris:

You know, I think I've looked at Daniel as being really, really good at that. So as you're out there educating the world, how do you get people, when you don't have a Daniel at a company, to be able to focus on those different things, to focus their resources on?

Allegra:

Yeah. So again, always back to business challenges. You should be putting your resources where you actually have areas that you want to make a difference in, and/or investing in research and exploration. And in that sense, you don't need as many barriers and you can have a team. The ones that are leading right now have already had the best talent in research for years.

Allegra:

They're not just starting now and trying to build things. They've invested time in exploration and in failure and in learning. I think that's really critical. So you have that space. So again, it's not a rush decision of suddenly I need to understand this thing fully right now because we've deployed it already.

Allegra:

It's like we've made space and time to explore a concept fully and what it looks like when you build it out in practice, not just theoretically. So I think that's one thing. Another is the user experience. I was just walking through a wireframe with somebody who doesn't have a technical background at all, and they were asked to build out an agentic workflow, a multi-agent experience, in the financial services space. And they were having a really hard time with this, so they called me and were asking about it.

Allegra:

And as we walked through the experience, I was like, You can actually see very clearly what you do and don't want. And as you reach each point of questioning or experience, you can address the concept that's being used in that moment. And it becomes a lot easier when you break it up that way and you can relate it very tangibly to what it's doing in practice. And what we're seeing actually with agents, when you build out the experience, people are like, Oh no, I don't want it to do that. When it has this, then I want it to do this specific thing.

Allegra:

It's like, Okay, what you're describing is automation and deterministic outcomes. That is not a fully autonomous multi agent experience that you had in mind. And so you can actually come to that quite quickly and people can understand it a lot more simply when they see it in front of them in action. When you just have a million words in front of you and you have no way to know what that actually looks like when it's deployed and in a product, then it's not gonna make sense to you. And I don't know if that's an effective way to approach it.
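A small illustration of the distinction being drawn here, with hypothetical function names and a stubbed stand-in for the model: in the deterministic workflow the steps and their order are fixed in code, so the outcome is predictable and easy to audit, while in the agentic version a model chooses the next step at each turn, which is the kind of autonomy many stakeholders realize they don't actually want once they see it laid out.

```python
from typing import Callable, Dict

# --- Stubbed pieces (placeholders; a real system would call actual services/models) ---
def fetch_transactions(account: str) -> list:
    return [{"amount": 120.0, "flagged": False}]

def score_risk(transactions: list) -> float:
    return 0.1 * len(transactions)

def draft_summary(account: str, risk: float) -> str:
    return f"Account {account}: risk score {risk:.2f}"

def model_choose_action(state: dict, tools: Dict[str, Callable]) -> str:
    # Stand-in for an LLM deciding what to do next; a real agent would call a model here.
    if "transactions" not in state:
        return "fetch_transactions"
    if "risk" not in state:
        return "score_risk"
    return "finish"

# --- Deterministic workflow: fixed steps, fixed order, predictable outcome ---
def deterministic_review(account: str) -> str:
    txns = fetch_transactions(account)
    risk = score_risk(txns)
    return draft_summary(account, risk)

# --- Agentic loop: the model picks the next step, so the path (and failure modes) vary ---
def agentic_review(account: str, max_steps: int = 5) -> str:
    state: dict = {"account": account}
    tools = {"fetch_transactions": fetch_transactions, "score_risk": score_risk}
    for _ in range(max_steps):
        action = model_choose_action(state, tools)
        if action == "finish":
            return draft_summary(account, state["risk"])
        if action == "fetch_transactions":
            state["transactions"] = fetch_transactions(account)
        elif action == "score_risk":
            state["risk"] = score_risk(state["transactions"])
    return "Stopped: step budget exhausted"

print(deterministic_review("acct-42"))
print(agentic_review("acct-42"))
```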

Allegra:

So I think having this user experience thinking, again, like the product mindset when you're going through this will help a lot in grasping the technical terms.

Daniel:

Yeah. I think every time I've actually asked people about what they want with their AI agent, it turns out what they want is either a RAG chatbot or an automated workflow. I think that's basically every time. I'm sure there's other cases out there. But, yeah, Allegra, I feel like we will definitely need to have a follow-up to the show to get more of your insights and continued insights from Lumiera.

Daniel:

As we kind of draw to a close here, I do want to give you the chance to kind of look forward a little bit, towards the future. We've obviously talked about a lot of challenges in terms of leadership and trust and implementing responsible AI practically. What, from your perspective, gets you excited about the future of how companies are adopting this technology or the possibilities of how they might adopt this technology?

Allegra:

Yeah. I think a really good one just happens to be our company vision, which is a future equipped for humanity. We had started with humanity equipped for the future, but we want to be human centered here and actually shape technology for what we care about and what we actually want to maintain about the human experience rather than having technology shape the ecosystem and our surroundings without us involved. That's really where I see the future going, is all of us being a lot more actively involved in the decisions we're making around AI, whether you're a leader or not. Then the other is that responsible AI just becomes what AI is.

Allegra:

It's the standard. You're not thinking about it as an add on at the end or something that feels like a hindrance or a barrier. It just is the standard when you're building. That's the future that I hope for.

Daniel:

That's awesome. That's a great way to end. Yeah, like I say, we'll definitely have to have you back because I feel like we could have talked for a few more hours. But thank you for the work that you're doing. Thank you for the way that you're helping leaders in this space.

Daniel:

And we'll definitely provide links in the show notes to what Allegra's working on and some other talks. So make sure you check that out. And, yeah, talk to you again soon, Allegra. It was great.

Allegra:

Thank you so much.

Jerod:

All right. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Jerod:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the Beats, and to you for listening. That's all for now, but you'll hear from us again next week.
