Cracking the code of failed AI pilots
Welcome to the Practical AI podcast, where we break down the real-world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Bluesky to stay up to date with episode drops, behind-the-scenes content, and AI insights. You can learn more at practicalai.fm.
Jerod:Now, onto the show.
Daniel:Welcome to another episode of the Practical AI Podcast. In this Fully Connected episode, it's just Chris and I, no guests, and we'll spend some time digging into some of the things that have been released and talked about in AI news and trends, and hopefully spend some time helping you level up your machine learning and AI game. I'm Daniel Whitenack. I am CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin.
Daniel:How you doing, Chris?
Chris:Doing great today, Daniel. So much happening in the world of AI. Holy cow. Yes.
Daniel:Yes. Lots to catch up on. There have been a number of interesting developments in the news that maybe people have heard about, and it might be good to just kind of distill down and synthesize a little bit in terms of what they mean, what they signal, and how people can think about how certain things are advancing. So, yeah, looking forward to this discussion. Anything that has been particularly standing out to you, that you've been hearing about, Chris?
Chris:Sure. I think the thing that has really been noticeable in recent weeks is that so many people, both in the AI world and outside of it, but impacted like everybody is, are talking about jobs. And we've talked about the impact of AI on jobs many times on the show, but people are really feeling it at this point. The job market is pretty tight. I've talked to lots of people out there looking, whether they're currently employed or whether they're out of school.
Chris:And particularly, there's a lot of people in technology coming out of university that are really struggling right now. And I believe there was a report recently from MIT that highlighted that.
Daniel:Yeah. It's interesting that we spent a good number of years on this podcast occasionally talking about some of the wider impacts of this technology within a certain company or industry. Now this is kind of a global, across-all-sectors thing. You see things being hit hard, especially in maybe sales and marketing or junior developer types of cases. If I remember right, Chris, with the MIT report that you're talking about, we can link to some of the news articles about it.
Daniel:I don't think I've actually seen the actual MIT report yet, so I guess our listeners can keep that in mind. But it's one of the things that I've actually heard on multiple calls. Last week I was at an AI corporate type event where there were a bunch of corporate leaders, and they were certainly talking about this. One of the things talked about in the MIT report was that ninety five percent of AI pilots fail. And that, I think, has generally spooked a lot of business leaders, investors, lots of different people across industry. What's your thought on that?
Chris:I think it's a weird juxtaposition right now that we're in, in that that's accurate, that you're having a tremendous number of GenAI efforts in particular fail. But at the same time, companies are holding back on hiring junior devs out of school. And so you have this weird mesh of people going hardcore on trying vibe coding and things like that, but with very limited success, struggling to get it adopted. And at the same time, they're making bets on the future by not going ahead and bringing in the junior level developers that they always have, which kind of leads to an interesting what-if situation in the months ahead. Junior developers, historically, have eventually turned into senior developers.
Chris:And right now, companies are betting on those senior developers with these new AI capabilities over the last couple of years to make up for that deficit, hoping to save money. But if you're failing 95% of the time, it puts things into an interesting place.
Daniel:Yeah. I'm just reading one of these articles about the AI pilots. And one of the things that it highlights is that, from the report's perspective, it's not that the AI models were incapable of doing the things that people wanted to prove out in the AI pilots, but that there was a major learning gap in terms of people understanding how an AI tool or workflow or process could actually be designed to operate well. I find this very prevalent across all of the conversations that I have, especially in workshops and that sort of thing: there's kind of this disconnect, and people are used to using these general purpose models, maybe personally. And there's this concept that the way I implement my business process with an AI system is similar to the way I prompt an AI model or a ChatGPT-type interface to summarize an email for me.
Daniel:And that is always going to create some pain. Number one, because these AI models only sometimes follow instructions. But number two, your business processes are complicated, right? They rely on data that is only in your company and probably has never been seen by an AI model, unless you accidentally leaked it.
Daniel:Jargon, all of those sorts of things. And beyond that, these business processes that people are trying to automate or create a tool around, often the best thing for that is not a general chat interface, right? It's not like you want to create a chat interface for everything you wanna do in your business. No, actually, in one particular case it may be that you wanna drop a document in this SharePoint folder. And when it's dropped in the SharePoint folder, it triggers this process that takes that document, extracts this information, compares it to information in your database, and then creates an email and sends it out to the proper people. These sorts of processes are not general chat interfaces.
Daniel:So people are coming at it like, oh, I know how to prompt these models to do some things. And so they try to build or prompt these models in a certain way without, and I don't know why there's such a disconnect, but without the understanding that really what they need is maybe a custom tool or automation. They need data integration, data augmentation to these models. They don't just need a model plus a prompt. And I think that's a pitfall that I see, unfortunately, very often.
Daniel:So it's not super surprising for me to see this kind of highlighted in the MIT report.
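For listeners following along in the transcript, here's a minimal sketch of the kind of event-driven document workflow Daniel describes, not any particular product's API. The LLM client, database lookup, and email sender are hypothetical stand-ins you'd wire up to your own systems, and the extracted fields are just example names.

```python
# A minimal sketch of the document-triggered workflow described above.
# The llm, db_lookup, and send_email callables are hypothetical stand-ins
# for whatever model client, database, and mail system your company uses.

import json

def extract_fields(document_text, llm):
    """Ask a language model to pull structured fields out of the document."""
    prompt = (
        "Extract the vendor name, invoice number, and total amount "
        "from the document below. Respond with JSON only.\n\n" + document_text
    )
    return json.loads(llm(prompt))

def handle_document_drop(document_text, llm, db_lookup, send_email):
    """Runs when a document lands in the watched folder (e.g. SharePoint)."""
    fields = extract_fields(document_text, llm)
    record = db_lookup(fields["invoice_number"])  # compare against internal data
    if record is None or record["total"] != fields["total_amount"]:
        send_email(
            to="ap-team@example.com",
            subject=f"Mismatch on invoice {fields['invoice_number']}",
            body=f"Extracted: {fields}\nOn file: {record}",
        )
```

The point of the sketch is that the interface is a folder drop and an email, not a chat box; the model is one step inside a larger automation.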
Chris:Yeah, I agree. I think, to your point, these chat interfaces, it's kind of becoming the universal hammer in most people's heads, and everything is starting to look like a nail for that hammer to hit, and they are neglecting a toolbox full of tools that give them the right software components for getting their workflows put together the way they want. And so yeah, I think it's a certain amount of over-expectation that is then exacerbated by choosing the wrong software approach, or an incomplete software approach, to try to get the job done, that business workflow done. So that's certainly my sense of it. I think a lot of these are driven top down by excited executives that haven't taken the time to really understand how to optimally use these tools.
Daniel:Yeah. Yeah. And another thing that was kind of interesting in the study is that just prompting the models generally failed. There's a knowledge gap on how to integrate data and how to build custom solutions in a way that could succeed in a POC sort of thing. But I actually don't agree with the premise that this should spook investors away from AI companies, maybe some AI companies, but not the ones that are verticalized in particular.
Daniel:I lead an AI startup, so I may be biased, but our AI startup isn't one of these verticalized application layer things, so I feel like I can maybe speak objectively with respect to those. It's really these AI companies that are in whatever it is, healthcare or the public sector or education or finance or that sort of thing. They are putting in the work, I think, at least many of them are, to understand business processes and build robust AI workflows and tools that fit certain business use cases, that sort of thing. And I think one of the stats in the report was that a lot of the trials of these sorts of tools did actually succeed a majority of the time.
Daniel:I guess in summary, what I'm trying to say is there's this major gap: on one side, you have this idea that all you need is access to a model. On the other side, there are these pre-built, purpose-built AI systems for particular verticals that understand the business processes. There's a whole gap in the middle, because many companies, especially in the enterprise setting, will have to customize something. I don't think in the end they will always be able to use a tool off the shelf that works completely for them. If you look at any enterprise software, it's always customized, right?
Daniel:At some level, whether it's manufacturing software, ERP software, CRM, whatever, it's always customized. I think that's what this is maybe highlighting: companies not understanding the gap between having a model and having a verticalized solution for their company. Closing that gap actually does require significant understanding of how to architect and build these systems, and unfortunately there's a skills gap and a learning gap in terms of people that actually have that knowledge.
Chris:I think you're drawing out a great point there. And I know we've talked a little bit about this in previous shows, where the model constitutes a component in a larger software architecture. And we know, as you just pointed out, the expertise of those business workflows being integrated into vertical software stacks, where it is designed to solve the problems and not just be a chat box, is really important to getting to a good solution that works for your business. I think this is where one of those challenges we're seeing in a lot of folks out there in the business world comes in: kind of forgetting that core tenet and leaping straight to "the model will run my business from this point forward" without that supporting infrastructure. So maybe there are some hard lessons to be learned in the days ahead for some companies, but hopefully that will happen.
Sponsors:Well, friends, when you're building and shipping AI products at scale, there's one constant: complexity. Yes. You're wrangling the models, data pipelines, deployment infrastructure, and then someone says, let's turn this into a business. Cue the chaos. That's where Shopify steps in, whether you're spinning up a storefront for your AI-powered app or launching a brand around the tools you built.
Sponsors:Shopify is the commerce platform trusted by millions of businesses and 10% of all US ecommerce, from names like Mattel and Gymshark to founders just like you. With literally hundreds of ready-to-use templates, powerful built-in marketing tools, and AI that writes product descriptions and headlines for you, even polishes your product photography, Shopify doesn't just get you selling, it makes you look good doing it. And we love it. We use it here at Changelog.
Sponsors:Check us out at merch.changelog.com. That's our storefront, and it handles the heavy lifting too. Payments, inventory, returns, shipping, even global logistics. It's like having an ops team built into your stack to help you sell. So if you're ready to sell, you are ready for Shopify.
Sponsors:Sign up now for your $1 per month trial and start selling today at shopify.com/practicalai. Again, that is shopify.com/practicalai.
Daniel:Well, Chris, there's a couple things that I've been following with respect to the model builders, but it does actually connect maybe to this MIT report as well, because one of the other common things that I see, a way of thinking that companies have when they approach AI transformation, is they come at the problem of AI adoption with the question of: what model are we going to use? Which I think is completely the wrong question to be asking, for a number of reasons. First of all, if you're coming to AI for the first time with your company, and you want to transform your company with AI and build knowledge assistants and automations and adopt other tools and build verticalized solutions, the model actually will shift over time a lot. So that's, I think, number one. There's no one on the market that, at least right now, is the clear choice; there are certainly many providers of models. There's a lot of good models.
Daniel:No one knows who will have the edge on models in the future. And I think what we're actually seeing is that the model side is fairly commoditized. You can get a model from anywhere. The second reason I think that's the wrong question to be asking is that if you're trying to build an AI solution within your company, again, think about that SharePoint thing that I talked about: I'm gonna process this document from SharePoint and extract these things and send it to an email.
Daniel:You actually don't need a model. You need a set of models, and potentially other things in the periphery around those models. So you likely need a document structure processing model, like a Docling. You need a language model to process pieces of that. You maybe need embedding models to embed some of that text or do retrieval.
Daniel:You need a re-rank model, because you've gotta re-rank your results after doing retrieval. You need safeguard models, because you want to be responsible and check the inputs and outputs of your model. So these things add up, even for that simple use case of processing this document through SharePoint and out the other end. If you're coming to one of these proof of concept projects like we've been talking about, and you're thinking, what is the model I'm going to use for this? And you decide, okay, the model I'm gonna use for this is a GPT model or a Llama model or whatever. Well, you're already setting yourself up for failure.
Daniel:Because what you don't need is a model. What you need is a set of models. You sort of need an AI platform. You need an AI system that gives you access to multiple different types of functionality, right? And so I think that perspective plays into this POCs-failing thing.
Daniel:And I have more thoughts about that related to the model builders, but do you think I'm off in that? Or how would you correct me?
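To put Daniel's "set of models, not a model" point in code, here's a rough sketch of such a pipeline. The doc_parser, embedder, reranker, llm, and safeguard clients are hypothetical placeholders for whatever your platform exposes, not a specific product's API.

```python
# A sketch of a multi-model pipeline: each step is a different model behind
# one workflow. Every client object here is a hypothetical placeholder.

def answer_from_document(path, question, models):
    # 1. Document structure model (a Docling-style parser) turns the file
    #    into structured pages and chunks.
    pages = models.doc_parser.parse(path)
    chunks = [c for page in pages for c in page.chunks]

    # 2. Embedding model retrieves candidates; a re-rank model narrows them.
    candidates = models.embedder.top_k(question, chunks, k=20)
    context = models.reranker.top_k(question, candidates, k=5)

    # 3. Safeguard model checks the input...
    if not models.safeguard.check_input(question):
        return "Request blocked by input policy."

    # 4. Language model generates the answer from retrieved context.
    answer = models.llm.generate(question=question, context=context)

    # 5. ...and the output.
    if not models.safeguard.check_output(answer):
        return "Response blocked by output policy."
    return answer
```

Even this simple flow touches five different model types, which is the gap between "we have a model" and "we have an AI system."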
Chris:No, I have no correction. We've talked about this as well a bunch of times. I think we've had a lot of really in-depth conversations with people on the show in the past about the need for multiple models to tackle different concerns within your larger business focus, and the software architecture that supports that. And as you look forward, we're seeing agentic AI, we're seeing physical AI developing more and more. And all of those require a number of different models to do very different, distinct things that you can see in those spaces. And so there seems to be this hang-up in the public about the model: which model do I pick? And as you pointed out, that's completely the wrong thing.
Chris:It's: what does your model architecture look like as related to your business workflows and the job that you need doing, and how do you do that? As you were describing some of those potential models that one might have in a flow earlier, I was listening to your examples and thinking, wow, that sounds a lot like software architecture. It's just that each component is invested with one or more models now. But there are still many components that make up a full business workflow. And so I guess, maybe because we talk about it fairly regularly on the show here, it seems quite obvious to me that that's the case.
Chris:But clearly, it's not, if you look at the business decisions that are being made out there. There is clearly a need at this point. You may have a senior developer type, by whatever title you're applying, working and kind of knowing software dev at some level, vibe coding and putting their knowledge of architecture together for a solution. But if you're not going to bring in junior devs, then you're making a gamble that you're not going to need that at some point. And yet what we're seeing, per that report, is a ninety five percent failure rate on using these tools at the current point in time.
Chris:We had a recent episode on risk management. And from a risk management perspective, I think that there are a lot of very risky decisions being made by executives, largely in ignorance, I think, of understanding how models and software architecture fit together, to your point. So no, I'm in violent agreement with what you were saying a moment ago.
Daniel:Yeah. This idea that you just bring in a model also produces a little bit of problematic behavior around adoption of models, particularly in private, secure environments. And I know this one from experience, where you think, well, which model am I going to use? And then you think, well, there's a couple categories of models, right? There's closed models and there's open models, the closed models being the GPTs or Anthropic models, etcetera.
Daniel:The open models being Llama or DeepSeek or Qwen or whatever. And you have smart people in your company, and whoever, Frank over in infrastructure, is like, yeah, I can spin up a Llama model in our infrastructure. And there's innumerable ways to do that at this point. You can use vLLM or you can use Ollama or whatever it is. Right?
Daniel:I can spin up one on my laptop. And so you spin up the model, and then you're like, all right, well, let's build our POC against Frank's model that he's deployed internally, because we now know how to do that. But again, it's not that Frank did a poor job, or that the deployment is bad, or the tools are bad; something like vLLM is very powerful. It's that it's not a proper comparison, because what you've deployed is a single model, not a set of AI functionalities to build rich AI applications. You now have a private model, which only does what that one particular model does. It doesn't give you that rich set of AI functionalities. And so it's not really a knock against open models.
Daniel:What it is an indication of is that you maybe shouldn't roll your own AI platform. There's a lot of things that go into that, and there's various ways to approach it. But I think that misunderstanding of what model do I need also impacts the perception of these open source models, because most of the time when you deploy that open source model, you're only getting a single model endpoint versus a productized AI platform.
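For reference, this is roughly the surface area a single self-hosted model gives you: one OpenAI-compatible chat endpoint, for example after running vLLM's `vllm serve` command. The model name below is just an example; swap in whatever was deployed.

```python
# Roughly what "Frank's model" gives you: one OpenAI-compatible chat endpoint,
# e.g. after `vllm serve meta-llama/Llama-3.1-8B-Instruct` on port 8000.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Summarize this contract clause..."}],
)
print(resp.choices[0].message.content)

# That's the whole surface area: chat in, text out. No document parsing,
# no embeddings or re-ranking, no safeguards; the rest of the platform
# still has to come from somewhere.
```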
Chris:Yeah. I know that in your own business you do help bridge some of that gap with some of the things that you guys do. But in general, in the broader market, people do have service providers out there that can give them some of those services. But as they are going and deploying, and in a sense, we've always encouraged open source and open weight models out there. We've talked about that a lot, and we like to see that. And yet there is this skill gap, or understanding gap, that you've just defined in the business community: yes, you have these capabilities, but you've got to connect all of your resources, and I'm using that term in a very generic way, together to give you the capabilities you need for your business to operate the way you envision.
Chris:And, you know, there's definitely a falling down in understanding within that gap space. What are some of the different options for how people can get through that gap?
Daniel:Well, I think one of the things that can be done is to approach this sort of problem from a software and infrastructure architecture standpoint, to what you were saying before. A lot of what we're talking about really falls on that architecting side of things. And so I think, from the beginning, you come to it with the question not being "what model are we gonna use, and who's gonna deploy it internally?" but from the standpoint of: we will be using models.
Daniel:There will be many models. They will be connecting to many different software applications. Okay, well, that changes the game a little bit in terms of managing that and making it robust over time. And there are very many capable engineering organizations that know how to scale bunches of services and keep them up, set up uptime monitoring around them, and alerting, and centralized logging, and all of those sorts of things. But you never get to have those conversations if you cut it out before you get there by just saying, we will have a model living somewhere.
Daniel:And so you really need to approach it from this distributed systems standpoint. And once you start doing that, you start talking to the experts that are on your team. And there are very many tools, depending on the standpoint that the company wants to approach this from, everything from the company still managing a lot of what they want to deploy and using orchestration tools, like a Rancher or something like that, that's generic and not AI related, right? But they're used to using it. Maybe they're already using it in their organization, and they can orchestrate deployments of various AI things.
Daniel:And then there's AI specific approaches to this as well. So I think it is really a perspective thing. And as soon as you get into that zone of, well, we do need this software architecture, we need this kind of SRE and DevOps approach to things, then you really have to ask some of the hard questions, like: can we vibe code our way through this? What kind of software knowledge do we need to actually support this at scale? And I think what people will find is that you do, at minimum, still need that software engineering and infrastructure expertise to do it at scale well, or at least to guide some of the vibe coding type of things that happen.
Daniel:Right? So there needs to be an informed pilot to help guide some of these things and make sure the ship is going in the right direction.
Chris:I think that's very, very well put. And kind of combining this with the hiring decisions that we're seeing out there in the job market, and the collapse of the bottom end of the software dev industry, there is a lot of developed expertise over the course of a career. And while all of the discrete points of knowledge may be captured in various models out there, there's still the necessity of extracting what you need from those models in the right context and the right order. And at least for the time being, in this GenAI-dominated world, you have to have somebody who can provide that kind of architectural view and know how to provide the context to get the things you need from your vibe coding. I think people are finding small successes there, where they say, I want an app that does this thing, and they describe the app in great detail, and the models that they're using will turn out kind of an app.
Chris:It may or may not be architected for sustainability, and there's a whole bunch of issues, but it might make a very good prototype. But if you're not going to bring in junior level coders, that in the future would become your senior level coders that have this knowledge, then you're betting on today's talent producing something, and you're hoping that your model gets the nuance of all of those components and is able to generate its own context without your expertise, which may happen. But it's a big gamble. If you're a company right now, it seems to me a lot less risky to go ahead and continue to bring in some junior level developers for the purpose of growing them over time and being able to have that. Maybe at some point that does change in the future. But I think the companies that are doing that today are taking some fairly significant business risks that are largely invisible to their executives.
Sponsors:Well, friends, you don't have to be an AI expert to build something great with it. The reality is AI is here. And for a lot of teams, that brings uncertainty. Our friends at Miro recently surveyed over 8,000 knowledge workers, and while 76% believe AI can improve their role, more than half still aren't sure when to use it. That is the exact gap that Miro is filling.
Sponsors:And I've been using Miro for everything from mapping out episode ideas to building out an entire new thesis. It's become one of the things I use to build out a creative engine. And now with AI built in, it's even faster. We've turned brainstorms into structured plans, screenshots into wireframes, and sticky-note chaos into clarity, all on the same canvas. Now, you don't have to master prompts or add one more AI tool to your stack.
Sponsors:The work you're already doing is the prompt. You can help your teams get great done with Miro. Check out miro.com and find out how. That is miro.com, m-i-r-o.com.
Daniel:Well, Chris, I do think that there are some consistent news stories from other sources outside of the MIT report that reinforce some of what we've been talking about, and that I think are also just generally interesting as individual data points. I've seen a number of those related to OpenAI specifically. If we just look at what has happened with OpenAI in the previous number of weeks, some interesting things have happened. And I think that they signal some things that are, like I say, very consistent with what we've been talking about as prompted by this MIT report. Just to highlight a few of those, and then we can dig into individual ones of them.
Daniel:But one of the things that happened was OpenAI released GPT-5, which we haven't talked about a ton on the show yet. Generally, the reception in the wider public has been that people don't like it. It's sort of fallen flat a bit, I guess, would be a way to put it. So that's kind of thing one. At the same time, OpenAI open sourced some models again for the first time in a very long time.
Daniel:They open sourced a couple of reasoning models, LLMs that do this type of reasoning. And also near the same time, I forget the exact dates, someone listening can maybe provide those in a comment or something, the other thing that happened was that they opened kind of a consulting arm of their business and are entering into these services consulting type of engagements, which are not cheap. I think the minimum price for a consulting services arrangement with OpenAI was like $10,000,000 or something like that. So you've got this thing that's happening: a model in the area that has been their moat, these closed models, falling flat; them giving out some of the model side publicly, openly; and them opening the services business side of things.
Daniel:Now, I've drawn my own conclusions in terms of what some of those things signal and mean, but any initial reactions or thoughts that you've had as things have come out like that, Chris?
Chris:Sam Altman is the CEO of OpenAI, and he's a curious individual. He noted in January that maybe OpenAI had been, and I quote, on the wrong side of history, close quote, when it comes to open sourcing technologies. But it was Sam who made those decisions. And I think what he's seeing now is the market is evolving, it's maturing, as you would expect, and the early phase of focusing on these kind of frontier foundation models that were driving the hype for the last few years might be producing diminishing returns. Even though the GPT-5 model is more capable, and that's what I use for most of my stuff, maybe some of the nuances, for instance the interface itself, the way it works, the way the models work, meant people were preferring the 4o model. There was quite a bit of personification of that model, I think, going on with the public.
Chris:And I think OpenAI realized that there are these concerns, in addition to the fact that they had kind of left the services market to others. And so my belief is that they are starting to open source some of these models, with their open weights, for the purpose of supporting a solid foothold at the premier end, the expensive end, of the services market. I think that's the motivator right there. It's making sure that, with their competitors all having open source models, they can play in the space as well.
Chris:And they can go in with their services organization and make money on services, and point to their own open source models to be able to support that services business model. So maybe a little bit of a jaded answer from me, potentially, having, like you, watched them over a number of years, month by month. But yeah, I would definitely say they're trying to lean in, recognizing that the business of AI is both expanding and maturing into that area.
Daniel:Yeah. I think if we combine this with the knowledge from the MIT report around these use cases in enterprises failing, what we know at this point, and what I would distill down as trends and insights that are backed now by various data points, is that generic AI, so the generic AI models, just getting access to a model, does not solve actual business use cases and problems, or the verticalized solutions that are needed. That's what we learned from the MIT report. And this would be true whether your company has access to ChatGPT or Copilot or whatever models you have access to. These are generic tools.
Daniel:These are generic models. It's very hard to translate that into customized business solutions. Right? That is why insight one, from my perspective, is that just having access to a generic model or generic tools is not gonna solve your business problem. And that's partially why a lot of these POCs are failing.
Daniel:Now, OpenAI offers those generic models and tools, right? Which is really great on the consumer side, but enterprise-wise, the ones making the money, at least so far, it hasn't been OpenAI. They've been losing a ton of money. The ones making the money are Accenture, Deloitte, McKinsey, etcetera, the services organizations. Because really, how you transform a company with AI and bring these models in and do something is by creating custom data integrations, creating these custom business solutions.
Daniel:And that is still really a services related thing, or at least a customization related thing. There's data integrations there. So this is totally consistent, for me, with open sourcing models at the same time that they're creating the services side of the business, because essentially, from the business or enterprise side, there really is not a moat on the model builders front. It doesn't matter, from my perspective at least, and of course I'm biased, it doesn't matter if you're using GPT, it doesn't matter if you're using Claude, it doesn't matter if you're using Llama or DeepSeek or Qwen. It really doesn't matter. Any of those models can do perfectly great for your business use case solution.
Daniel:I think that's true. I've seen it time and time again. What makes the difference is your combination of data, your domain knowledge, integration with those models, and creating that customized solution. And either you're gonna do that internally or you're gonna hire a services organization to do that. On the one front, you need software architects and developers.
Daniel:And even if they are using vibe coding tools, they will need that expertise. On the other side, you can pay millions of dollars to one of these consultants, or to OpenAI and their services business, etcetera. And again, it's a hard thing, because those resources are scarce, right? Which I think is why it is a good time if you're providing that level of services around the AI engineering stuff.
Chris:Yeah. I think you've hit the nail on the head. And I'll offer a way of restating it with an analogy. You're going to have friends over, and you want to have a magnificent dinner at your dinner party. And so you walk into the kitchen, and you may have a lot of great things to make stuff with, and some of those might be big expensive things that are raw materials.
Chris:In our analogy, those things represent models and other software components. But there's some skill in putting that meal together: going into the refrigerator and picking the right things out, going into the pantry and picking the right things out, and putting them together according to a recipe, which is your business objective, and understanding how to produce that final dinner, which is maybe a little bit different from the way your neighbor would do it, and maybe a little bit different from the way another friend would do it, to produce that fine meal that you are able to enjoy at the end of the day. That meal is a bit unique, because in our analogy, your business is a bit unique. But it takes the skill. And we do expect technology to develop, those refrigerators to become smart refrigerators, and other things to help in the kitchen.
Chris:And that might be represented in our vibe coding thought. But we might not be all the way there yet. So if you're buying your ingredients and thinking, well, I don't really need to have great skill in the kitchen, because I'm sure that some of this technology that's coming into play will take care of that for me: maybe eventually, but I don't think we're quite there yet, is what we're seeing. And I think that report that we've been talking about has provided some evidence of that fact.
Chris:And so, yeah, there's still the need for nuance and complexity to be addressed, and the recognition that with these commoditized models, whether they be closed source or open source, either way, it's going to take more than one, and you're going to need to have the recipe to make it all come together the way you're envisioning. So a lot of good lessons that hopefully might help out some of the managers and executives in these companies making some of these decisions going forward.
Daniel:Yeah. Yeah. I love that analogy. And it fits so well, because you can develop that cooking expertise internally. Or you can hire a professional chef into your house.
Daniel:It's gonna be expensive, right? But you can do that, and it is a necessary component. So I love that analogy. I do wanna highlight, we always try to highlight some learning opportunities for folks as they're coming out of this. Maybe you're motivated to not let your AI POC fail, and you want to understand what it takes to build these solutions.
Daniel:There's a couple of things I wanna highlight. One is, I'm really excited about this: we're having the Midwest AI Summit on November 13 in Indianapolis, and I'm helping organize it. It's gonna be a really great time. One of the unique features of this event, different from other conferences, is we're gonna have an AI engineering lounge where you can actually sit down at a table with an AI engineer. Maybe you don't have that expertise in house, but you don't want your POC to fail; you can actually sit down with an AI engineer, talk through that, and maybe get some guidance.
Daniel:So I haven't seen that at another event. I'm pretty excited that we're doing that. And you can always, as I mentioned in previous episodes, go to practicalai.fm/webinars. There are some webinars there as well that might be good learning opportunities for you.
Chris:That's awesome. And on the tail end, as we close out with learning opportunities, I just wanted to share one two-second thing here. My mother is in her mid-80s, and once upon a time was a computer science professor at Georgia Tech. She also happened to work for the same company I work for, Lockheed Martin, years ago.
Chris:But she had retired and kind of moved out of the technology space, though she is very aware of what I do in the space, and our podcast and stuff. But she, in her mid-80s, reached out to me this weekend and said, I'm thinking about going back to school for AI, and maybe even into a PhD program or something like that, I don't know. And we talked about it for a while, and about starting small.
Chris:She's into some Coursera courses now. And as we're thinking about learning and ramping up, we've talked about learning recently on the show. We had a couple of episodes where we talked about how it's never too late. We had a congressman who was not a spring chicken not too long back diving in, which was incredibly inspirational. And I want to say, if my mom, in her mid-eighties and decades out of the computer science space, is willing to dive in and do technical work on Coursera courses, I would encourage all of you to reconsider: you are never too old. And I just wanted to leave that, as we talked about learning items, to say: go get it. The world is changing fast, and my mom in her mid-eighties doesn't want to get left behind and wants to be on top of it.
Chris:And I think it's a good thing for all of us to take some inspiration from and go do.
Daniel:That's awesome. Appreciate that perspective, Chris. It's been a fun conversation. Thanks for hopping on. Absolutely.
Jerod:Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Bluesky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.
Jerod:Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.