Workforce dynamics in an AI-assisted world

Jerod:

Welcome to the Practical AI podcast, where we break down the real world applications of artificial intelligence and how it's shaping the way we live, work, and create. Our goal is to help make AI technology practical, productive, and accessible to everyone. Whether you're a developer, business leader, or just curious about the tech behind the buzz, you're in the right place. Be sure to connect with us on LinkedIn, X, or Blue Sky to stay up to date with episode drops, behind the scenes content, and AI insights. You can learn more at practicalai.fm.

Jerod:

Now, onto the show.

Daniel:

Welcome to another episode of the Practical AI podcast. In this fully connected episode, where it's just Chris and I, we'll dig a little bit into the latest news related to AI, hopefully share a few of our opinions, and maybe some little nuggets that will help you level up your machine learning or AI game. I'm Daniel Whitenack. I'm CEO at Prediction Guard, and I'm joined as always by my cohost, Chris Benson, who is a principal AI research engineer at Lockheed Martin. How are you doing, Chris?

Chris:

I am doing splendid today, Daniel. How's it going with you?

Daniel:

It's going great. It's been a productive week. We've had a customer visit and lots of interesting development stuff. And last week I did a workshop, so I've been hearing about all the different kind of archetypal use cases that people are working on.

Daniel:

Lots of interesting questions. I think we're also in a hiring phase right now, which means I am thinking a lot about what positions to hire in, especially in light of all the AI stuff that is happening, and everyone saying, oh, maybe you should consider whether AI could do this task before you hire someone in to do it. Are those discussions happening in your circles as well?

Chris:

Oh, they are. A lot of people are asking me questions like this, increasingly. And it's funny that you bring that up. I was recently a guest on another podcast, and some of those questions came up there, asking about how we saw all of this.

Chris:

I think things are moving so fast. You know, we've gone through several years of the big LLM experience, and now the agentic world, followed rapidly by the physical AI world. And people are really paying attention to how their lives are starting to be impacted by this, with jobs being right at the top of that list: concerns about what it is, what should I do, and will I be replaced? It's a mixture of excitement and fear out there.

Daniel:

I know that you can't always share things from your immediate, actual day to day job. But let's say, Chris Benson in a fictitious world (well, I guess you also have the nonprofit side of what you're doing), thinking in this context, about the world we live in now: knowing not the promise of what AI might do next year, but just based on capabilities now and the tools that you're seeing out there, if you were building maybe a small team for a new company, what positions or things do you think might be handed to AI, which may have traditionally been a full time or a part time hire? What is your thought?

Chris:

Yeah, the sense I'm getting from my interactions with people, and from what I'm observing, is that companies are still using AI mostly as a tool to empower people, though that might mean there are fewer people in a given function than there might have been before. There are some really obvious examples that we've all seen, like marketing material and using AI to generate that. So I'm not seeing AI completely replace lots of positions right now. There's probably some here and there.

Chris:

But I think that companies are looking for efficiencies. And if a marketing department could be two people with AI tools instead of five people, I think that's a really common approach that you're seeing right now, today and over the last couple of years. I think there are many, many instances of that: small efficiencies, small functions, task level things that are being offloaded, which in turn reduces total headcount to some degree.

Daniel:

And I guess there are things that maybe people can do faster. So to your point, fewer of a particular type of position is needed, maybe in marketing or even development or other things. I also find it interesting that there may be things that people in a certain position can do now that they couldn't do before, things they might have had to hire an outside agency or services organization to do, which is an intriguing part of this. So just thinking of advertising, PR, market research, those of course are things that you mentioned, but there's also prototyping new software things, you know, proofs of concept that maybe you would have had to engage an external consulting firm or software dev shop to do before, that maybe you, even with fewer years of development experience or less exposure, could knock out internally.

Daniel:

What's your thought on that interplay with service providers, and how this world may impact the service provider side of things?

Chris:

So yeah, with service providers, I think it's an interesting story. I don't bring it up very often in our episodes, but before I was in the defense world, I spent a dozen years in the digital marketing world, as a CTO and in other technical positions. And I think it's really tough with these new AI tools to be in that industry at this point, because there's so much capability. We've commented many times that once upon a time we would say AI would take care of all the grunge work, leaving humans for the creative work, but we've actually seen the opposite in some cases. Think about the creative aspect of some service industries like that, compared to AI tools, which may or may not be able to do it at the same level, but might be able to do it good enough. In software development, we often talk about good enough.

Chris:

And you mentioned prototyping a moment ago. With the notion of vibe coding, making at least simple coding efforts, for things like prototyping, accessible to people who historically would not have been able to do that is also a bit of a game changer. That's another area where hiring externally, or hiring that function into your organization, is reducing, because you don't necessarily need that just to get a prototype. It may be different for writing production systems, and even there AI tools, as we know, have had an impact.

Chris:

So the impact on that can be debated, whether it's positive or negative, depending on who you talk to. But these tools are definitely changing many industries right now. They may not be completely knocking out an industry, but they are impacting it in terms of its viability from a revenue standpoint. So yeah, it's a concern. And I think most people who are AI cognizant on a day to day basis start off thinking: with these new AI services, and the models that we might host ourselves at our company, what can I do now that maybe I couldn't have done before, and how can that change my cost structure?

Daniel:

Yeah. I'm really intrigued by this service provider side of things, actually, because on the one side there are traditional things that people would have gone out and hired an agency to do. Maybe that's a new website for your business, or a simple website for your business, or some type of marketing assets, or maybe a prototype of a project, like a SaaS product, a bunch of different things. Those things seem to be good targets for either vibe coding or the wrapper type platforms around certain AI systems.

Daniel:

So, you know, for the very first of our websites that we made for Prediction Guard, I just used one of those sites where it's like, create your website with a prompt. You just go in and prompt the thing, like, I want a website for this, and it generates it, you move around a couple things, and there you go. So for those types of things, the non AI things that are generated by AI, there's maybe less of a need to go out and hire an agency or a services company. At the same time, I do think it's interesting if you look at building AI things for your company. So let's say you are a mid or large size company and you want to create this AI tool for your company, right?

Daniel:

The talent is so scarce for that sort of thing. I mean, if you look at things as a whole, OpenAI or companies like that are definitely not the companies making the most money off of AI. OpenAI is losing a lot of money. The companies that are making a lot of money off of AI are Deloitte, Accenture, McKinsey. Right?

Daniel:

Those are the companies absolutely raking it in off of AI, right? Because there are all of these companies that say, well, we want to transform our company with AI. So you need change management, you need to create these prototypes and maybe these AI tools that you're going to deploy across your company, you need to take care of the security and privacy concerns, you need to have a strategy around it, a roadmap, all of this stuff. And that is all very much tailored to what these services organizations do really well, right?

Daniel:

And I think even smaller MSP type companies that have a book of business in healthcare or other industries where the impact of AI is clear, they have a big win to be had in this area too. So I find it really interesting on the services side, where some things maybe are getting a little bit eaten up by AI, but the actual AI stuff, the AI transformation, is actually very ripe for services organizations. It's interesting.

Chris:

It is. I mean, I guess people are looking for guidance. They don't understand how, and there is of course this whole industry of consultants who are happy to tell you they have it all figured out.

Daniel:

And I'm sure that in much of that they are very much helping people, but they are well positioned to do that. And that is kind of their business model, right?

Chris:

Indeed it is. I did a brief stint at Accenture as well. I can attest to that.

Daniel:

Yeah, yeah. So maybe it is a bit of a mixed bag, where if you're an MSP out there, maybe you're thinking that some things are going away, but there is maybe a big opportunity for you to go into this set of offerings around AI services.

Sponsor:

Well, friends, if you want to build the future of multi agent software, check out Agency. That's AGNTCY. It's an open source collective building the Internet of agents. It is a global collaboration layer where AI agents can discover, connect, and work across frameworks. That means better tools for you, standardized agent discovery, seamless inter agent communication and modular components to scale your multi agent workflows.

Sponsor:

And they're teaming up with CrewAI, LangChain, Cisco, and many more, dropping real code, specs, and services with no strings attached. Start building alongside engineers who care about high quality multi agent software. Learn more at agntcy.org. Again, that's agency, A-G-N-T-C-Y, dot org.

Daniel:

Well, Chris, we started talking a little bit about the influence of agents and that sort of thing on the workforce. There was a very interesting story that came out that intrigued me related to this, because basically everyone's saying, you know, 60% or 80%, or whatever percentage, the majority of development work is now gonna go to AI systems. And it was very interesting because this story came out. I mean, it was reported by multiple folks, I think, but the one I'm looking at is from Fortune, and the title is, An AI powered coding tool wiped out a software company's database, then apologized for a "catastrophic failure on my part."

Daniel:

And I believe it was Replit that is the culprit here, which is a really amazing tool. I think there have been apologies, and of course they're handling this in one way or another. I'm sure this is not the only instance of this across these kinds of tools, but it's maybe a bit of a wake up call for some people to understand that these tools may be moving at a faster pace than what is reasonable for companies to support, just operationally and permissions wise and at scale, etcetera. Any thoughts?

Chris:

My first thought was, you know, lack of guardrails, lack of consideration on that. And I don't know the details any more than you do, other than what was in the article. Maybe part of my perspective is biased because the industry I'm in is defense. We're really, really cognizant of safety measures and guardrails and such in our industry, because obviously it can turn into a problem if you don't have that. But I think other industries sometimes may need to stop and think, and I get that the marketplace is moving really fast, there are competitors that will eat your lunch quickly, and you wanna get to market with the best thing.

Chris:

But at this point we're still in fairly early stages of agentic AI, and a lot of frameworks are still under development. So it's a good thing to ask yourself what happens when things don't work out per your expectations. What are the guardrails for when things really go bad? Maybe think a little bit old fashioned in terms of what you can do for data redundancy and backup, so that if your AI experiment or effort does go off the guardrails, you have a guarantee that you don't have real damage done in a large sense. That said, an agent shouldn't be able to drop something it literally has no permissions to touch, so there are probably some mitigating factors there that I'm not aware of.

Chris:

So I'm not trying to pile onto the company, but I would just urge people to think about worst case scenarios, and maybe spend a little time architecting for that. And then go do some cool stuff, but make sure you can fall back without blowing your company up.
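Chris's least-privilege point can be made concrete with a small sketch. The snippet below is a hypothetical illustration, not any vendor's actual API: an AI agent's database tool is wrapped so that only read-only statements ever reach the database, so even a misbehaving agent can't drop a table. The function name and allowlist are invented for illustration; a robust production setup would enforce this at the database level with a read-only role rather than with string checks like this.

```python
import sqlite3

# Naive read-only allowlist: only statements starting with these keywords
# are forwarded to the database. (A real deployment should instead connect
# with read-only database credentials; this prefix check is illustrative.)
READ_ONLY_PREFIXES = ("SELECT", "EXPLAIN", "PRAGMA")

def run_agent_query(conn: sqlite3.Connection, sql: str):
    """Execute an agent-proposed query only if it looks read-only."""
    statement = sql.strip().rstrip(";")
    if not statement.upper().startswith(READ_ONLY_PREFIXES):
        # Destructive statements (DROP, DELETE, UPDATE, ...) never run.
        raise PermissionError(f"agent is not allowed to run: {statement[:40]}")
    return conn.execute(statement).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('ada')")
    conn.commit()

    print(run_agent_query(conn, "SELECT name FROM users"))  # [('ada',)]
    try:
        run_agent_query(conn, "DROP TABLE users")  # blocked by the wrapper
    except PermissionError as e:
        print("blocked:", e)
```

Pair a gate like this with the backups Chris mentions: even if the gate fails, a recent snapshot bounds the damage.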

Daniel:

Good point. One of the things I've been wondering around this: there are all of these tools coming out, vibe coding tools, vibe marketing tools, vibe design tools, or even HR, hiring, recruiting, etcetera, name any function. There's a way to do it with prompts or these sorts of tools now. How do you think this influences the type of both education and professional development that people need coming into a job? And again, we only know what we know about this one instance of the database being dropped.

Daniel:

But I'm guessing, and people can correct us on social media if I'm saying something wrong here, that in a scenario like this, some of it is probably on the company using the tool, and, like you say, the trust and the guardrails that are put in place around how things are pushed to production and how they're iterating on their platform or code. And part of it is probably on the platform itself, how it's architected, its failure modes and safeguards. But part of it, I think, is: should we train our new workforce such that they're ready to team with AI systems in any role, from technical to non technical? And how should people in existing roles, who maybe haven't spent their career using these tools, professionally develop themselves to be proficient in this new space?

Daniel:

Any thoughts?

Chris:

Yeah, it's something I think about quite a lot, on several fronts. There's my own career, where we're working through things and things are changing constantly. And I also have a 13 year old daughter who's going into eighth grade; she'll be in high school soon and then into college. We are living in an age of experimentation right now, because these technologies are coming so fast that the nature of what it means to use them from one year to the next is very different. You know?

Chris:

2023 to 2024 to this year have all been distinctive in terms of how these tools are able to affect our jobs. So the first thing I would say is accept it, accept the change. I see a lot of people who are highly technical, developers and things like that, who probably should know better, kind of resisting it. So the first thing is to say, this is happening, and it's not going away. And the second thing, for individual roles, individual jobs, is to try to imagine where things might go, given the tools that you're seeing today.

Chris:

What do you think might change incrementally tomorrow? What kind of trends are you seeing in your particular industry? Go ahead and get ahead of that curve. I think that's really valuable. I also think it's hard to do.

Chris:

So it's very easy for me to make a recommendation like that, but seeing the future is not easy. My recommendation, given that this technology is only going to continue to grow and change things, is: accept that your job is changing, accept that the vision you had yesterday is not gonna be what happens tomorrow. Try to track what you're doing, embrace the tools, and figure out what's changing in your job. How can you make it a net positive instead of a net negative?

Chris:

Those are very generic, but I think it's a very job specific set of tasks that you have to do. And I think that starts with education. It's one thing to be doing it on the job, like we are, in the middle of a career. But how do you train today's youth to do that? They're really good with these technologies.

Chris:

But they're not the ones setting the curriculum. It's us older people setting the curriculum, and I don't think we're doing them any favors right now. School curriculums are definitely not set up for success yet.

Daniel:

How far behind in general are we, do you think, on that front? Let's just say, for college grads coming out into the workforce, take any field, accounting, marketing, communications, business, whatever it is, how far off do you think we are in terms of the realities of what day to day work might look like with these tools versus how that's being represented in the classroom? Any insights?

Chris:

Yeah, I think look to how people are doing it in a hobby sense, because people who are interested in adopting these technologies try them out there first. We've seen this for many, many decades; this is not new. Companies tend to adopt what their developers go home at night and play with, because it's the cool thing to do. And you could expand that across a lot of different fields and industries. What do people do on their own time because there's this new thing out there?

Chris:

If companies were smart, they'd pay attention to that. If universities were smart, they'd pay attention to that too, tracking fairly early what the maker space or the hobby space is doing, and building curriculum around that. Because right now, as we sit here today, at most schools the professors are trying to find ways to keep students from using models that are publicly available to do their homework. And every time I hear an academic say something like, how do I keep my students from using ChatGPT to cheat on their homework, I'm like, well, you're teaching the wrong thing then.

Chris:

Because that model is never going away. So if you're teaching something in a way that's causing you that problem, then your whole curriculum is off track. And I realize that changing the overall curriculum may not be within a specific teacher's ability, but it's certainly something university and school leadership, school system leadership, should be doing. So there's a big correction that needs to happen in education, and companies would be very smart if they could also track that very early and make adjustments sooner rather than later.

Daniel:

Yeah. And maybe part of the struggle here, and I may just be playing devil's advocate a little bit: as everyone would recognize, there's only so fast that education can move, just because of bureaucracy and other things. But putting those things aside, this is also true for companies. You look out at the landscape of AI tools that can affect your job, and you look at the snapshot today and then the snapshot next week, and it almost seems totally different. There are so many things coming out, and I can put in the investment to learn this thing, and then next week it seems totally useless, or could seem totally useless.

Daniel:

How does someone out there listening to this navigate that kind of whirlwind of releases?

Chris:

So I don't plant an anchor in any one thing as a long term bet. I think to win these days, you have to embrace that learning mindset. It's very cliche to say that, but I think you have to do it in your day to day actions. You know, I'm at an age, I'm 54 right now, closing in on 55.

Chris:

And a lot of my peers in terms of age have already just kind of given up; they've already decided what they think their worldview is, and they've stopped and planted right there. I think that's exactly the wrong thing to do. The world is changing fast, and to continue to be successful and happy in such a world, you have to roll with it on a day to day basis. You may invest in something today based on what you think tomorrow looks like, then realize in a few months that that's not the way things are going, and you should be ready to pivot again. So there is a never ending series of pivots or adjustments that you need to be making on an ongoing basis, for the rest of your life, all the way to your dying day.

Chris:

And if you ever quit doing that, you're putting yourself in a worse position. So that learning mindset is an action based activity. It's something that you're literally doing all the time. And I think people who can do that will tend to be more successful and happier in what they do than people who just resist and resist and plant flags. At 54, that's what I think I've learned.

Daniel:

Well, there's a whole element of this that really has nothing to do with AI, which is also kind of based in neuroscience. I've read a lot about dementia and these sorts of things, as that's occurred in my immediate family. One of the things that a lot of neuroscientists talk about is that the act of always trying to learn a new thing is one of the best things you can do. Not that it's a guarantee, but it's definitely positively correlated with good long term outcomes in terms of preventing dementia or other brain conditions. So maybe that's a good sign, regardless of whether it's AI or not.

Chris:

That's right. There's a mental health benefit to doing it, and there's a career benefit. So just love learning and pick lots of different stuff to try out. Some of it's little things like puzzles, and some of it is learning a whole new thing that you've never touched before.

Daniel:

Well, Chris, we've talked a little bit, I guess, more generally about the impact of AI on the workforce and maybe professional development and education. There's an interesting side of this too, which is the actual workforce that is developing the AI things. I think we've kind of started to adopt this term AI engineer, thanks in a lot of ways to Latent Space and our friends over there, swyx and others, from the AI Engineer Summit and those types of things that have put this term out there. But whether it's AI engineer or data scientist or machine learning ops or whatever, this workforce that's actually supporting the build out of AI, both the infrastructure and the application layer that is driving a lot of these transformative tools, is very difficult to hire right now. I know this directly because I talk to our customers, many of whom are trying to hire this sort of talent into their companies. There are a lot of positions available.

Daniel:

It almost reminds me a lot of back in the day when people were trying to find the unicorn data scientists, right, that sort of didn't exist, maybe back in the 2011 to 2015 time period. Now it's expanded, because the impact of this technology is just so much greater, but everybody's looking for AI engineers and talent related to this wave of generative AI technologies, and having trouble finding them. I don't know, is this also a trend that you're observing?

Chris:

Yeah, I think so. And not only is it a trend, but it kinda points to the inequality in the larger space, in terms of what compensation looks like and the value in different industries. AI engineers, and specifically research scientists who are developing new model architectures and things like that, can get some awfully big compensation. And yet there are many fields out there, if you're not able to do those kinds of skills, which, as we talked about at the beginning of the conversation, are decreasing.

Chris:

And I think that's another thing people really need to consider: do you expect the track you're on to be in a good position two, five years out from now, based on how you see things today? And that's another reason, even in a late career stage, to maybe consider making changes, making adjustments, because AI engineers are able to basically write their own paychecks in a lot of cases, while a lot of other industries are seeing decreasing compensation for those particular jobs.

Daniel:

Yeah. One data point on this, which is kind of interesting, is news that came out a little earlier in July that one of Apple's top AI folks, Ruoming Pang, was poached by Meta. And I don't know how accurate this is, I haven't verified it, but the articles are saying a total compensation package over X number of years of like 200,000,000 or something. I guess how you could think about this is that these top AI people are being swapped around almost like top athletes in a professional sport with big contracts, or the best F1 drivers going from this team to that team and signing a big contract.

Daniel:

It almost seems like we're living in that world.

Chris:

And we are.

Daniel:

Yeah. Yeah. I'm very happy to be building what we're building with my team, but it also makes me think, in the back of my mind, woah, we've got some great talent here; I hope no one offers them 200,000,000, because I can't match it. It's quite a market. I also wonder about retention of some of these folks, where you finally find an AI engineer that's working well for you, but they could go out and demand twice as much somewhere else.

Chris:

They could. So there are two points there. One is this story doesn't really stand alone. There have been a number of very similar stories with Meta. Meta has been hiring

Daniel:

I think Netflix too, and some other companies. Yeah.

Chris:

Yeah. But Meta in particular has been known the last couple of months for pulling people from OpenAI, Anthropic, Google, and now Apple. The sports analogy you made was right on. They're weaponizing that hiring process to try to get the right talent, and they're gambling a lot of money that they're going to be able to put together a superstar team that will outpace the market. But that's at that extreme high end of the market.

Chris:

In the short term, I think we're seeing compensation go up for these skills, because we're still in very early stages of people setting up truly mature AI infrastructure and such. But over time, just like we've seen in software development over the decades, and in other industries with technology, because that's where the money is, more people will gravitate to those skills. You'll have a bell curve of capability in terms of the candidates out there, and generally compensation will fall over time, because those skills will become much more commonly available.

Daniel:

And then people will get a lot of money for being AGI engineers?

Chris:

Yeah, you know, that's coming. We're gonna see it by 2026, 2027. We're gonna be talking about AGI research engineers and stuff like that. That'll be the new title. Folks, you heard it here first.

Chris:

Right out of Daniel's mouth.

Daniel:

I'm gonna start an AGI Engineer Summit and just start the trend.

Chris:

I mean, you'll notice that's all Sam Altman and OpenAI talk about all the time, and so does Mark Zuckerberg with Meta. They're just talking AGI, AGI, AGI, but no one's doing it. They may be doing research, but they haven't gotten there yet.

Chris:

But it's great marketing, and it's driving the compensation spectrum. That will mature, though, and the AI tools themselves will become better suited for creating infrastructure and setting things up, just as we were talking about earlier with other areas. So we're in a bit of a bubble in that area. It will eventually pop.

Daniel:

One interesting thing: we had a previous guest, one of my friends from the Intel Ignite program, Ramin. He has a job as an AI engineer, but also teaches at Northeastern. I was just talking to him, and he shared an interesting data point. We need to have him back on the show to talk more about this, but he mentioned that he teaches a couple different courses. One of them is more the theory behind generative AI, I guess the transformer architecture, the bones and underpinnings of all of those things.

Daniel:

And then a different one that's like a machine learning ops or AI ops type of course. And he's clearly seen a shift over the last few times that he's taught these, where in the beginning, everyone wanted to be in the generative AI theory course. That's basically flipped to everyone wanting to be in the ops course. His quick reflection on that was that people now assume you don't really need to know the theoretical underpinnings of any of these things; it's all about how you operationalize the technology, put together all the plumbing, integrate the data, and scale it up. I'm not saying that's the correct perception necessarily, but I think it represents an interesting shift in how people are thinking about what an AI engineer or an AI position is, in maybe an AI or non-AI company.

Daniel:

There may be a trend, and that's only one data point, so I can't speak more generally. When we say AI engineer, kind of like when we said data scientist, there's sort of an assumption that you know statistics at some level, that you know some of the underpinnings of these models, maybe how to do evaluations, that sort of thing. The more technical underpinnings of that, even though maybe some data scientists were just really good at using gradient boosting machines and using all the tools. So there may be this interesting dynamic now where there is such good tooling around AI that it's almost like it's engineering of the tools, not engineering of the AI, that people are interested in. Not the actual model or the underpinnings or the theoretical understanding, but in a sense, how to engineer at this more abstract layer and connect up all the plumbing and ops around it. It's very interesting.

Chris:

Yeah. I can understand that. And there is a bell curve of skill and capability that people will bring to such positions. I would encourage people to do both. Don't do one or the other.

Chris:

Do both.

Daniel:

Take both of Ramin's courses, please.

Chris:

There you go. It'll make you better. Never stop learning. So yeah, when I hear people skipping over all the theoretical stuff, I get that you might have a practical intent in mind, but there's a point where a certain amount of theoretical knowledge helps you do a practical job better than you could otherwise do it.

Chris:

So yeah, yeah.

Daniel:

Yeah, I always like this idea of having a mental model, or at least an intuition, around how these things work under the hood. We've talked about it multiple times on this show. I think there are a lot of people who use these tools who don't understand that when text is streamed onto a screen, when you're getting a streamed response back from an LLM, every single token is an operation of the model: the model runs, produces that output token, and then it cycles around and you run the model again. And that actually has a number of implications that help you understand a lot of things. For one, it makes sense why closed model providers charge by the output token, because that's connected to compute, right?

Daniel:

Or if you host your own model, it means that throughput, or streaming, or the time that you need for a response is tied to how much text you're outputting. And that has implications for how you, quote, "AI engineer" your solution and put it together, especially if you're concerned about latency or user experience or other things. So there are so many trickle-down things, even just from that one example of how mechanically a model works.
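The loop Daniel describes can be sketched in a few lines. This is a toy stand-in, not a real LLM: the `toy_model` function is an assumption for illustration, standing in for one full forward pass. The point is the structure: one model run per output token, with each new token fed back in as input, so compute and response time scale with output length.

```python
def toy_model(tokens):
    """Stand-in for one forward pass of an LLM: returns the next token.

    A real model would run billions of operations here; we just return a
    placeholder so the shape of the loop is visible.
    """
    return f"tok{len(tokens)}"

def generate(prompt_tokens, max_new_tokens, stop_token=None):
    """Autoregressive generation: one full model run per output token."""
    tokens = list(prompt_tokens)
    forward_passes = 0
    for _ in range(max_new_tokens):
        next_tok = toy_model(tokens)  # one complete forward pass
        forward_passes += 1
        tokens.append(next_tok)       # output cycles back in as input
        if next_tok == stop_token:
            break
    return tokens, forward_passes

out, passes = generate(["hello"], max_new_tokens=5)
# 5 new tokens required 5 forward passes: cost and latency track output length.
```

This is why per-output-token pricing maps to compute, and why, if you host your own model, a rough latency estimate is time-to-first-token plus output tokens divided by your tokens-per-second throughput.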

Chris:

I think you're spot on right there. I think that was well said.

Daniel:

Yeah, well, I guess we've talked about workforce, we've talked about education, we've talked about hiring, how jobs are shifting. As we close out here, how do you feel about your own position and what are you thinking about, Chris, as you go to the next phase of of what you're involved with?

Chris:

So I am trying very hard to follow my own advice from a few minutes ago. I try not to let a single day go by where I don't do a little bit of self-analysis, a little bit of introspection, and look at what I'm being asked to do and how I might change that. Almost every month, I gain a little insight into what I need to adjust. Agentic AI here in 2025 is changing how I'm putting solutions together compared to how I did it in 2024. And it will be different again in 2026; I already know that. So I think you can't ever stop evaluating what your present and future look like, and you should never put a stake in the ground on how you're doing things that might hold you up tomorrow.

Chris:

And I think that's general enough to be applied across almost any industry. So don't find yourself stuck in the mud. Don't be the old codger that I'm always trying to keep myself from becoming. Stay young in your mind and stay agile in your thinking. That's what I would say.

Daniel:

That's awesome. I think that's a great way to end this discussion. Thanks for digging in with me, Chris. Enjoy the evening and whatever you're gonna learn tonight.

Chris:

Sounds good. You too, Daniel. Take care.

Jerod:

Alright. That's our show for this week. If you haven't checked out our website, head to practicalai.fm, and be sure to connect with us on LinkedIn, X, or Blue Sky. You'll see us posting insights related to the latest AI developments, and we would love for you to join the conversation. Thanks to our partner Prediction Guard for providing operational support for the show.

Jerod:

Check them out at predictionguard.com. Also, thanks to Breakmaster Cylinder for the beats and to you for listening. That's all for now, but you'll hear from us again next week.
