Delta CX Hive

Ep 295: The First Wave of The AI Bubble Bursting / The AI Squeeze / The Levitt Loop

Debbie Levitt of Delta CX Season 1 Episode 295



Lots to cover today in the worlds of #AI #layoffs #startups and more! We're examining how the first wave of the AI bubble bursting (according to me) will affect companies trying to build AI into their products. Learn about what I'm calling the AI Squeeze. 

Then we'll look at my new economic theory that I'm calling The Levitt Loop. It's actually two loops. One starts with layoffs and causes more layoffs. The other forces consumers to shop more by quality, which should force companies to improve quality and compete based on it... but will it?

"The First Wave of the AI Bubble Bursting: My Predictions" https://medium.com/r-before-d/the-first-wave-of-the-ai-bubble-bursting-my-predictions-a414259ed13d

"The Levitt Loop: AI, Consumerism, and Quality" https://medium.com/r-before-d/the-levitt-loop-ai-consumerism-and-quality-814e8f374570

You can now buy my new book in most formats. https://AtomicPMF.com  That page also links to my course on product-market fit and using AI thoughtfully in research! 

Support the show

Join our free community. All links at https://dcx.to

All episodes are marked "explicit" since sometimes we use swear words. 

SPEAKER_00

Welcome, low ego action heroes and phoenixes. I'm Debbie Levitt from Delta CX, and welcome to episode 295. We're going to be talking about some related combination topics today. We're going to be talking about the first wave of the AI bubble bursting, which I think is coming soon, if not already happening. Also, we're going to talk about what I'm calling the AI squeeze and what I'm calling the Levitt Loop. I actually named something after myself this time instead of Delta CX. But hopefully these are concepts that you will find interesting, curious, you know, uh make them your own, add to them, take away from them, whatever. I've got two Medium articles we're going to read through, and I would love to take your questions, agreements, disagreements, different perspectives, uh discussion topics, and uh we'll go until we're done with the articles and the questions. Um, just a reminder that uh everything you could possibly want to join is at dcx.to. That's just our page of links. So it's got uh Twitch, free Patreon, events and courses. I've got lots of courses coming up. They are various prices, and I'm happy to give you a special coupon if you need an extra discount. Uh, all of the coaching work that I do, both helping people in their jobs and the life coaching side, and our free Discord, which has over 700 members as of uh here we are in April 2026. So, with that plug out of the way, let me share my screen and let's start with the uh first article that I wanted to read, and it's called The First Wave of the AI Bubble Bursting: My Predictions. And uh here we go. It's uh from March 2026, and it says, We hear a lot about the AI bubble and how it's going to burst. I think there will be two waves of bursting. The second will be companies who realize that the investments aren't sustainable compared to the outputs and outcomes. They'll have to adjust spending and investment and balance AI work with human work. 
Not human in the loop as if we're an afterthought or nice to have. Real balance, where we still do the work and AI is a tool that augments us when correctly selected. That's the second wave of the AI bubble bursting, according to me. Let's talk about the first wave of the AI bubble bursting, which I also covered in chapter 11, Competitors Move Into Your Space, in my new book, Atomic Product Market Fit. We do X, but with AI. The first wave of AI bursts that I predict we'll see will be that all of the products, services, and startups offering "X, but with AI" will burst. If the main products, services, and experiences the company offers are layering AI onto product or service X or using AI for some aspect of product or service X, why spend money on that when you can pay for the AI or LLM tool itself, which can do all of these functions? For example, a startup that takes HR system data and gives you reports and info via AI, a startup that helps lawyers review and collaborate on contracts using AI, a startup that uses AI to help you analyze, synthesize, and report on qualitative research. On the surface, you might think, yeah, there are real markets for these. These companies will find product market fit because there will be real demand for what they do. But we aren't considering two possibilities. One, if the system that provides the data ever adds these AI capabilities, nobody will need your startup. If the HR system creates better data analysis or reports, your whole startup might crash. Fast loss of product market fit. Two, especially with a paid LLM account, I prefer Claude. I don't need to pay for that HR startup, legal startup, or research data startup. I can put my data into Claude and ask it to create exactly what I need done with that data. Using my own Claude account, I'll have more flexibility over what I can do with my data. I'll have more control over that data. 
The more companies that see my data, the more I've exposed my data to who knows what's in their terms and conditions. This means that "we're X, but with AI" startups get squeezed from both directions. This is the AI squeeze. Data sources or integration partners create their own features, satisfying the gap the startup addressed, and users handy with their favorite LLM will move the work into that. So I'll finish the article and then I'll go back and talk a little more about the AI squeeze. But won't somebody still want what these startups offer? If you threw money at Startup X because your HR system didn't do what you needed with your data, and suddenly your HR system now offers that, or you figure out how to do it in your job's approved LLM, you don't need that startup. It's a lot of friction to export something from one tool and bring it into another. If you're doing that anyway, why not bring it into your preferred LLM? But if you no longer have to do that because the HR system built the reporting you needed or built in AI to allow you to create custom reports, then the startup is solving a problem people no longer have. My own examples. 1. I stopped using QuickBooks to use Zoho Books, but only for last year. This year I did my taxes by exporting from all of my bank and credit sources and having Claude categorize and total everything for me. No special coding, just a chat. 2. Last year I dumped Replit when I realized I could make what I wanted in Claude. And not even in Claude Code. I'm not a dev. And not even Claude Opus. I did it in Claude Sonnet 4.x. I built my own private Android app to replace Google Voice so I can run my own VoIP numbers through Twilio. I manually FTP'd the code, I ran PuTTY, and I built it in Android Studio, all coached by Claude since I only knew FTP beforehand. 3. Last year I dumped Upheal and then Fathom when I realized that I could get better meeting summary notes from Claude. Now I have Fireflies transcribe the meeting. 
I give Claude the transcript and run my usual prompt to get notes in the style that I prefer. I could go cheaper and use the built-in transcription for Google Meet calls if I wanted to, but I'm happy with Fireflies' accuracy and features, which currently offer me more than Google Meet's AI features offer. Zoho Books does enough that it's unlikely to burst with an AI bubble. But for people like me using it for simple bookkeeping that'll move totals into an annual tax return, the value might not be there. Replit has Claude as one of the models you can use. Do we still need Replit when we can work directly in Claude? Miro uses Claude, but Miro has other value for me outside of AI prompts. While working on our AI transformation, we'll have to look closely at when we want to use the AI or LLM more directly and when it makes sense to license other tools. Claude is becoming a platform, and it might be the only one you need. Many roads lead to Claude. With chat-based models, artifact creation, the API, Claude Code, and other tools, it's truly becoming an ecosystem. It's a smart move. Learn to use Claude decently, and there are so many things you can replace. Maybe someone else wins or ties Claude, but from what I'm reading, Claude is the closest to having a true ecosystem that's ready for veteran experts and newbies to use efficiently. So now what? As chapter 11 of my book reminds you, you'll need to closely examine who your target audience is, what they have, what they need, room for improvement in their tasks, and where friction lives now. There is no product market fit or product and service market fit without fitting products and services to your market. If your market can take care of these tasks without you, these might not be problems and therefore nothing to solve. Invest in fresh and excellent research to learn what your target audience needs and where the opportunities are to deliver quality and value to them. 
In the near and distant future, it's unlikely to be "we do X, but with AI." So that was the first article I wanted to read, everybody. That was my prediction about the first wave of the AI bubble bursting. All of these startups, and sometimes larger companies, but mostly startups rushing to create something that implements AI, and usually the AI tools you're already using. Many of these tools are just you using ChatGPT through the API or Claude through the API. If you're a paid ChatGPT or Claude user, do you still need that other tool? I think it's an important question. And that's where the AI squeeze comes in. That's what I'm calling it. So you have these startups or tools or systems in the middle that are going to offer you something with some sort of AI functions or layer. And that's being squeezed in two ways. One, you're being squeezed by the people who are like, Why can't I just put that into Claude? Um, I've got a Claude account, I could just do that there. And then you've got the people who have even more savvy who might say, Why don't I just vibe code my own version of this? Why would I need that? And that's especially true for the startups that are vibe coded. If you can vibe code it, so can somebody else. And so that is where I think a lot of these um vibe coded uh businesses and new startups and uh companies like that are really going to feel a squeeze from people saying, Well, we don't need that, we'll just use Claude. Or, we don't need that, the other tool we were paying now does this, so you don't have a problem we can solve. Or, oh, that's vibe coded, then I can just vibe code that myself. And so I think it's going to really shift the way people will have to think about the products and services that they offer, and in that sense, their product market fit. Because if your market no longer has that problem, what are you solving? 
Anyway, before we get to the next article, which kind of goes a little further with some of my predictions and thoughts, these are all predictions. Any um thoughts or questions or disagreements or anything people want me to speak to about my prediction for the bubble bursting? I should always say send up an emoji if you're typing so I know to wait for you, but uh I'm not looking at LinkedIn uh emojis. I'm typing. Okay, thanks KB. We'll wait for KB's question, and um uh while we wait for KB's question, just a reminder that tomorrow's Wednesday, so I will be uh live at uh 7 o'clock in the evening, Italy time, doing Ask Me Anything, taking any questions that you have uh anywhere on life's and work's paths. So uh if you want, you can go to dcx.to and send in your questions early so that you can be uh among the first to get answers. Uh that is tomorrow. Let me see what else is also coming up. Ah yes, next Tuesday we're doing the "I'm a Culture Fit" article, so join for that as well. KB says, yeah, there's no moat if everybody does what you're attempting. In the short term, I can see having an offering that is like a suite of tools that would make it easier for people to find the tools faster. The hard thing for me is finding the right tools. I'm getting ad placements as recommendations that don't fit my needs. Yeah, I think that's interesting. I wonder if that already exists. I haven't even looked for it. Some sort of database that says, hey, here's like all the AI tools out there, and you can search for the type of thing you're trying to do. Uh, but then again, I could also go into Claude and say, can you search the web for an AI tool that does blah, blah, blah? But knowing me, I would just ask Claude, can we build this? You know, there's some things I'm not going to try to build myself, like Monday.com. I'm not going to try to recreate Monday.com. I'm not going to try to recreate PowerPoint. 
But there's a lot of stuff that I probably could, depending upon how I wanted to spend my time. So yeah, it's interesting. And especially since you're saying in the short term, since I think some of these are short-term solutions, but then when you have that bubble bursting, will we still need the short- or the long-term solutions? So, again, really good questions, good thoughts, good critical thinking, and we don't know the answer. We can just make a prediction, see what happens. Happy to be wrong. Um, so I'm gonna shift to the second uh, KB says, yeah, I usually have to search my need plus Reddit. In my experience, I'm getting really strange recommendations if I just rely on Claude or ChatGPT. That makes sense too. Um, so the next article I want to read, I published it around the same time, in March uh 2026, and I call the article The Levitt Loop: AI, Consumerism, and Quality. So it's another kind of model that I've half invented and also kind of a prediction as well. And I'm curious to hear what people think about this one. Um, so it says, I'd like to put a name on something I've been thinking about for years, and what better name than my own? The Levitt Loop. As automation and AI reduce the workforce across industries, the consumer base that all products and services, including the software being built with AI, depend on for revenue shrinks. So the consumer base is shrinking, fewer people trying to buy your stuff. Um, this will change consumer behavior toward quality and value as they decide more carefully how to spend the less money they have. So think of it like this. The trigger at the core: corporate or government workers are laid off, or voluntary workforce reductions and early retirement occur, or job creation is greatly cut. Effect number one, B2B and B2G market cannibalization. When there's a noticeable reduction in humans who work in corporations or governments, there are fewer people to buy or license what you offer. 
Amazon cuts 30,000 people. That's now 30,000 fewer licenses they might need for Mac machines or Microsoft Office or Miro or things like that. Figma announces AI tools that make companies think they don't need designers. Designers are laid off. Fewer people now need to license Figma. So loop number one, cascade effects. These companies might then downsize even more, further reducing the need for B2B and B2G tools, SaaS, etc. Whoa, we're selling way fewer SaaS and other licenses, budgets are down, fewer workers means fewer people need what we offer. AI agents don't need Word, Figma, Jira, Asana, Slack, etc. Sales is having a harder time and selling less. Looks like we'll need to cut more human workers to save money. Effect number two. I've got a diagram for this, so sorry to the people listening to the uh audio-only podcast, but I'm gonna go through this and then show the diagram. Effect number two, B2C market cannibalization and reduction. B2C markets shrink as layoffs and the declining job market leave people with less spending power. Amazon cuts 30,000 people. If they have trouble finding a new job, will those 30,000 people keep paying the same, or at all? That's tens of thousands of people with less available money for whatever you sell or offer. 124,000 people were laid off in 2025 as per layoffs.fyi, and they're likely to have less money to spend in 2026, potentially pushing them to stop doing business with multiple products, services, and companies to find cheaper or better alternatives. Loop number two, increasing consumer standards forces an increase in quality and value. When most of today's tech jobs are done by AI or with AI, it'll lead to more digital products and software being released faster and by more companies. Consumers will have more choice from large companies to startups that'll suddenly appear. And with less disposable income and spending power, they'll have higher standards. They'll select the company with the best products and services for the price. 
Your competitive advantage won't be how many AI servers you have or which model they run. It'll be how much quality and value you deliver to a shrinking audience with higher standards. Loop number two should, and I hope will, force companies to produce higher quality. You won't compete on speed, first to market, or vanity metrics. Everybody will be fast. Everybody will use AI, and maybe even the same AI tools, to create their products and services. How will you differentiate? What's your value proposition? You'll have to actually provide what people need, solve their problems well, and not create new problems. And when AI agents build easy migration tools to win the competition, the old friction-filled cancellation or migration won't hurt so much. You won't be able to keep people by making it hard to leave. Every company will compete not only against the big companies they're used to, but also new startups that can spring up in the blink of an eye. But more importantly, you now compete against any half-tech-savvy person with a paid Claude account. Automation and AI shifted the job market to being an employer's market. They hold all the cards. They can keep cutting and reducing salaries and perks. But the loop shifts consumerism from people are stuck with increasingly low quality, unethical corporate decisions, and enshittified experiences to consumers will have more choice from a wider marketplace of vendors delivering more, and faster. Consumers will be able to choose who is truly the best at delivering value for every unit of money spent. If you care about product market fit, then the product can't stay the same when the market shifts. We're just at the beginning of that shift. I'm probably way ahead of the curve, as usual. So now I've got on the screen my diagram for the Levitt Loop, and it's basically two loops with something that ties them together in the middle. So I'll read it out loud. 
The first loop says, uh, it's got kind of a circle like a cycle, and it says human workforce and/or wages greatly reduced. That leads to B2B and B2G cannibalization, and that leads to the target market shrinks and it's harder to make sales. So guess what? They lay off more people. So we cycle around again, human workforce and/or wages greatly reduced. But then from the human workforce and/or wages greatly reduced in the diagram, that is what brings us into the second loop. And that piece says consumers have less spending power, but more vendor choices. So the second loop, which is our cycle, says consumers demand more quality and value. So remember, if they've got less money to spend, they're gonna want the thing that does more, lasts longer, works better. That's gonna lead to vendors forced to compete on quality and value, not speed, because everybody will be fast. Right? Every company right now is struggling to be the fastest out there. Great. When everybody's fast, how do you stand out? How do you compete? And so the loop shows that if vendors are forced to compete on quality and value and not speed, that should lead to consumer standards rising even more. As we're getting better stuff, we're gonna go, okay, well, now I need the thing that's even better. And that cycles back around. Consumers demand more quality and more value. Companies have to meet or exceed those expectations or risk uh not being selected and losing that business. So what I'm calling the Levitt Loop is two loops with kind of a bridge between them. Um oh, Scotch and Glory says, I think the only people that will be able to afford quality will be wealthy people. Cheap AI slop will be for the masses, so companies will keep pushing it out. I see that, and I am not sure. 
I'm not totally convinced of that, but again, we're all just making predictions and anything can happen, and maybe we start there and end up going into a more quality world, or maybe we start with some quality and we end up where Scotch and Glory says. I mean, these are just all ideas and predictions. We don't know. But um, I mean, there's always a stratum of things that only the wealthy can afford. But I think if you want to keep selling to the people with less money, you've got to make the thing that they want to buy. I don't know, it'll be interesting to see. Anna Lucia says, I think many companies will go out of business because they won't be able to compete with the big players, and the availability of AI may be reduced soon if they change the pricing, as we were talking about in our Discord community. Yeah, with a lot of uh companies either changing their pricing, or keeping their pricing the same and starting to give people less access and more limitations, uh, that's changing the landscape as well. So getting back to the article to talk about uh the loops, because someone on LinkedIn was acting like, well, you're not saying anything special, that's just Keynesian economics. Yes and no. So the loop starts with something a bit Keynesian, but then shifts and expands on that in modern times. So Keynes essentially said that an economy runs on spending. When workers lose income, they spend less. When enough workers spend less, demand drops across the economy, which causes more job losses, which further reduces demand. It's a deflationary spiral that feeds on itself. So I've got a variation of that in my first loop. But then Keynes said that demand can be reignited through government spending. Governments create projects like improving infrastructure and they hire mountains of people. 
And the idea, Keynes's economic theories, or however you want to call them, then say, well, consumers will have more money to spend once again when they get hired into these government jobs to improve infrastructure, because that's what happened in parts of the 1900s in America. Sure, that can create jobs and it can give people more money to spend. But in 2026, many governments look less interested in spending money on what would help economies, citizens, and societies. We can't wait for governments to save us, especially if the government partially created the problem. I predict that when companies notice they compete on quality and no longer compete on "we're fast" because everybody is fast, this will lead to a small hiring bump, but not a resurgence or a renaissance. When companies realize that they compete mostly on quality, value, and service, and they realize how that all starts with better understanding your experience ecosystem and what I like to call all of their dimensions, some people will be rehired and paid decently. But I'm not predicting that a reinvestment in quality will lead to wild rehiring of humans. It will lead to mild appreciation for certain human roles that drive true strategy, quality, and outcomes. I'm thinking service design and human researchers who can research human audiences, the jobs that are the most upstream in the process. They provide the excellent data that feeds product and service strategy and development. These loops should feed hiring, but in the AI age, we also run the risk that less rehiring happens, and companies might just ask AI to guess why customers are unhappy and fix it. Won't AI building software make it faster, cheaper, and better? Well, how's that going so far? In 2026, AI is costing zillions more than it's making for any company, from data centers to the environment to the cost of energy and more. 
But let's say that AI and automation that replace humans make creating products and services faster, cheaper, and maybe better. Cool. Who will be left to buy the products, services, or subscriptions? Which workers will need what you sell? Who won't be able to afford this anymore? Which companies will use AI to build a replacement for what your company offers so they can stop paying you? What's your strategy to handle the Levitt Loop and come out a winner? Quality matters now, but will matter more. Additionally, using more AI to create more digital products and services in less time than before doesn't mean they're high quality or desirable to your target audiences. It just means that AI will make more, faster. Without understanding audiences, their tasks and needs, and then deliberately designing to solve those without creating new problems, we might more quickly create cycles of garbage, which I think is what Scotch and Glory is saying. Especially as job market salaries and spending abilities erode, people will have less money to spend on anything. This will, or at least it should, force companies to go back to competing on quality. People won't throw their money at anything and everything. They will be more aware of their needs and standards, and they will be more likely to shift to products and services that meet or exceed those at the right price. As we use more factory floor robots to manufacture objects, we can't reduce the quality. Consumers still expect high quality or at least value. Something is worth at least what they paid. For many years, the trend in digital products and services has been to deliver crappy, rushed, broken, maybe-we'll-fix-it-later stuff that we know now is garbage. We knew we were delivering something customers might not love or even need, but hey, we wanted to meet those deadlines and be fast. Once fast is solved, which company is the winner? I hope it's the company delivering quality and value. 
You won't be able to keep enshittifying forever. That'll be a luxury of the past. It'll be too easy for consumers, users, and customers of all types to downgrade or cancel and move to something they think is better. Better might mean better features and services. That's our macro-level atomic product market fit. Or better digital interactions, human interactions, trust, pricing, and other touchpoints. That's our micro-level atomic product market fit. We'll want the thing that doesn't break right away or in a year. We'll want the company that never causes us problems. And if they do, they fix it quickly and accurately. Standards will go up. Companies that want to compete will have to rise to meet or exceed standards, and standards will keep going up. While jobs and spending power spiral down, standards and being pickier about what we spend money on will spiral up. And I'm not disagreeing with Scotch and Glory. Companies will still try to give us the most garbage they can for whatever prices. But as we find we have less money to spend, I believe we're going to be pickier about what we buy, and companies will have to rise to that challenge or face the risk that we don't want to do business with them. And this is just simple economics. During economic downturns, consumers become more value conscious and quality focused. They buy less, but they buy differently. They're more likely to fix something than replace it. I just fixed my computer for 30 euros. I want to get years out of that thing. We might see less brand loyalty as consumers choose whoever satisfies or exceeds their quality and value standards. So take action now. What can your team or company do about any of this? If you're not ready to invest in delivering more quality and value to a customer base that will increasingly care about those, then at least start strategizing around that. What will you do when your customers downgrade or cancel because you no longer fit their needs at all the atomic levels? 
What will be your differentiators when every company uses AI, every company is fast, and every company can innovate or copy the innovator or disruptor in what might be days? Check out Atomic Product Market Fit, my new book, and refocus on how your products and services can fit your market. Sorry, the Levitt Loop didn't make it into the book, the curse of having to eventually publish. Before I continue the article, let's look at some of the comments. KB says, as the speed to build gets faster, it makes me think people will naturally become more reckless in risk taking. My guess is you're going to have higher quality offerings simultaneously, with insane system-breaking stuff happening. Oh, I definitely agree. Which then makes me wonder what'll happen faster: innovation or degradation. Yeah, or possibly both. And then the obvious solution from this problem will be to give more authority to AI to manage these decisions about what goes live and what doesn't. Scotch and Glory says, I also wonder if Anthropic will end up replacing Google's 20-year tech dominance. I see that too. I think that's extremely possible. Anna Lucia says, It seems the EU will force companies to have replaceable batteries for phones. Yeah, because again, when people have less money, um, they're more careful about what they spend money on, and they'd rather fix something than replace it. I remember seeing this in 2008 when a lot of my clients were eBay sellers, and everybody who had something that was disposable-income-based or luxurious went out of business. And all my clients who had stuff for fixing stuff did amazingly. And I think we're headed back there. Anna Lucia says, give me a new battery for my phone and I won't have to replace it anytime soon. Yeah, on my last phone, the um charging jack started going, so I went to a fix-it shop and they replaced it for 40 euros. So yeah, if we can fix these things and make them last longer, we will. So, side note: the Jevons paradox. 
I've seen people mentioning this on LinkedIn a little bit, and I wanted to bring this in as well because this is another uh economic theory being thrown around right now. William Stanley Jevons in 1865 observed that as steam engines became more fuel efficient, Britain didn't consume less coal as you would expect. It consumed dramatically more, because the more efficient use of coal made coal-powered industry viable at scales previously impossible. Efficiency created so much new demand that total consumption went up, not down. We sometimes hear that applied to AI and jobs. Software will be cheaper and faster to build via AI, leading to "software demand will increase" or "we'll start creating more software and shipping features," and that'll lead to "we'll need more developers. Jobs will come back or be created." I'm seeing this on LinkedIn. Are any of you seeing this on LinkedIn? I mostly debunked this already through the lens of UX research and design jobs. Tech jobs are unlikely to have a renaissance where it's a candidate's market and salaries go up and you've no trouble finding work. I also wrote about tech roles hoping to be the last person with a job. You can find all of these Medium articles on our Medium publication, R Before D. Coal efficiency made businesses want to use coal. AI use and its ever-promised efficiency make businesses want to use more AI. Jevons assumed the efficiency gain still required humans to scale, and increased coal demand needed miners, engineers, and factory workers, at least around the year 1870. But AI building more software doesn't proportionally need more humans to scale it. You might net lose workers as AI does more work and people are mostly there to check or fix the AI's work. Coal might be a more relevant analog than we think. Nowadays, with mine automation, needing more coal could open some human jobs, but it would increase the use of machines in mining. Humans might not get as much of the work as we imagine. 
And that is the Levitt Loop article, on a variety of economic things I keep seeing popping up on LinkedIn and my own take on them. I don't think Jevons paradox is going to bring back jobs; I think it's just going to make people want to use AI more. I just realized I'm not centered. So yeah, any thoughts? Let me know if you're typing so I know to wait for you. But yeah, anybody have any thoughts, disagreements, different predictions? These are just ideas and predictions. I could be completely wrong, and that would be fine. What do I know? KB says: like you said, every layoff that happens due to AI is a potential customer lost on the other end of things. So where does the demand come from for all of these offerings? Right, it's a great question. The more people are laid off, the fewer who need computers for their jobs and licenses for Microsoft Office or Miro or Jira or all of these business tools and SaaS systems. And then what are those companies going to do? They're gonna end up raising prices and laying people off. It's all standard economic theory. I didn't make up this part; I can't take any credit for it. This is all standard, obvious stuff, and these are questions I see a lot of people asking on LinkedIn. The downside is that a lot of people end up responding that the obvious solution here is universal basic income, where for all the people who aren't working, the government is just going to pay you to exist and to live. I think that sounds nice, and I'm not against the idea, but I can't think of a government that would do that right now. Because if there were a government that would do that, and we've been hearing about universal basic income for what feels like decades, then where is it? It keeps getting promised: it's coming, it's gotta come, that's gotta be the solution. No, it doesn't.
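The layoffs-feed-layoffs chain described above (fewer employed people, fewer software seats, vendors raise prices, customers cut more staff) can be sketched as a toy loop. All the rates and dollar figures here are hypothetical, chosen only to show the direction of the spiral.

```python
# Toy model of the first Levitt loop: each round of layoffs removes SaaS
# seats; vendors raise prices to compensate, squeezing customers into the
# next round of cuts. Since 0.95 * 1.03 < 1, seat revenue shrinks anyway.

def simulate_layoff_loop(employed=1_000_000, price_per_seat=100.0, rounds=4):
    history = []
    for _ in range(rounds):
        employed = int(employed * 0.95)   # 5% layoffs shrink workforce and seats
        price_per_seat *= 1.03            # vendors raise prices on the survivors
        history.append((employed, employed * price_per_seat))
    return history

for workers, revenue in simulate_layoff_loop():
    print(f"employed={workers:,}  seat revenue=${revenue:,.0f}")
```

The point of the sketch: price hikes can't outrun a shrinking customer base unless the hike exceeds the layoff rate, which just accelerates the churn.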
Does anybody think America is going to pay money to over 300 million adults every month and year just to let them live? I don't see that happening. It would be nice, and I'm for it, but I don't see it happening. Anna Lucia says: sounds nice, but it'll never happen. Where does the money come from? From the AI companies? From billionaires? KB says: agreed, it's a slow siphoning of resources and an empty promise. Yeah, if anybody should be paying universal basic income, you would think it would be OpenAI and Anthropic and the companies that essentially took everybody's jobs, and the robot factories that took the physical jobs. They would have the money, hypothetically, to pay people to exist. But yeah, I feel like we're running very fast at a brick wall. KB says: also, if the solution is to get mass adoption of AI, you're better off providing tools that are reckless. Maybe that's a bit conspiratorial, but it's an angle to consider. Yeah, it's interesting. I think you're saying, and maybe you can clarify, that if you have an AI tool that isn't great, it keeps people in jobs, like we're seeing right now in 2026, where some companies have had to... oh, hi Darren, thanks for saying hi. Hope you're well. Am I supposed to call you doctor? I think I saw something on LinkedIn. Did you get the doctorate? If so, congratulations. Yeah, there are people I've seen writing about poisoning the AI, poisoning the training: signing up to these jobs that claim to train AI and then training it badly. I think we're gonna see some sabotage in the coming year or so. I've also noticed that a lot of the companies trying to find people to train AI started out at $20 an hour, which is not great pay, especially to sell out your industry and end your own job and other people's jobs. But now I've noticed that going up.
Then I saw it was $40 an hour to train AI, then 60, then 80. Now I'm seeing 100, 110, 120. Obviously, they're having trouble finding people who are really good at something who want to have AI trained on them and essentially sell out themselves and other people who do their work, just so they can make some money. We've talked about that before on this channel, because some random person messaged me on LinkedIn and was trying to get me to sign up with some train-the-AI company, and I was like, there's no way I would sell out my work and people in my profession for any money, let alone a small amount of money. And the person wrote back and said, but I've made five thousand dollars so far. And I said, you're exceedingly underpaid. You've accepted five thousand dollars to sell out your own job? Oh no, there's a bug in my water. Boo. It's that time of year. No, honey, that doesn't mean you have to get me a cup of water. I've got this bottle; I'll adapt. I can drink out of the bottle and we can just recycle it. It's that time of year. Everybody knows that time of year, where I bring out my fly swatter because they're just flying all around my lights. Remind me to not drink the cup. Fresh cup. LinkedIn user, which might be Darren, says: did you see or already address how the surviving major bootcamp company is starting an AI education marketing campaign, chock full of the same old empty promises? No, I didn't see that, but I am continually seeing the crap, especially on LinkedIn, where people are saying you better learn AI or you're gonna fall behind. And I keep going: learn AI what? Like, what? Do you want me to build my own AI from scratch? I can't compete against Anthropic. Do you want me to become a developer? Well, Anthropic is the developer now; Claude Code can code better than I can. Do you want me to learn how to talk to AI? No, I've been doing that for years. I'm pretty damn good at it.
Do you want me to write Claude Skills? I already did. You can now buy my first Claude Skill on my DeltaCX.academy website. Go buy my Claude Skill. I wonder if that's on the dcx.to site; I gotta make sure that's linked. So I've got my first skill. It still requires humans; it does not replace you, it's just going to help you do parts of your job faster while maintaining quality. I'm working on my second Claude Skill. So this whole "you gotta learn AI" doesn't even make sense. And I keep asking people, what about AI do you think people need to learn? And there magically seems to be no answer. Nobody seems to know. We can't all become developers, we can't all become data scientists, especially while Claude makes a pretty darn good developer. Not perfect, still needs humans, but getting there; every year it's getting better and better. Claude makes a decent junior developer. Claude makes a decent junior data scientist. Why would you go out and say "I'm gonna learn AI"? What does that even mean? Somebody in the comments, tell me: what does it mean to say "I'm gonna learn AI"? I don't even know. It is Darren: "I'm just logging into LinkedIn on my phone." Oh, PhD! Congrats, Darren. Yeah, so again, it's still all that money grab. The boot camps are always going for the money grab. They can't seem to win you over by telling you to get into UX or product management or coding or all of these things that AI is starting to cut into. A few weeks ago, we did a show right here, and I talked about how I was experimenting with Claude Chat to make prototypes. And now we have Claude Design, but I was doing it weeks ago in Claude Chat, before Claude Design came out. Darren says: currently going through an agentic AI intensive through Harvard Data Science. Can't wait to dive into your AI resources. Thank you; please do. Please check out my new Claude Skill.
I think it's groovy, and I'm working on another one. But yeah, an agentic AI intensive. I'd love to know even what that's teaching. People keep saying, oh, AI agents, 10x yourself. And then I go to Claude and I ask, what would I use an AI agent for? And Claude's like, are you using Claude Code? And I say no. And it goes, well, then I'm not sure. You know, are there repeated things you need to run? Do you want to install Claude Cowork on your computer? I said, I don't want Claude Cowork on my computer. I don't want you in my browser, I don't want you on my computer. And Claude's like, yeah, I agree, I probably shouldn't be in either of those places. Okay, thanks. And people say, well, look at what you can automate. Yeah, I've been automating crap for years. Who had Macro Express on Windows like 20 years ago? That was the original automation. Who remembers Macro Express? I've been using Zapier for many years; paid customer of Zapier. I have automations that would make you fall out of your chair. I have stuff that happens while I sleep. But I don't need, I don't want, Claude touching my stuff. Michael says: it sometimes feels like the people that generically say you need to learn AI are the ones that are not sure how to use it. Yeah, that is such a fantastic point. Thank you so much for saying that. Because, yeah. I remember when I was poking around LinkedIn a few weeks ago, there was some guy with like 700,000 followers telling you he's gonna teach you AI, whatever that means. And he had posted something about how he's gonna teach you these Claude things, and one of his comments was about why it's really good to use. I took a screenshot of it, and I don't remember exactly what he said.
I would have to go look up my screenshot, but he was saying why it's so good to use, and he made up a name for a Claude feature that has a name. And I was like: Claude, help me here; keep me from jumping out the window. The guy with 700,000 followers on LinkedIn who's gonna teach you AI doesn't know the name of the feature and made up his own, and I can't get people to sign up for my course. Like, what surreal charlatan-based world am I in? And how long do I have to stay? Michael says: BAMO, ha. Remember canned-response programs? A few keystrokes and bam, you got a paragraph. KB says... "I still..." uh, that's Anna Lucia's comment. KB says: I think they mean learn to prompt, but everything is going towards natural language processing. So I guess they mean learn to communicate in general, develop critical thinking (doubt it), become a better person. It'll all tie back to something spiritual, and that'll be the selling point. That's my litmus test for how deep down the rabbit hole we are. Yeah. Remember a few years ago, when everyone said become a prompt engineer, and nobody knew what the hell to do with themselves, and some people ran out and got certificates? Now you don't even need prompt engineering. I remember a few years ago, people were on LinkedIn going, here's how to work with ChatGPT: tell it, "pretend you are a very experienced professor, and you know everything about this thing, and you will write in this style." And I was like, what are these people doing? I would just go to Claude and go, hey, can you help me with this thing? And Claude would be like, yeah, I'm so on top of that. How's this? And I'd go, yeah, let's adjust it a little bit. And Claude goes, yeah, how about this? And then I'd go back to LinkedIn and people are going, you must learn these prompts. Prompt for the style, prompt for the mood. And I'm like, has no one seen Claude? And then I learned later not a lot of people had seen Claude.
Anna Lucia says: I still remember when ChatGPT came out, and like one week later I had a person saying they were an AI expert, no, they had nothing to do with it, and offering services and courses about it. Yeah, we've even talked about that on this channel. I think it was last June, so June 2025, I did a live stream based on a presentation I'd been writing for two months, about how AI will not lead to a UX resurgence or renaissance, which Jared Spool and other people were claiming. Yeah, prompt engineering hasn't aged well. Jared Spool and others were claiming last year that AI was going to cause a UX resurgence and renaissance, because they sell hope. But then it also came out all over LinkedIn that Jared Spool didn't use AI: he didn't like it, he didn't use it, he didn't see much use for it, and he claimed we've already seen its best days. That came out, and then the next thing you know, he's got a course on Maven telling you about AI and UX. Based on what? You don't use AI. I didn't even think you were still doing UX work. So what is this based on? I don't get it. I just don't get it. I do my best to be honest and offer interesting products and services and courses and experiences, and everybody wants Jared Spool's AI course. He probably had an AI write it. What does the man know about AI? And can he prove he's using it in the work and refining his process over time? Anna Lucia says: KB, today I saw an interaction between two people on LinkedIn saying that AI is making us closer to God. And I was like, what? I don't even know what to say. Yeah. I think they're claiming that's AI-induced psychosis: when you start believing that AI is God, or leading you down a spiritual path, or bringing you to enlightenment. They're calling that AI psychosis. Michael is in the process of typing something: "my definitions."
Tell us, Michael. By the way, especially for Americans, other than Scotch and Glory: guess how much this two-liter bottle of water costs where I live. The winner gets nothing, but we can create a prize if you would like one. Yeah, so in the end, I think there's a lot for us to think about. "I'm not an American, I can make a guess." Go ahead. There's a lot for us to think about in the interplay of layoffs and AI and quality and lack of quality, and where enshittification is gonna be allowed to continue, and where it's not gonna fly anymore. I can think of a whole bunch of companies I've left in the last few years because I feel like my standards have gone up. Anna Lucia says 60 euro cents. It's 20 euro cents, or I think they might have raised it to 23; it was 17 when I first moved here. KB says: interesting, yeah. The more you see that become mainstream, the more we're in trouble. This also reminds me of how we talked before about how UX became like a touchy-feely spiritual thing. I think you'll see AI follow the same trajectory, and then they won't sell you on prompting; they will have behavioral courses on learning how to trust AI. I wouldn't be surprised if that already exists. So, some of you know I got my ACC from the ICF as a coach, doing life and personal-development coaching. The ICF put out a whole new code of ethics last year, but now they're putting out videos about, like, what's AI's role in coaching. And my thought is: summarizing my notes, and that's it. I do not want AI to be anybody's life coach. I do not want human interactions for personal development to be replaced by asking AI these things. Look, I know people will do it. I get it. Sometimes you're sad and scared and alone, and you can't poke a friend, so you ask Claude. Fine; I'm not gonna poop on you for that, but it can't be the only thing you're going to.
There's still going to be a need for therapy, for mental-health professionals, for coaches. Okay, bye. Thanks for coming. That was to Darren. Anna Lucia says: I don't trust AI to connect to my spirit or soul or whatever you call it. It's the same as trusting one of those fake gurus saying BS. Yeah, there's so much slop out there now. Oh, Michael, did we get your definitions? Are you still typing? Did that come through? It didn't come through on my screen. I just got "my definitions," and I don't know if you sent emojis or anything; we couldn't see it. Yeah, anyway, we've been at this for an hour. Thanks to everybody for hanging out. Oh, let's read what Michael wrote, finally. Michael says: an AI expert is someone that knows how to use many of the different tools. For example, seeing the magic of how fast AI can dissect multiple XML files, extracting data from different file sources and compiling it for you, saving you a lot of time. AI specialists are people that are not developers, having AI use APIs to connect to a variety of different servers to make a service. Developers I don't see as AI specialists, but as programmers already amazing at their jobs, knowing how to take full advantage of AI as a great tool in code checking. Yeah, I'm doing those first two things, I would say. So, do I get to call myself an expert? I don't know. That's the weird part. I feel like I know so much more than so many other people, but then there are so many other people who know so much more than I do. So who is the expert? And would I be more of an expert over time, or would I be more of an expert if I took a course or a certificate or a degree? I don't even know. So many jobs say, oh, well, you have to have been working in AI for years. No, I don't have that. So does that mean it's just another thing not worth going to school for, because they're not gonna give me a way in anyway?
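As a side note, the multi-file XML extraction Michael describes is the kind of chore that's also doable in a few lines of plain Python. A minimal sketch, assuming a made-up `<record name="..." value="..."/>` schema (the schema and file contents are hypothetical, just to show the shape of the task):

```python
# Extract (name, value) pairs from many XML sources and compile them.
# The <record> schema here is invented purely for illustration.
import xml.etree.ElementTree as ET
from io import StringIO

def extract_records(xml_sources):
    compiled = {}
    for src in xml_sources:            # src: a file path or file-like object
        root = ET.parse(src).getroot()
        for rec in root.iter("record"):
            compiled[rec.get("name")] = rec.get("value")
    return compiled

docs = [StringIO('<data><record name="a" value="1"/></data>'),
        StringIO('<data><record name="b" value="2"/></data>')]
print(extract_records(docs))  # {'a': '1', 'b': '2'}
```

The "magic" of AI here is mostly that it writes and runs glue like this for you on arbitrary schemas, rather than doing anything a script couldn't.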
It's also hard to predict, which is why I continue recommending my Life After Tech book for people who want to figure out what their next career move might be. I'm doing another Life After Tech cohort, if people want to get some personal feedback and coaching on other career and work ideas. We've got two people signed up already, which means the cohort is happening. All right. Oh, KB says: I feel like the subtext of "learn to use AI" is "accept that you will be replaced and prepare to be okay with it." I think that's the real intent behind the message. I don't know. I definitely see the overt messages on LinkedIn, like "prepare to be replaced by AI," or "prepare to be replaced by someone who uses it better than you do, so you better learn it," whatever that means. It's magically never said. I think "learn to use AI" is just gaslighting. Whenever I see that, I always say: what do you want people to learn? What should people be able to do? What should they know? What should they study that they don't know now? People don't even know. You just keep seeing "learn to use AI," or "AI will replace you," or "someone who uses it better than you do will replace you." Tell me what to learn. You're dangling the carrot that I'll be able to save my own job, if only I learn a thing you won't tell me what it is. Well, then that's just a game. That's just a shitty game, and I don't want to play. So I'm not going to play. KB says: yeah, the only thing you can learn in that situation is acceptance. Anna Lucia says: I'm still a bit wary of "learn to use AI," because we've been through so many of those "learn this" waves that ended up in nothing, like blockchain and NFTs. Totally agree. And someone posted something to the Discord community today: a screenshot that included someone saying the design thinking process was too slow.
And I was like, the whole point of design thinking was to take a process that took weeks and make it take days, but now that's too slow. So does anybody care about quality, or do we only care about speed? Magically, we care about quality when it's a haircut or a parachute or our children's education or the safety of our car; then we don't want anybody to rush. But when it's digital and software: ah, you're all too slow, you dinosaurs. Well, as I keep saying, I'm working faster and more efficiently than I ever have, but people don't want to hire me because I'm older than they are. Okay. I'm the best worker I've ever been; I've never been better than now. All right, let's play show-ending music and wrap up for today. Thanks again to everybody for joining. If you're watching later on YouTube, please press like, comment, and hype to help us in the algorithm. If you're catching it on the audio-only podcast, join our community; we'd love to hear from you. Come agree and disagree and give us some other angles and perspectives. Everybody have a super rest of your day or night, and I'll be back tomorrow with the Ask Me Anything stream. Thanks. Bye.