Delta CX Hive
Supporting wherever you are headed on life's and work's paths. Episodes tend to focus on an area of work advice or personal improvement. These are curated from our "Take Action Tuesday" livestreams and other webinars, conversations, and events.
Join our free Discord community [dcx.to/discord] or free Patreon [patreon.com/deltacxhive]. Catch our live streams on Twitch [twitch.tv/deltacxhive].
Check https://dcx.to/events for upcoming shows and webinars, which are typically streamed to Twitch, YouTube, and LinkedIn. They're then archived on LinkedIn and pushed to podcast platforms.
Ep 291: "Human In The Loop" vs AI Running Wild
We'll read and discuss my new article about how the "human in the loop" isn't enough. https://medium.com/r-before-d/human-in-the-loop-vs-ai-running-wild-265cb3e38368
Holy cats, it's here! You can now buy my new book in most formats. https://AtomicPMF.com
Check out my upcoming free webinar about better product-market fit and using AI thoughtfully in UX qual research! I'm also running a Life After Tech cohort to help you brainstorm next career moves or work areas. https://atomicpmf.com/courses
Join our free community. All links at https://dcx.to
All episodes are marked "explicit" since sometimes we use swear words.
Welcome, low-ego action heroes and phoenixes. I'm Debbie Levitt from Delta CX, and today is episode 291: Human in the Loop versus AI Running Wild. Welcome to the people listening audio-only on the podcast. If you're watching live, you might be wondering why I'm streaming at a strange time. We're going to be experimenting with different times, so let me know which times you prefer. Right now it is 3 p.m. Italy time.

So let's jump right in. The Human in the Loop story came from a viral post that some of you may have seen, so I'm going to share my article about it. As always, my articles are on Medium without a paywall. If you want to read them, they're usually easy to find. My publication is rbefored.com, for researchbeforedevelopment.com.

A quick note: I also have some courses coming up, so check them out at dcx.to/courses. We've got the course on using AI in qualitative research, with a little bit about my Atomic Product-Market Fit model mixed in as one course. And we've got another Life After Tech cohort coming up. So if you would like some personal feedback and coaching as you start to plan and think about other work areas or careers: you don't have to quit tech, you don't have to leave your job, but let's think about the future. So check those out at dcx.to/courses.

So I'm going to share my screen and read this article, and I'm happy to turn it into a discussion or take any questions if people have them. We might have nobody here because this is a weird time for me to be streaming, but it's an experiment. Why not?

Okay, so this article is from late February 2026. I'm talking to you in late March 2026, and it's called Human in the Loop versus AI Running Wild. You probably saw this post going viral over the last few weeks. And I'll read you the post. "We just found out that our AI has been making up analytics data for three months, and I'm gonna throw up.
So we've been using an AI agent since November to answer leadership questions about metrics. It seemed amazing at first. Fast answers, detailed explanations, everyone loved it. I just found out it's been hallucinating numbers this whole time. Our VP of Sales made territory decisions based on data that didn't exist. Our CFO showed the board a deck with fake insights. The AI was just inventing plausible-sounding percentages. I only caught it by accident when someone asked me to double check something. I started digging and, holy shit, it's bad. The numbers were sometimes from the wrong time periods, sometimes mixed up products, and sometimes just completely made up. But it explained everything so confidently that nobody questioned it. Now we have to review every Q4 decision. Legal is involved, people might get fired. The worst part? I raised concerns about needing validation in November and got told I was slowing down innovation. Tell me someone else has dealt with this. How do you even fix something like this? I'm panicking." End of post.

Plenty of people are calling for more "human in the loop," which is defined as, quote, "a system where human input is actively involved in the operation and decision-making process of automated systems, particularly in artificial intelligence and machine learning. This approach enhances the accuracy and reliability of models by allowing humans to provide feedback, correct errors, and ensure ethical decision making," end quote.

What I notice in that story is that plenty of humans were involved. They made decisions. We need to do more than just say "human in the loop." We need more critical thinking brought into our uses of AI. The question is, where do we put the human in the loop? And what do they do while in that loop? Humans were in the loop, but they weren't doing what was needed at mission-critical moments.
Looking back at the story, quote, "Our VP of Sales made territory decisions based on data that didn't exist. Our CFO showed the board a deck with fake insights," end quote. The VP of Sales probably has people working under them who might have used the AI and generated the data. They should have double checked the data. We should catch data that didn't exist very early. The CFO had slides with fake insights. Someone who works under the CFO worked with AI and generated the data, the insights, the slides, etc.

The gaslighting is off the charts. Quote, "The worst part? I raised concerns about needing validation in November and got told that I was slowing down innovation," end quote. Many of us have heard similar things. If we don't use more AI, we're dinosaurs who hate progress. We might not fit at a fast-moving company using the latest tools. How wrong could it be? AI is really smart. It's our data, so it must be right. AI sounded confident, but don't waste time checking it.

"You're slowing down innovation" is a special kind of gaslighting. You're not just slow; you are stopping a probably international corporation from inventing new things humans have never seen before by wanting to double check some data.

Human in the loop: a task perspective. The task isn't "get outputs from AI and push them to the next person in the assembly line." The task is "use AI ethically and thoughtfully to do good work faster." If you are slower or less accurate, you have the wrong AI tool, or you're not using it correctly. It's a good reminder to document all of the decisions made on a project, or decisions someone else made about your time or work. When someone wants to know why you didn't check the data, you'll have a Teams or email screenshot of someone telling you you're blocking innovation. If it's not in writing, put it in writing. After that discussion, email your manager that you are confirming that they don't want you to double check the data the AI is working with.
Hopefully they respond, but even if they don't, you have a paper trail of an email sent after a discussion.

The root cause is bad data and data that didn't exist. Everything else is a symptom or a consequence. Once you have bad data, incorrect data, or made-up data in your dataset, you're in trouble. What can you do earlier to double check this? One, you can ask your AI to double check the data. Sometimes, on a second pass, a decent AI will notice that it invented information or transposed information incorrectly. AI can be decent at double checking, but you have to ask. Claudes have told me that they sometimes can't seem to stop themselves from hallucinating, but if you ask them to fact-check or check the sources, they'll stop, do that, and then admit mistakes.

You can still catch this early. Double check the data and its source. Did AI generate an insight or suggestion? Double check that. The bigger the decision, the more you check. Are we making decisions that affect products, services, corporate strategy, territories, etc.? Then the time spent checking AI is an investment, not a waste. If nothing else, remember that you don't want to end up as a story like this one. This is avoidable. This isn't a mistake; this is a series of bad decisions, starting with gaslighting the worker who wanted to check the data months ago.

It might only take a few minutes for AI or humans to run some fact checks. I recently had a Claude instance work on some qual research analysis. It gave me a bunch of quotes from my transcripts. I didn't remember someone saying they would pay 30 to 50 per month for the service. I asked it to search the transcripts and find who said anything close to this. It had to admit that nobody said that. I can then take it out of the analysis and make sure it's not in the report. That took minutes.
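That transcript fact-check, searching the source text for anything close to an AI-claimed quote, can even be automated. Here is a minimal sketch using only Python's standard library; the function name, threshold, and sample text are illustrative assumptions, not tooling mentioned in the episode.

```python
# Hypothetical sketch: check whether an AI-claimed quote appears
# (approximately) anywhere in a source transcript. An empty result
# means nobody said anything close, so the quote needs to come out
# of the analysis. Threshold and names are illustrative assumptions.
from difflib import SequenceMatcher


def find_close_passages(claimed_quote: str, transcript: str,
                        threshold: float = 0.6) -> list[tuple[float, str]]:
    """Slide a window the size of the claimed quote across the transcript
    and return (similarity, passage) pairs that clear the threshold."""
    words = transcript.split()
    window = len(claimed_quote.split())
    hits = []
    for i in range(max(1, len(words) - window + 1)):
        passage = " ".join(words[i:i + window])
        score = SequenceMatcher(None, claimed_quote.lower(),
                                passage.lower()).ratio()
        if score >= threshold:
            hits.append((round(score, 2), passage))
    # Best matches first; an empty list flags a likely hallucinated quote.
    return sorted(hits, reverse=True)


transcript = "I might pay thirty to fifty per month if it saved me real time."
print(find_close_passages("pay thirty to fifty per month", transcript))
```

This only catches verbatim or near-verbatim fabrications; paraphrased hallucinations still need a human (or a second AI pass) reading the surrounding context.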
If Claude had said, "Yes, Participant 4 said that," and I still didn't remember it, I could take a few minutes to go into my transcript and search for "30" or any words that would help me find whether this was actually said or not.

We must stop acting like every minute we invest in quality is a minute taken away from innovation. Our company might not be innovating when we make sales territory decisions. We might not be innovating much or at all, but we do make important decisions all day long. Decisions that affect customers and users. Decisions that affect our coworkers and teams. When we care about important decisions, we double check them. Period. Even if there is no AI involved, we should double check data and insights before making important decisions. End of article.

Okay, Anna Lucia's here, and she's left a couple of comments. She said, "I also heard that phrase about slowing down innovation when we were talking about regulations. The EU wants to regulate things, and many companies, mostly American ones, say that's what's wrong with the EU. Well, maybe some things, but I don't think all regulations are wrong to begin with." Anna Lucia also says, "Be careful with asking AI to double check. Sometimes they hallucinate again. They say you're right and make up something else again." Yes, we have to be more than a human in a loop. We have to be a very active participant and thought partner, not just a button presser.

But going back to your comment about the EU and how some people say regulations will block innovation: what I find is that if your innovation would be blocked by decent laws around privacy or security or use of data, then what the hell are you innovating? Or what the heck do you think you're innovating? Because it sounds like you are trying to play in the sandbox of bad data, bad privacy, bad ethics, and, dot dot dot, bad.
So if you think innovation is slowed down by regulations, it doesn't have to be, but maybe unethical innovation, or unethical products and services, would be slowed down by laws and regulations. I think I'm a very innovative person, and I am not slowed down by laws and regulations, because I have to work with them and around them. They don't block me. They're there for reasons.

Anna Lucia says, "To be honest, I still think AI is being too widely implemented when it's not reliable. As you and other people said before, if it was a coworker making those mistakes, they would be fired. But we are being asked to be patient about AI? What about no?" Yeah, there's a lot of things that get rushed out to people in the names of all kinds of things: innovation, progress, technology, the future. And they're not always good for humans or work or the environment or children or whatever it might be. There's a lot of "let's break it and fix it later," which I never agreed with. I never thought that was cool, I never thought that was smart, and I never thought that was strategic.

So I think we have to realize that if a human did what a lot of AI is doing, it would be fired. It would be put on a performance improvement plan and then fail it. And we should take a second look at AI. What I find is that at a lot of companies, there's no accountability anymore. There's no accountability for people, there's no accountability for decisions. So why would there be accountability for AI? There has to be a larger call for, and return to, actual accountability. Otherwise, we hold nothing responsible for bad work, bad decisions, bad research, bad data, bad manipulations. There's a lot of bad out there, and I think I said a few books ago that people are going to continue cycles of bad everything when they can't get in trouble for it.
If you could lose your job for making bad decisions, if you could lose your job for losing the company money and making customers leave, everybody would try harder. Everybody would be more careful, and we'd have to be more careful with AI and our AI tools and other technologies as well. But when you can create all kinds of chaos and be totally toxic and destructive and not be held responsible, everybody learns that it's okay to do that.

Anna Lucia says, "You know I'm not against AI, but it's not reliable enough to substitute a person with it. Using it for some tasks, okay, that can be doable. Using it instead of humans, that's too much. AI seems to have the maturity of a kid. Would you trust a kid to do all of your tasks?" Yeah, and again, we're already seeing some of that walked back, where companies fired or laid off a lot of people and said AI would do more of the work. Now we're starting to see server crashes and code being erased and serious downtime here in March 2026, whenever you're watching this. And now they're going, "Oh crap, I guess we'd better give our workers the time to double check what AI is doing." And that, to me, is minimum viable AI. You can't just let it run wild. You have to be checking it: you have to check it while you're working with it, and you have to check it after you've asked it to do something. There has to be constant checking and critical thinking and quality checks.

But again, if we're at companies with no or low standards for quality, or no or low standards for accountability, then all kinds of bad decisions and catastrophes can happen. These servers went down, this code got erased. Did anyone get fired for that? I feel like we haven't heard about someone getting fired, a leader being fired, for messing everything up. I feel like we haven't heard about that in years. I don't celebrate when most people lose their jobs, but I'm okay when bad people doing a bad job lose their jobs.
You are in the wrong place and you have proven it through poor performance and poor outcomes. So there you go.

I'm going to try to keep some of these Take Action Tuesdays a little bit shorter; that's about 17 minutes. So, unless there are any other questions or related discussion topics, I will wind down for today. I'll be around tomorrow at the usual Ask Me Anything session, and that would be that.

So everybody, please talk to your companies about the human not just being in the loop, but the human being extremely active and participatory, thinking critically, checking and double checking. And as Anna Lucia says, decide which tasks you're going to give AI. If you're quicker than AI, why would you give it the task? Or if you're burning too much time correcting it, why are you giving AI the task? And yes, these are things I've been saying for years, so I certainly agree. So again, let's not lose sight of good decision making, good strategy, and good critical thinking just because we have a shiny new thing called AI that sometimes does some things well.

Okay, those are my thoughts for today. Thanks to everybody who's joining live, watching this later, or catching it as our audio-only podcast. And don't forget to check out my courses. I've got a bunch of things coming here in April and May that are already planned and live and ready to go, both on the AI and product-market fit side of my world and on the Life After Tech, helping you think about new work areas, side of my world. So dcx.to slash courses. Everybody, check those out. Thanks, and have a great rest of your day.