The AI Replacement Illusion | Data Flavors #13
Data Flavors is Back (And Different): Problem Space: When AI Promises Meet Human Realities
Happy Friday, Data Shokunin-deshi! Welcome to Data Flavors #13.
The AI revolution is reshaping the workplace, often in ways we don't expect. This issue is a deep dive into a growing tension: when efficiency gains don't equate to true value, leading to unforeseen consequences for jobs and trust.
This post is open to all. If you haven’t upgraded yet, now’s the perfect time: as a Paid Subscriber, you’ll gain full access to all my content.
What to Expect in This Edition
3 short stories that reflect the AI shakeup
Bold actions to future-proof your role
Trends shaping today’s job market
A reflection prompt and resources for your next move
But first, this newsletter has evolved.
After 102 days of rethinking in the face of the data world’s rapid evolution, Data Flavors is transforming. My goal is to provide a much-needed informed perspective, not just more tutorials.
We're shifting to a monthly, leadership-focused publication that offers deeply researched, opinion-driven explorations of a single business problem per issue. This new format prioritizes thought over consumption, delivering honest takes and actionable insights within a 1,200-word limit to foster more conversation. If the original multi-flavor approach was more your style, consider the newsletter ‘Alternative Data Weekly’.
Also, just reply “Here” if you’re still reading. It means a lot, and helps me know this work matters.
Welcome back to Data Flavors!
Learning in public, one data problem at a time
Problem Space: When AI Promises Meet Human Realities
Last week, two LinkedIn posts caught my attention and crystallized something I've been wrestling with.
First, Jack Godau1 shared how Klarna quietly rolled back their "Everything AI" approach, returning to human customer service agents after replacing 700 people with chatbots. What struck me wasn't just the rollback—it was how little noise this made compared to their original bold announcement.
Then Joe DeWulf posted2 about a conversation with a lawyer who bills $1,100/hour. The lawyer said he wasn't worried about AI because "with AI, our firm is now 60% more efficient but billing the same amount on projects."
Meanwhile, a friend is working on a "secret project" to replace an entire 19-person data team with just 5 people: 2 engineers, 2 analysts, and a product manager. The plan? Push analysis work to data consumers who'll explore data themselves using language models on top of SQL databases.
Three scenarios, same underlying tension: Are we rushing toward AI-driven workforce changes faster than we understand their implications for value, ethics, and sustainability?
AI isn't just coming for repetitive jobs. It's reshaping what “safe” even means.
In just the past year:
A top-performing data director friend of mine was laid off, not due to performance, but because the company restructured based on AI capabilities.
My own job search has been a struggle; many companies simply aren't hiring.
A brilliant marketer I know landed a new role only after more than a year of unemployment.
These are not isolated cases. They're a sign of a shift.
The Hard Truth About AI & Jobs (What the Numbers Say)
Artificial intelligence is rapidly disrupting jobs. Companies are aggressively restructuring:
Meta3: Cut ~5% of staff (3,600 people), citing AI-driven efficiency.
UPS4: Plans to lay off 20,000 workers in 2025, driven by automation.
Intuit4: Laid off 1,800 employees to reinvest in AI.
Cisco4: Cut 5,900 roles, shifting to AI-focused business units.
Klarna4: Replaced 1,000 roles with AI, reducing workforce by 10%.
Global analysts predict a massive shift: Goldman Sachs5 estimates AI could affect up to 300 million jobs globally, while the World Economic Forum projects 85 million jobs displaced and 97 million created, but not for the same people. No surprise, then, that 52% of U.S. workers worry AI threatens their job security (Pew Research).
It’s not just headlines—it’s personal.
The AI Paradox: Productivity ≠ Value
The recent post by Joe DeWulf stuck with me. Joe’s reaction: “This feels ethically wrong.” I couldn’t agree more. It's a perfect snapshot of AI’s paradox.
On the surface, this sounds like a win: better tools, faster output, same revenue. But scratch deeper, and it raises uncomfortable questions. At a 60% efficiency gain, a project that once took ten billable hours now takes roughly six, yet the client still pays for ten. Who benefits from this new efficiency? The client sees no savings. The lawyer earns more for less effort. The system rewards productivity, but only for the provider.
It’s not just about fairness. It’s about TRUST.
When AI quietly slips into services without changing the cost, it erodes confidence in the system. Clients may start to wonder: Am I paying for expertise, or subsidizing someone's tooling upgrade?
And more than that—it’s a missed opportunity.
AI could help this lawyer serve more clients, offer tiered pricing, improve access to legal help, or even rethink how legal work is packaged. Instead, it’s used to quietly protect margins.
This isn't about one greedy lawyer. It's a mirror. Many industries are facing this same crossroads. Will AI be used to extract more value from customers, or create more value for them? That’s the real ethical frontier.
This same paradox is playing out everywhere:
Klarna’s rollback of AI support wasn’t because it didn’t work, but because it didn’t feel human.
Duolingo6 froze hiring and leaned into AI, but faced backlash over quality concerns.
A friend plans to cut a 19-person data team to 5, shifting analysis to business users via LLMs. Will they get better insights—or worse decisions?
We’re treating AI as a cost-cutter, not a value enhancer. And that mindset has a cost.
Why This Is A Problem
The rise of AI leaves critical questions unresolved: who benefits from efficiency gains, how executive AI ambitions translate into actual implementation, and what the human role is in AI-powered systems. These open issues pose significant risks to careers, customer trust, and ethical business practices7.
Where We’re Missing the Point
We're often measuring AI success by superficial metrics like cost per ticket or lines of code written. But what about:
Trust lost from poor AI-generated customer support?
Unexpectedly high operational costs due to complex AI infrastructure or large model usage?
Hidden risks in legal documents written by a chatbot?
Knowledge gaps when marketers deploy unsecured tools?
Speed is easy. Value is harder. Trust is everything.
What I Think I Know—and What I Don’t
Based on my experience at Zalando and recent observations, I'm developing a hypothesis that challenges both extremes of this debate.
What I think I know: AI excels at augmentation, not replacement, at least not yet. At Zalando, we didn't need 18 marketers when we had strong AI tooling; 5 could achieve the same outcomes. But "5 with AI" is fundamentally different from "0 with AI." Are we trying to do MORE with LESS, or just MORE with NONE?
What surprises me: I expected data teams to become more strategic and visionary in the AI age. Instead, I'm seeing data leaders become more hands-on, even at director levels. This puzzles me because leveraging AI effectively should require more strategy and vision, not less.
What concerns me: The human element in data work (understanding context, asking the right questions, interpreting nuance) seems undervalued in the rush to automate. When customers have financial issues or overcharges, they want human understanding, not robot efficiency.
Where I'm uncertain: How do we balance the genuine productivity gains AI enables with the irreplaceable value of human judgment, especially in trust-dependent interactions?
Real Stories, Real Friction
The patterns we're discussing extend far beyond isolated cases. Duolingo, for instance, recently announced an "AI-first" strategy, replacing contractors and freezing hiring for automatable roles. Box and Shopify followed suit. Yet, responses are telling: Duolingo users are canceling subscriptions, questioning AI's grasp of language nuances and cultural context.
The Klarna case is fascinating because it's not a simple "failure." While rolling back customer service AI, they report:
40% reduction in cost per transaction
152% increase in revenue per employee
Maintained customer satisfaction scores (by their own metrics, and I'd love to learn how those are calculated)
Yet customers loudly complained about "Kafkaesque loops" and a stark lack of empathy. This suggests efficiency doesn't always equal effectiveness, a crucial distinction still being unpacked.
Adding another layer, the lawyer from Joe DeWulf's post operates in a similar gray area. His clients likely don't know AI is involved, so satisfaction scores might look fine while the lawyer's value extraction increases. Is this innovation or exploitation?
My friend's data team transformation also raises critical questions. Shifting analysis to business users via LLMs sounds democratizing. But will it create effective data analysts, or new bottlenecks, quality issues, and even prohibitively high data costs?
Consider a small, personal example: I used ChatGPT to draft a privacy document for a new product. It looked professional and used legal jargon, but my privacy lawyer friend, Ruth, was shocked. It was riddled with errors that could have exposed me legally. The document sounded right, but critical context was missing. That's the danger.
Similarly, an indie developer launched8 a "vibe coding9" app with a paywall. Users quickly found ways to bypass it, and within days, the system was hacked, leading to data theft. The developer, a marketer by trade, admitted to building it "fast and loose" without understanding the security risks.
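The coverage doesn't detail the exact vulnerability, but the classic "fast and loose" mistake is enforcing a paywall only in the client UI, where any user can flip a flag. As a hedged sketch of the fix (the route, the header, and the is_subscriber lookup are all hypothetical), the entitlement check has to live on the server:

```python
# Sketch of server-side paywall enforcement. A client-side "isPaid" flag
# is trivially bypassed; every premium request must be verified server-side.
from functools import wraps

from flask import Flask, abort, request

app = Flask(__name__)

def is_subscriber(api_key: str | None) -> bool:
    """Hypothetical entitlement lookup against your billing system."""
    return api_key in {"demo-paid-key"}  # placeholder for a real store

def require_subscription(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        # Never trust a client-side flag: check the entitlement on the
        # server before serving anything paid.
        if not is_subscriber(request.headers.get("X-Api-Key")):
            abort(402)  # Payment Required
        return view(*args, **kwargs)
    return wrapper

@app.route("/premium/report")
@require_subscription
def premium_report():
    # Flask serializes a returned dict to JSON.
    return {"report": "paid content"}
```

Nothing here is exotic; it's exactly the architecture and security groundwork the developer admitted he skipped.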
The disconnect is clear: we're often measuring the wrong things. Cost reduction and speed improvements are easy to quantify. But customer trust erosion, knowledge loss from laid-off employees, and long-term competitive sustainability are harder to measure yet potentially far more important. Think about Klarna now needing to rehire the agents it fired, and the gap that churn creates with customers; in fintech especially, that pressure can lead to long-term revenue drops.
Here's where it gets complicated: Klarna insists its AI transformation was ultimately successful10, despite the customer service rollback. Their marketing AI11 saw an 85% increase in click-through rates year-over-year, reduced cost per click by 33%, and drove a 2.6x increase in website traffic long-term.
This is the paradox in action: AI can deliver impressive metrics in one area while creating problems in another. Klarna learned that context matters. Marketing optimization? AI wins. Human empathy in financial stress? Humans win. The lesson isn't "AI failed" or "AI succeeded"; it's "AI works differently in different contexts."
So... What Should You Do Differently Now That AI Is Changing Everything?
Here are 5 bold actions I think can help us all stay relevant and build a future-proof career:
Embrace and Learn About AI: Don’t be afraid of it—explore it. Use AI tools relevant to your domain. Stay curious.
Develop “AI-Proof” Human Skills: Invest in creativity, emotional intelligence, storytelling, negotiation, strategic thinking, and craftsmanship: skills machines still can’t replicate well. As more output is automated, handmade and human-crafted work may command a premium.
Commit to Continuous Learning: The shelf-life of skills is shrinking. Make self-upskilling a regular habit, not something you start only after losing your job.
Improve Your Data Literacy: Understanding data and how it drives decisions makes you indispensable in an AI-first world.
Prepare for Human-AI Collaboration: The future isn't AI vs humans—it's AI with humans. Learn how to collaborate with AI, not just coexist.
Four Red Flags That AI Might Be Coming for Your Role
Before we talk solutions, let's be honest about risk. Here are warning signs that your position might be vulnerable:
Red Flag #1: Minimal use of emotional intelligence, creativity, or complex judgment
Red Flag #2: 60%+ routine work you could easily explain to someone else
Red Flag #3: Tasks already being automated at other companies (customer service, document analysis, scheduling)
Red Flag #4: Leadership explicitly linking AI adoption to workforce reductions
If you checked 2+ boxes, it's time to act.
Key Takeaways & What I'm Still Learning
What seems clear: The most successful AI implementations will likely be augmentation-first, not replacement-first. Klarna's pivot to "AI gives us speed, talent gives us empathy" feels more sustainable than their original "AI does everything" approach. My friend's early testing points the same way: the human factor was decisive in building reliable data pipelines and visualizations, and knowledgeable people achieved better results.
My emerging framework: Even when AI gets good enough to replace 90% of tech employees (and I believe we'll get there), we'll still need humans in four critical roles around the machine:
The Controller: Ensuring costs don't spiral and the system makes money instead of losing it daily
The Analyst: Checking and validating system operations, ensuring it functions as expected with the right data
The Challenger: Pushing the system when it gets comfortable, finding new opportunities, and preventing stagnation
The Cash Cow: Someone with the budget and authority to support the system until it reaches break-even
What remains murky: How do we measure the true ROI of AI when traditional metrics miss qualitative impacts? Customer satisfaction scores might stay stable while customer trust erodes—how do we capture that? Especially concerning when we're replacing UX/UI designers with AI agents.
What's next: I see the strongest AI-driven projects emerging from teams working the problem space daily. They know the pain points, stakeholder expectations, and team limitations. With this information, we can create products that genuinely enhance their work and deliver focused value to users.
What I'm watching: Whether data teams will become obsolete or evolve into something more strategic. Whether the "vibe coding" trend will face similar rollbacks when prototypes meet production realities and security requirements.
Questions for the community:
When AI drives efficiency, who should capture that value: you, your customers, or shareholders?
How do you measure AI success beyond traditional cost and speed metrics, and what human roles remain truly irreplaceable in your industry?
Have you seen successful examples of AI-driven workforce transformation that preserved both efficiency and trust?
So here’s the question I’m wrestling with:
Are we designing AI to replace, or to elevate?
If it’s the latter, we need more than tech. We need leadership willing to pause, ask the uncomfortable questions, and rebuild trust from the ground up. In your team or product, where are you mistaking speed for value? Where is AI replacing human insight when it should be enhancing it?
This newsletter is a learning journey. I'm figuring this out as we go, and your perspectives make it better. Reply with your thoughts, experiences, or topics you'd like explored.
May your data flow with purpose!
Lior
Further Reading & Sources
Klarna's Q1 2025 Earnings: Official Report - Shows the financial gains despite customer service challenges
Customer Experience Dive Coverage: Klarna Changes Its AI Tune - Details on the hybrid approach pivot
Legal AI Tools Database: Retrieve.tools Legal Category - Comprehensive list of 127+ AI tools transforming legal work
Data Flavors #2: Data Flavors newsletter - First coverage about Klarna’s decisions to replace 700 CS agents with AI, and my view on it
Jack Godau's LinkedIn Analysis: A practical perspective on the rollback implications
Joe DeWulf's Lawyer Billing Discussion: Explores the ethical implications of AI-driven efficiency without passing savings to clients
11 Jobs AI Could Replace In 2025—And 15+ Jobs That Are Safe - A digestible overview of AI job risk forecasts that helps contextualize the labor shifts we’re discussing, highlighting where adaptation is most urgent.
It’s Time To Get Concerned As More Companies Replace Workers With AI - Warns of growing corporate AI adoption displacing workers, reinforcing the urgency of thoughtful data strategy and ethical planning.
AI Automation Potential: Top U.S. Industries at Risk (Teneo) - An industry-by-industry breakdown of automation risk that supports strategic discussions around where businesses should focus upskilling and data alignment.
Duolingo's AI-First Announcement: Company LinkedIn Post - Direct communication about replacing contractors with AI automation
Beware, AI Coding Can Be a Security Nightmare - Shipping software that's open to the world is risky if you lack architecture, software-engineering, and security knowledge
Vibe coding - Prompt-driven development that enables launching apps quickly
Klarna Reverses Course on AI Customer Support, Resumes Human Hiring - Klarna’s partial return to human support underscores the limitations of AI and the importance of hybrid strategies, core to our theme of balance.
5 AI Case Studies in Marketing - Showcases real-world marketing use cases where AI adds value, relevant to our exploration of when and how AI enhances (vs. replaces) human decision-making.