How AI Agents Reshape Our Work And Ownership Of Knowledge
Why Leaders of Hybrid Teams Must Evolve the Approach and Clarify Who Owns the Agents' Knowledge When the Employment Ends
It’s gotten quiet lately. “AI isn’t going to replace you—a person using AI will” has become softer and less prevalent. 2025 is all about agents, yet the groundbreaking shifts that transform industries or entire business functions are still in the distant future. Still, embracing change matters as AI advances. Employees can take charge of their learning, adapt to shifting roles, and leverage AI for better productivity. Understanding responsibilities around AI and developing emotional intelligence is also key to success in this evolving landscape. Future of Work Expert Dan Sodergren and I explore how Agentic AI is reshaping our jobs. Here's what we talked about...
Understanding AI and its Role in the Workforce
As AI and agents evolve, we find ourselves asking what these changes mean for our jobs. A first compelling point is defining exactly what we mean by AI agents. Agents are distinct from basic chatbots. They show adaptability and autonomy, indicating a shift in how we approach work. We need to sharpen our understanding of AI and its capabilities to ensure we can work alongside AI systems effectively. By recognizing these distinctions, we can position ourselves better for the future.
Companies have been adopting AI, but we must also consider how these technologies can benefit our work. There's potential to take on objectives rather than just tasks, which requires a new skill set: clearly defining and communicating goals. Employees feel uncertain about job security when AI agents are able to take on more and more of their tasks. However, embracing a mindset of learning and adaptation can make navigating these changes smoother.
JOIN THE FIRST COHORT: Monetizing Agentic AI: Foundations
Agentic AI is the hottest topic of 2025. The right commercial model makes the difference between making cash or burning through it! Define the commercial model for AI agent-based solutions that will create new revenue streams and enthusiastic customers, based on 20+ years of senior leadership experience in enterprise software. We’ve adapted the concept and are offering a foundational version of our advanced course for the first cohort starting on April 14.
(Use code AGENTS100OFF to get $100 off.)
Evolving Shared Decision-Making and Accountability
A pressing concern is how shared decision-making with AI impacts accountability within organizations. When employees construct AI agents during work hours using company resources, questions arise regarding ownership and output. Who owns the agent? Who owns the knowledge that has gone into it? And what happens to the agent and intellectual property when an employee leaves? Shifts in roles and responsibilities are necessary to align with these evolving technologies. As professionals, being aware of this is crucial for navigating future workplace dynamics.
» The data that you are going to be producing in your job, is that your intellectual property? The old argument is ‘Of course not!’—But it's weird, isn't it? «
— Dan Sodergren, Future of Work Expert
The adjustment in company policies may now focus more on AI agents than on traditional AI, which changes the landscape of our employment agreements and the entire employee experience. As AI handles more aspects of our decision-making, it's essential to understand not just how to collaborate with these systems but also how they might impact our roles. Therefore, leaders and team members alike must prepare for a future where responsibilities shift and human oversight remains crucial.
Revisiting the Idea of Job Security
Forty-year careers at a single company are already a thing of the past. Even working for established industry leaders no longer provides the automatic job security it did a few years or decades ago. Consequently, job security will look even more different in a future driven by AI. The notion of long-term employment with a single company is fading. Instead, a scenario where many of us juggle multiple short-term jobs facilitated by AI support might become the norm. AI can help facilitate this shift, allowing individuals to take on more work, but it also raises the question: What does this mean for our understanding of work-life balance?
In this new setting, emotional intelligence becomes increasingly important as automated systems take over many traditional tasks. Being indispensable in the workplace will soon rely on much more than technical know-how. The ability to empathize, collaborate, and communicate effectively will be key differentiators in obtaining and maintaining employment. With AI automating skilled tasks, those of us in the workforce must hone these softer skills to thrive. As individuals, our focus shifts from mere productivity to building and enhancing relationships, leading to a future where we can grow as not just workers, but as rounded individuals.
Summary
Navigating this transition in how we work requires understanding AI and its implications. Employees must redefine their role in collaboration with AI agents and embrace continuous learning. Shared decision-making will change the accountability landscape, requiring nuanced understanding and adaptation from workers and leaders alike. Finally, developing emotional intelligence and soft skills is paramount as AI systems reshape the workforce.
By preparing for these changes and embracing the opportunities they provide, we can position ourselves effectively in this new world where AI can help enhance our lives and work experiences.
The future of work is evolving fast! Equip your team with the knowledge and skills to leverage AI agents effectively. Schedule a consultation or workshop to develop your AI strategy.
Listen to this episode on the podcast: Apple Podcasts | Other platforms
Explore related articles
Become an AI Leader
Join my bi-weekly live stream and podcast for leaders and hands-on practitioners. Each episode features a different guest who shares their AI journey and actionable insights. Learn from your peers how you can lead artificial intelligence, generative AI, agentic AI, and automation in business with confidence.
Join us live
March 25 (members-only) - Camila Manera (Chief AI Officer) will share how to bridge the gap between AI and the business.
March 31 (members-only) - Matt van Itallie (CEO of Sema) will join and share how vibecoding impacts developers and investors down the line.
April 01 (members-only) - Peter Gostev (Head of AI at Moonpig) will talk about how not to get fooled by Agentic AI claims.
April 08 (members-only) - Maxim Ioffe (Head of Automation at WESCO Distribution) will discuss setting AI governance programs between IT and the business.
May 28 (members-only) - Barr Moses (CEO & Co-Founder of Monte Carlo) will provide insights into having reliable data for AI and Agentic AI projects.
Watch the latest episodes or listen to the podcast
Follow me on LinkedIn for daily posts about how you can lead AI in business with confidence. Activate notifications (🔔) and never miss an update.
Together, let’s turn hype into outcome. 👍🏻
—Andreas
I love the shared decision-making and accountability paradigm, one that requires a major mindset shift. At what point do we decide that it's the agent's fault? Ultimately, humans design them and decide when to deploy them, so they have to share in the consequences. Frankly, that's a very tough spot to be in, at least for now.
Something I wrote some time ago:
"Next level hearsay - AI version 🤔
Welcome to 2025, the year of the agents! We need to prepare ourselves for claims like these: "Agent X made the decision based on agent Y's input that acted at the request/behest of agent Z."
Care to provide proof of that? 🤦♀️
Funny how humans take AI outputs as gospel when it suits them, but blame the AI when things go wrong. From "AI wrote this brilliant post" to "Agent X made Agent Y tell Agent Z to do it" - hearsay just got exponentially more complex.
See you in AI court?
"
I have a feeling this can play out in a couple different ways:
1) People view agents like they do their peers. Whenever someone blames a colleague for not doing X, they will blame their agent instead. That’s mainly a matter of accountability and ownership (or lack thereof).
2) Agents maintain a log of their analyses and decisions (from an “inner dialog” to communication between agents). That should make it easier to have objective, factual information to go back to and review why a decision has been made by the agent—unlike in 1) where we don’t have that log of a person’s thought process and decisions.
So, in a way, agents might actually increase decision transparency.
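To make scenario 2) concrete, here is a minimal sketch of what such an agent decision log could look like. This is an illustrative, in-memory example only; the class names, fields, and the `trace` helper are assumptions, not any particular agent framework's API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """One entry in an agent's decision log (fields are illustrative)."""
    agent: str          # which agent made the decision
    decision: str       # what was decided
    rationale: str      # the agent's stated reasoning ("inner dialog")
    inputs: list[str]   # upstream agents/messages the decision relied on
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class DecisionLog:
    """Append-only log that can be replayed during a later review."""

    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def record(self, rec: DecisionRecord) -> None:
        self._records.append(rec)

    def trace(self, agent: str) -> list[DecisionRecord]:
        """Return every decision a given agent made, in order."""
        return [r for r in self._records if r.agent == agent]


# Usage: reconstruct why a hypothetical "agent_x" acted as it did.
log = DecisionLog()
log.record(DecisionRecord(
    agent="agent_y",
    decision="flag invoice #123 for review",
    rationale="amount exceeds historical average by 3x",
    inputs=[],
))
log.record(DecisionRecord(
    agent="agent_x",
    decision="hold payment",
    rationale="followed agent_y's flag",
    inputs=["agent_y: flag invoice #123 for review"],
))

for rec in log.trace("agent_x"):
    print(rec.decision, "because", rec.rationale)
```

Unlike a colleague's unrecorded thought process, this kind of append-only trail lets a reviewer follow the chain from "Agent X made the decision" back through agent Y's input, which is exactly the transparency gain described above.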