Discussion about this post

Ramona C Truta:

I love the shared decision-making and accountability paradigm, one that requires a major mindset shift. At what point do we decide that it's the agent's fault? Ultimately, humans design them and decide when to deploy them, so they have to share in the consequences. Frankly, that's a very tough spot to be in, at least for now.

Something I wrote some time ago:

"Next level hearsay - AI version 🤔

Welcome to 2025, the year of the agents! We need to prepare ourselves for claims like these: "Agent X made the decision based on input from agent Y, which acted at the request/behest of agent Z."

Care to provide proof of that? 🤦‍♀️

Funny how humans take AI outputs as gospel when it suits them, but blame the AI when things go wrong. From "AI wrote this brilliant post" to "Agent X made Agent Y tell Agent Z to do it" - hearsay just got exponentially more complex.

See you in AI court?"

Andreas Welsch:

I have a feeling this can play out in a couple different ways:

1) People view agents like they do their peers. Just as someone blames a colleague for not doing X, they will now blame their agent. That’s mainly a matter of accountability and ownership (or lack thereof).

2) Agents maintain a log of their analyses and decisions (from an “inner dialog” to communication between agents). That should make it easier to have objective, factual information to go back to and review why the agent made a decision, unlike in 1), where we have no such log of a person’s thought process and decisions. (A minimal sketch of such a log follows below.)

So, in a way, agents might actually increase decision transparency.
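To make the logging idea in 2) concrete, here is a minimal sketch of what such a decision log could look like, assuming a simple JSON-lines file as the store. The class name, field names, and agent IDs are illustrative, not from any particular agent framework:

```python
import json
import time
import uuid


class DecisionLog:
    """Append-only log of agent decisions, stored as JSON lines."""

    def __init__(self, path="agent_decisions.jsonl"):
        self.path = path

    def record(self, agent_id, inputs, rationale, decision, upstream=None):
        """Append one decision with its inputs, reasoning, and provenance."""
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent": agent_id,
            "inputs": inputs,        # what the agent was given
            "rationale": rationale,  # summary of the "inner dialog"
            "decision": decision,    # what the agent concluded or did
            "upstream": upstream,    # id of the entry that triggered this one
        }
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["id"]


# Usage: agent Z's delegation is logged first, and agent Y's entry points
# back to it, so "Y acted at the behest of Z" can be reviewed, not alleged.
log = DecisionLog()
z_entry = log.record("agent-Z", {"task": "summarize quarterly sales"},
                     "User requested a summary; delegating retrieval.",
                     "delegate to agent-Y")
log.record("agent-Y", {"source": "sales.csv"},
           "Aggregated rows by region per agent-Z's request.",
           "return regional totals to agent-Z", upstream=z_entry)
```

Because each entry carries an `upstream` pointer, the chain "Z asked Y, Y informed X" can be reconstructed from the log rather than asserted after the fact.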
