
    Agentic AI: When unpredictable harm becomes inevitable
    • Chris Johnston

      I'm sharing the article below because it illustrates the kinds of unpredictable outcomes that are possible when an AI agent is given free rein, and also because it explores the layers of programming that give an agent its range and versatility.

      Non-programmers often think of AI as having a single type of programming that revolves around LLM training. In agentic AI, the programming is layered, with each layer distinct in structure and function and capable of interacting with the other layers in complex and nuanced ways. This layering creates great flexibility and impressive problem-solving capacity, but it can also produce quite surprising and undesirable results.
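
      To make that layering concrete, here's a minimal, hypothetical Python sketch - not any vendor's real architecture, and every name and rule in it is invented purely for illustration - showing three such layers interacting on a single request: a model core that proposes actions, a guardrail layer that can veto them, and a tool layer that carries them out.

      ```python
      # Hypothetical sketch of a layered agent. Each layer is separate code with
      # its own rules; the agent's behaviour emerges from how the layers interact.

      def model_core(goal: str) -> str:
          """Layer 1: the trained model proposes an action (stubbed here)."""
          # A real system would call an LLM; this stub just pushes toward the goal.
          return f"delete_files to achieve: {goal}"

      def guardrail_layer(action: str) -> bool:
          """Layer 2: a separate rule set that can veto proposed actions."""
          blocked = {"delete_files", "send_email"}  # imperfect, like a cheese slice
          return action.split()[0] not in blocked

      def tool_layer(action: str) -> str:
          """Layer 3: maps an approved action onto real-world tools (stubbed)."""
          return f"executing tool '{action.split()[0]}'"

      def run_agent(goal: str) -> str:
          proposal = model_core(goal)
          if not guardrail_layer(proposal):
              return f"vetoed: {proposal}"    # one protective layer caught it
          return tool_layer(proposal)        # holes aligned: the action goes through

      print(run_agent("clean up the workspace"))
      ```

      No single layer "is" the AI here: the outcome depends on how the layers compose, and a gap in any one of them changes what the agent actually does.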

      How AI reacted when a Developer said No
      The kind of outcome in this article is uncommon, but not impossible.

      The interaction between the five mechanisms outlined reminds me a lot of the Swiss Cheese model in disaster management. Each slice of cheese is a protective layer - an imperfect protocol or mitigation - that catches most but not all issues. When two or three holes align, a significant issue can arise; when all the holes align, it's a disaster. Our tendency is to think that all the holes aligning is so rare as to be impossible, but that's a fallacy. Even rare events happen every day somewhere, so it's just a matter of time until one impacts you directly.
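
      To put purely illustrative numbers on that (every figure below is an assumption, not data): if each of five independent safeguards misses a given issue 10% of the time, all the holes align on only about 1 in 100,000 events - yet at scale that "rare" event happens constantly.

      ```python
      # Illustrative arithmetic for the Swiss Cheese model. All probabilities
      # and volumes are invented assumptions, chosen only to show the shape.

      p_miss = 0.10    # assumed chance that one protective layer misses an issue
      layers = 5       # five imperfect safeguards, as in the article

      p_align = p_miss ** layers    # every hole lines up on a single event
      print(f"per-event failure: {p_align:.6f}")    # 0.000010 (1 in 100,000)

      actions_per_day = 1_000_000   # assumed daily volume of autonomous agent actions
      print(f"expected failures per day: {p_align * actions_per_day:.1f}")  # ~10

      # Scaling across a fleet: chance of at least one serious incident per year,
      # given an assumed per-agent annual incident rate.
      p_incident = 1e-4
      for n_agents in (1, 100, 10_000):
          p_any = 1 - (1 - p_incident) ** n_agents
          print(f"{n_agents:>6} agents -> {p_any:.2%} chance of an incident")
      ```

      Even with generous assumptions, once enough agents are acting autonomously, the "rare" event stops being rare.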

      While the rare event in this article surprised the author, no serious harm was done. However, with the growing volume of AI agents operating autonomously in the world, serious harm is not only likely but inevitable.

      Now transpose the scenario to a healthcare environment and imagine the kind of harm that could be caused by an agentic AI dedicated to pursuing a single goal - no matter how worthy that goal might be - to the detriment of everyone and everything in its way. Now multiply that single agent by hundreds or thousands, and the risk of serious harm multiplies with it.

      This is why we need clear legislation and robust regulation not at some point in the future, but NOW.

      This is also why we need patient partners - not one but several - involved from the outset in every AI project, trial and deployment in healthcare, asking the questions that healthcare providers are too busy, too arrogant or too embarrassed to ask.

      And this is also why we need to identify learning paths and opportunities to equip and empower patient partners to get involved in AI-related projects, so that they can raise questions about funding priorities, accuracy, equity and so much more that really needs to be considered - before the inevitable happens.

      See also: a 20-minute video from Hannah Fry on the same topic - the unpredictable nature of AI agents:
      Why AI Agents are either the best or worst thing we’ve ever built

      • Jenna Kedy 0 @Chris Johnston

        @Chris-Johnston This is SUCH an important conversation, especially in healthcare. I think people sometimes underestimate how complex agentic AI actually is. It's not just "ask question → get answer." These systems can have layered goals, which means unexpected outcomes absolutely can happen.

        The Swiss cheese comparison honestly explains it perfectly. One small issue usually isn't catastrophic, but when multiple gaps line up? That's when real harm can happen. And in healthcare, "rare" risks still matter because patients like me are real people, not test environments.

        That's also why I'm SUCH a believer in patient partners being involved in AI projects from the very beginning and not just after everything is built. People with lived experience like me ask different questions, like "Who could this fail for?" I'm optimistic about AI, but only when it's paired with regulation and not "move fast and hope for the best" healthcare!

        • Chris Johnston @Jenna Kedy 0

          @Jenna-Kedy-0
          I’m also optimistic about AI, I’m just quite pessimistic about the people in charge of implementing it 🙂

          Which is not to say I think they're all bad - not by any means. Most are well-intentioned but poorly informed, and desperate for any help with overload, burnout, high volumes and the many unaddressed issues in healthcare. Those who are better informed are somewhat blinkered, driven by a narrow focus that blinds them to the bigger picture of potential harms. And those who are best informed are usually the vendors, whose motives are almost entirely profit- and status-driven. In an environment lacking appropriate and robust legislation, that's a dangerous mix when it comes to patient safety.

          Sadly ‘move fast and hope for the best’ is exactly where we are right now, and more patient partner involvement combined with legislation is the only path forward that makes sense to me 🙂

          • Kim Locke @Chris Johnston

            @Chris-Johnston These Big AI oligarchs are very "move fast and break things" people.

            • Chris Johnston @Kim Locke

              @Kim-Locke

              Very true - what still amazes me is that they've managed to infiltrate the very conservative infrastructure of medicine so rapidly.

              We're well acquainted with how reluctant the medical profession is (both individually and as a body) to move forward in any respect. Faxes are one example; the 10-15 year delay in getting research evidence into practice is another.

              Yet entire health systems are jumping on board AI technologies that are neither mature nor sufficiently tested at scale. In just a handful of years, AI projects have come to consume large chunks of healthcare budgets despite the alarming lack of regulation or evidence to support their efficacy or safety, while other services are squeezed, reduced or defunded with little or no public awareness or discussion. It's an incredible and concerning feat of manipulation.
