Education & Self Learning

    Digital Health Canada: AI in Healthcare Report

    • Chris Johnston

      An interesting report compiled by Digital Health Canada that explores how eight Canadian health organizations are implementing AI.

      From the Executive Summary:

      This report provides a comparative analysis of AI implementation across eight leading Canadian healthcare organizations. It examines governance structures, risk management strategies, operational models, and key barriers, identifying best practices and lessons learned. The findings serve as a resource for healthcare leaders navigating AI adoption in their own institutions.

      This report highlights the diverse strategies Canadian healthcare organizations are using to adopt and scale AI. While each organization brings a unique context, several cross-cutting insights emerged across governance, risk, operations, and sustainability.

      The eight organizations are:

      • Fraser Health (British Columbia)
      • Hamilton Health Sciences (Ontario)
      • Hospital for Sick Children (SickKids) (Ontario)
      • University Health Network (UHN) (Ontario)
      • Unity Health Toronto (Ontario)
      • McGill University Health Centre (Quebec)
      • Horizon Health (New Brunswick)
      • Nova Scotia Health (Nova Scotia)

      Click to view the report

      • Debra Turnbull

        @Chris-Johnston Hi Chris! This was a good read.
        My focus was on AI literacy and AI validation.

        AI literacy: how well does the general public (including clinicians) understand how these tools actually work?

        Validation: we know AI models suffer from drift. What checks and balances are in place for retraining, and how often does it happen? Basically, how do we "set up the guardrails" as part of the management lifecycle?
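
        (A side note to make the guardrail idea concrete, not something from the report: one common drift check is the Population Stability Index (PSI) between a model's training-era score distribution and its live scores. The sketch below is a minimal Python illustration; the score distributions are stand-ins, and the 0.1/0.25 thresholds are the usual rules of thumb rather than anything the report prescribes.)

```python
# Minimal sketch of a drift guardrail: compare the score distribution a
# model produced at training time against what it produces in production.
# Distributions and thresholds here are illustrative stand-ins.
import numpy as np

def psi(expected: np.ndarray, observed: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two score samples."""
    # Bin edges come from the reference (training-era) sample.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    expected = np.clip(expected, edges[0], edges[-1])
    observed = np.clip(observed, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    o_frac = np.histogram(observed, edges)[0] / len(observed)
    # Small epsilon avoids log(0) when a bin is empty.
    e_frac, o_frac = e_frac + 1e-6, o_frac + 1e-6
    return float(np.sum((o_frac - e_frac) * np.log(o_frac / e_frac)))

train_scores = np.random.beta(2, 5, 10_000)  # stand-in: training-era risk scores
live_scores = np.random.beta(3, 5, 2_000)    # stand-in: this month's risk scores

value = psi(train_scores, live_scores)
if value > 0.25:                 # rule-of-thumb threshold
    print(f"PSI={value:.3f}: significant drift, flag for retraining review")
elif value > 0.10:
    print(f"PSI={value:.3f}: moderate drift, monitor closely")
else:
    print(f"PSI={value:.3f}: stable")
```

        Scheduling a check like this on every scoring batch, and routing the "significant" branch into a human review queue, is one concrete shape the retraining-cadence question can take.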

        • Chris Johnston

          @Debra-Turnbull Totally agree, Deb! The other thing on my radar at the moment is the erosion of critical thinking skills, particularly for clinicians: how can we assess it, and how can we mitigate it?

          • Debra Turnbull

            @Chris-Johnston Oh my...! NOT something that I thought about...

            Most of the AI tools that I've seen so far have been in the 'assistant' sphere. Actually replacing decision making is something totally different. I've only seen DSS (decision support systems) coupled to the backend of an EMR. They merely provide suggestions and do not replace clinician note-taking.

            I guess that an AI override would be something to look out for.

            The next big thing > agentic AI: an AI tool feeding into another (different) AI tool, which is in turn coupled with yet another AI tool. That being said, these things are still WAY too clunky.
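
            (To illustrate the chaining pattern, and why the clunkiness worries people: a toy sketch with stand-in functions rather than real models. Each hop only ever sees the previous hop's output, so an early error or omission propagates downstream.)

```python
# Toy sketch of "one AI tool feeding another": three stand-in agents
# chained together. The point is structural, not the fake outputs:
# downstream hops never see the original note, only upstream summaries.
from dataclasses import dataclass

@dataclass
class Step:
    agent: str
    output: str

def summarize_note(note: str) -> Step:
    # Stand-in for a summarization model.
    return Step("summarizer", f"summary of: {note[:40]}...")

def suggest_codes(prev: Step) -> Step:
    # Stand-in for a coding-suggestion model; it sees only the summary.
    return Step("coder", f"codes inferred from [{prev.output}]")

def draft_letter(prev: Step) -> Step:
    # Stand-in for a drafting model; it sees only the inferred codes.
    return Step("drafter", f"letter based on [{prev.output}]")

note = "Patient presents with intermittent chest pain on exertion ..."
result = draft_letter(suggest_codes(summarize_note(note)))
print(result.output)
# A human checkpoint between hops is the usual guardrail suggestion here,
# since anything dropped at hop one is invisible to hops two and three.
```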

            • Chris Johnston

              @Debra-Turnbull While reading the other night, one article I came across looked at the use of generative AI tools (not for decision making, just providing information) in knowledge-processing roles across different sectors. It discussed recent findings on how quickly, and to what extent, people were lulled into simply accepting the AI's responses, especially under time pressure, and how rapidly they reduced any attempts to verify or validate the content, failing to check other sources or to identify skewing or bias. So even if the individual is still ostensibly 'making' the decisions, as the process of critical thinking deteriorates or is suppressed, those decisions can quickly become unreliable and steered by the information sourced and presented by the AI tools they use.

              Participants were reported as focusing on feeling efficient and productive, with diminishing concern for, or awareness of, inaccuracy. That denotes a quickly acquired and misplaced trust in the convenience of the tools, particularly when they're telling you what you want or expect to hear.

              While the work wasn't specific to healthcare, it looked at evidence across several sectors, and all exhibited similar levels of erosion of critical thinking. I doubt that healthcare would prove any different; we're all human, after all.

              Similarly, there’s been recent evidence that when we abandon the need to retain information, and become reliant upon other mechanisms to store and retrieve it for us, our ability to make connections and intuitive leaps diminishes substantially.

              [I’m not on the right device to pull up the references, but I’ll try to add them tomorrow 🙂 ]

              • Debra Turnbull

                @Chris-Johnston
                Hmmm... failure in humans' logical thinking processes.

                I wonder how we could define THAT in the guardrail schema... and test for it.
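
                (One hypothetical way to make it testable, riffing on the article Chris describes rather than any established standard: treat blind acceptance as measurable behaviour. If clinicians accept AI suggestions near-instantly and near-universally, that pattern is at least consistent with overreliance. Every field name and threshold below is made up for illustration.)

```python
# Hypothetical guardrail sketch: flag usage patterns consistent with
# rubber-stamping AI suggestions. Thresholds and field names are invented.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    accepted: bool            # did the clinician accept the AI suggestion?
    seconds_to_decide: float  # dwell time before the decision

def overreliance_flags(events: list[SuggestionEvent],
                       accept_ceiling: float = 0.95,
                       dwell_floor: float = 3.0) -> list[str]:
    """Return warnings when acceptance behaviour looks like rubber-stamping."""
    flags = []
    accept_rate = sum(e.accepted for e in events) / len(events)
    accepted = [e for e in events if e.accepted]
    fast = sum(e.seconds_to_decide < dwell_floor for e in accepted)
    if accept_rate > accept_ceiling:
        flags.append(f"acceptance rate {accept_rate:.0%} exceeds {accept_ceiling:.0%}")
    if accepted and fast / len(accepted) > 0.5:
        flags.append(f"most acceptances took under {dwell_floor:.0f}s: "
                     "possible rubber-stamping")
    return flags

# 18 near-instant acceptances and 2 considered rejections: second flag fires.
events = [SuggestionEvent(True, 1.2)] * 18 + [SuggestionEvent(False, 25.0)] * 2
print(overreliance_flags(events))
```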
