Sharing & Discussions

    A good read on AI by Will Falk
    • Alies Maybee

      Will Falk's new piece on AI: A Complement, Not a Substitute: Generative AI’s role in Canadian Healthcare in 2026
      https://www.csagroup.org/article/public-policy/a-complement-not-a-substitute-generative-ais-role-in-canadian-healthcare-in-2026/

      What do you think?

      • Chris Johnston @Alies Maybee

        @Alies-Maybee

        While I don't argue with the headline message, I have many concerns about the detailed content. The one that really jumps off the page is the exercise on page 43. Asking people to submit the PDF framework to an AI along with their questions is an exercise in skewing the field: the more times a single document is uploaded by different users (essentially, different IP addresses), the greater the weight it will carry, until it assumes massive influence. Asking people to add their professional bios adds even more weight, since it will essentially be construed as promoting and endorsing the content without them realizing it. Whether this is intentional or not, it undermines the author's credibility: if he's aware of the implications, he shouldn't suggest it; if he's not, how well does he really understand AI?

        I'm more than a little shocked by the CSA's involvement. Their disclaimer distances them somewhat from the author, but they should know better.

        • Alies Maybee @Chris Johnston

          @Chris-Johnston Interesting. Will is considered something of an expert in this area and is closely associated with AMS (he's on the board). Many listen to and pay attention to him.

          • Debra Turnbull @Alies Maybee

            @Alies-Maybee

            It was a long read... need time to digest.

            But then, this summary showed up:
            "Gen AI already a major Component of Care Delivery"

            • Chris Johnston @Alies Maybee

              @Alies-Maybee

              That makes it even more concerning 😞

              • Debra Turnbull @Chris Johnston

                @Chris-Johnston @Alies-Maybee
                I don't have a problem with complementary GenAI. This is the human-in-the-loop version.

                For me, substitutive GenAI is a NO GO. This is the autonomous or agentic AI. We have a rather famous privacy breach case here in Ontario; you can read the OIPC's response. You may need to download the PDF.

                @Robert-Wells BTW Rob, this was a case of an Otter.ai system run amok...!

                After three days at the AI in Med 2026 conference in Krakow, Poland, I can say there are valid economic reasons for deploying AI. The doctor shortages are real, here in Canada and in the EU, and this will not get better.

                We need to protect our clinicians by giving them the tools they need to make their jobs easier. If that means relieving some of their cognitive load by reclaiming some admin time (transcription to EMR), so be it.

                AI Scribes are flawed: they make errors, and we know this. So put the human in there to make the corrections when they happen.

                • Kim Locke @Debra Turnbull

                  @Debra-Turnbull What happened to transcription processes with humans?

                  • Debra Turnbull @Kim Locke

                    @Kim-Locke
                    Do you mean this guy:
                    [image: Scribe_1.png]

                    Or are you referring to present day?
                    Current processes in the healthcare sphere involve the clinician reviewing the transcript before copying and pasting the text into the EMR (electronic medical record). This is in healthcare, not in commercial businesses.

                    In the case of the doctor in Ontario who used the Otter.ai product: he had it linked into his email list and didn't think about what would happen when he left the hospital. The system did what it was designed to do and automatically distributed the transcript to all parties. Major privacy breach. (There were also major hospital processes that were not followed after the physician's departure.) Hence why "new" proposals insist on disabling automated processes and placing human oversight in the process, especially before the notes are copied into the EMR. (There are further nightmare stories about patients trying to correct erroneous information in their patient records, but that discussion is for another time.)

                    Bottom line - there needs to be a human "reviewer" in the process.
                    Second point: the "automated" aspect of the AI Scribe cannot be present in the software. Get rid of substitutive AI Scribes; choose only complementary Scribes.

                    • Kim Locke @Debra Turnbull

                      @Debra-Turnbull Hmmm... I was thinking more along the lines of the medical professional literally dictating his notes into a Dictaphone, and then someone (a student?) transcribing what he said.

                      • Debra Turnbull @Kim Locke

                        @Kim-Locke
                        Oh! The transcription houses: yes, those still exist (from what I hear). The transcriptionists are professionals, classified as "agents" by the government. Physicians purchase these services; there are confidentiality agreements, contracts, etc. There is no AI in this case.

                        These humans are the process. The concept of "human-in-the-loop" is about including a human in the AI process. With regards to AI Scribes, that means the physician (human) is included in the transcription process (as reviewer). The goal is for the reviewer-physician to catch the errors that the AI-scribe makes.

                        They do make a lot of errors, even before the "hallucination" (I hate that word) phase.

                        BTW, this is one of the more popular pieces of transcription software out there:
                        [image: Dragon logo_2.png]

                        • Jenna Kedy 0 @Alies Maybee

                          @Alies-Maybee I think the strongest part of the piece is that it acknowledges a reality a lot of people, including myself as a frequent-flyer patient in healthcare, already know: most patients and clinicians are using AI anyway. Not because healthcare workers are lazy or patients want robots instead of humans, but because the system is overloaded. Patients use AI to understand symptoms, and clinics use it to reduce documentation burden, so the "complement, not substitute" framing makes sense to me. The best-case version of healthcare AI isn't replacing human connection; it's creating more room for it.

                          I also think the report is right that moving too slowly has risks too. If healthcare systems ignore AI completely while patients like me and providers adopt it independently, you end up with less oversight, less guidance, and more inequity. That said, I still think transparency and consent have to stay central, especially for marginalized communities who already experience harm or dismissal in healthcare systems. So overall, I'm cautiously optimistic. The technology itself isn't automatically good or bad; it depends on who builds it, how it's governed, and whether it actually improves patient care in real life!
