Sharing & Discussions

    Otter.AI: What do you think?
    • Robert Wells

      Hi:

      Recently I was in a Zoom meeting, and one of the participants had deployed otter.AI (https://otter.ai/) to attend the meeting autonomously and take notes. She was called upon in the meeting, and the "bot" answered on her behalf.

      I am really not sure what to think about this. A useful tool, but what about consent?

      What do you think? Comments please.

      Rob

      • Chris Johnston

        I share your concerns, Robert. In my experience, bots are either condoned or rejected depending on the nature of the meeting. Researchers particularly dislike them, since they put unpublished research at risk.

        Usually, the bot introduces itself with an opt-out for attendees, though it's not clear how reliable that is to be honest.

        As a person with cognitive issues, I can see the benefits for the user, though I'd still want to attend personally; otherwise, how would I know if it had hallucinated or misinterpreted parts of the discussion? I wouldn't want it to reply on my behalf, and I wouldn't feel comfortable accepting responses from bots on behalf of others.

        As a participant, I have reached out on a couple of occasions to project leads to check whether the person deploying the bot had sought permission, and whether any consideration had been given to consent. On one occasion they had, and there was an adequate rationale. On the other, it triggered a discussion at the next meeting, and bots were voted to be excluded unless prior permission was granted. In both instances permission covered note-taking only, not direct responses from the bot.

        It's definitely something to watch for the future, since their use is noticeably increasing.

        • Alies Maybee

          @Robert-Wells Yikes! We might get to the point where bots are the only attendees at this rate.

          • Jennifer DiRaimo - MacWhirter

            At my workplace, someone on the team used the software and invited all participants to request the notes after a meeting. Now it automatically pops up in all our virtual meetings, and it's difficult to get rid of. It's also strange because it says your notetaker is on the call even before you have logged in! Half the time I have no idea it's even on when I am on a call. I didn't realize it responds to others when you are not there; I do know it sends automatic messages in the chat, but usually they are general requests for the notes. It does summarize meetings well, including highlighting action items.

            • Debra Turnbull

              @Robert-Wells
              I managed to snag this from OntarioMD's webpages:

              [Image: AI_OMD 2026-01-06.png]

              The Information and Privacy Commissioner (IPC) of Ontario is aware of the above case (I'm not sure if it was Otter.AI). It is a perfect example of the consequences of an 'agentic AI' tool.

              Great for taking notes; NOT designed for healthcare... or other environments where privacy is needed.

              • Robert Wells

                Thanks for the insights! It does come down to consent and good practices by the meeting organizer/chair.

                It was quite a surprise for me. How can we equip our members with the knowledge and tools to best handle this situation when it arises?

                • Debra Turnbull

                  @Robert-Wells
                  I guess the way to think about this is that there are two types of AI Scribes: commercial, where consent is not needed, and healthcare, where consent IS NECESSARY. The legal ramifications in each context are going to be different.

                  I read a comment on how someone 'loved' their AI Scribe and how it took over note-taking responsibilities. They were then able to focus on the conversation. Think about it: in a marketing meeting or a technical meeting, this would be advantageous. The parties discuss points rather than recalling and scribbling notes.

                  In the case of healthcare delivery, privacy legislation and the clinician/patient information exchange are very precise. When a doctor walks into the room with a patient, there is a mandate on the part of the doctor to obtain key pieces of information (and vice versa). Symptoms, analysis, diagnosis: these are the information constructs on which the practice of medicine is based. BTW, I'm not a doctor... but I am fascinated by how these things are represented digitally, i.e. as data. Then there is the legal/ethical aspect of obtaining consent.

                  In the commercial realm consent is not needed... unless it gets interpreted as spying; honestly, I don't know what the legal consequences are.

                  In healthcare, consent is a MUST! Otherwise, the clinician is in breach of duty to their license, college and oath.

                  • Alies Maybee

                    @Robert-Wells I have been thinking about this. When doctors started using EMRs instead of paper records, we were not asked to consent. I suspect that after the initial burst of use with consent, AI Scribes will be seen in the same way as EMRs, with no consent required.

                    Thoughts?

                    • Chris Johnston

                      @Alies-Maybee @Robert-Wells

                      I think that's very much the line that vendors - as well as many enthusiastic clinicians - would like us to believe. I've heard variants including: we didn't need consent for fax machines / email / billing software, etc., so we shouldn't need it for AI.

                      That might be true if AI were handling just background administrative data, or if it were static in its capacities and abilities, but it's not. Already we're seeing AI behave in unpredictable ways, and as agentic capacity develops it becomes more problematic, not less. EMRs don't make decisions, and fax machines don't choose what content to send or where, but increasingly AI can and does.

                      As patients, consent is probably our only significant bargaining chip in this game, and instead of simply surrendering, we should be looking to leverage it for better access, better control, better regulation, better safeguards and better care.
