Sharing & Discussions

    Manitoba to ban social media, AI chatbots for youth — a first in Canada
    • Paula Orecklin

      Paula Orecklin paula.orecklin@gmail.com
      Apr 27, 2026, 10:47 AM (6 days ago)
      to PAN

      https://www.cbc.ca/news/canada/manitoba/manitoba-social-media-age-restrictions-9.7177470

      Hey,

      What do you all think about this? How will it impact kids/teens who are using AI to learn about their health issues?

      It’s also going to really impact everyone using social media to connect with others dealing with the same things.

      There are tons of issues with social media and AI chatbots, and the societal impacts have been (literally, in many countries) revolutionary. But the same platforms that produced ISIL also produced MeToo.

      What do you all think? Is this going to impact young patients’ lives more positively or negatively, or more likely, in a complex combination that won't even be fully understood for decades to come?

      Or is this just totally pointless, and kids are going to get past any blocks easily? I'm concerned that the only people it’ll hurt are the ones who are now scared to tell anyone about problems like harassment - or worse - because they aren’t supposed to be on the platforms in the first place.

      I know this is a wider topic than digital health and AI, but I feel that given that any patients will face these issues, it’s worth discussing.

      What do you all think?

      • from the digital health group's emails
      • Paula Orecklin

        @Paula-Orecklin

        It's worth noting that, since this was published, the premier has gone further: he now says he wants to fine tech companies if they allow use by teens here, and in proportion to their financial size!

        https://www.cbc.ca/news/canada/manitoba/kinew-on-ai-social-media-youth-ban-9.7180688

        But it strongly reminds me of a couple of years ago, when the federal government passed a law requiring platforms to compensate media outlets for their work being shared. You know what the platforms did? It sure wasn't pay. Instead, they just stopped sharing Canadian news, and all of these local and regional news outlets lost a ton of their viewership.
        https://www.cbc.ca/news/politics/google-canada-online-news-1.6892879

        Like, I support Kinew, and it's probably a good move psychologically for teens, but we're a province of 1.5 million. We do not have the kind of pull to make Meta or Alphabet give a crap about our userbase. I won't be surprised if they just yank social media from all of us.

        I can only hope that the eventual result now is as decent as the one Google and the federal government hashed out eventually. Google agreed to pay $100 million a year (tied to inflation) to a non-profit run by a group of independent* news orgs, which doles it out to news outlets across the country. https://globalnews.ca/news/10553090/google-compensation-canada-online-news-act/

        But we have far less pull, demographically, financially, and politically, than Canada as a whole does, and even so it took months to reach that agreement. And this time, the fight would be with all of the big companies, not just Google alone. Even worse, it's with all of the social media giants, plus the AI ones.

        *I don't know how independent they actually are, or which ones are run by mass media conglomerates, and I feel too worn down to look into who's actually involved.

        • Debra Turnbull

          @Paula-Orecklin
          We cannot rely on tech companies to police social issues; they have failed.

          See CTV's coverage of the 764 network and of Tumbler Ridge as precedents for these harms.

          Personally, I applaud your premier.

          • Debra Turnbull

            @Paula-Orecklin
            I wanted to see what CAMH's position on this was, and found the following:
            CAMH: GGTU.
            It looks like it's aimed at therapists supporting kids; however, it references a lot of the research papers.

            CAMH - Ontario's Centre for Addiction and Mental Health.
            GGTU - the Gambling, Gaming and Technology Use program at CAMH.

            • Debra Turnbull

              I think, first of all, it is a mistake to mix up social media and AI. These are two different types of algorithms, performing two completely different functions with different purposes. Mixing the two together adds to the confusion, and in turn produces more misinformation.

              Social media support groups have been a positive. Finding people who suffer from the same things you do, exchanging ideas on what is helpful or not, recommending local in-person support groups, locating information: these are useful activities. The flip side is social media trolling and the promotion of misinformation; these are the negative outcomes.

              The key is for the user to be able to discern the good from the bad, the credible from the misinformation. The problem here is the developing brain: the teen brain. The developing child/teen brain does not yet have the capacity to think critically. This is why we are seeing such horror stories involving social media and kids. Like the point made by the grieving father in the article: kids need to be of a legal age to drive, drink, and vote, so why not to begin using social media? The evidence of harm is piling up. We know this; let's stop it.

              Now, when it comes to AI chatbots, we know that AI drift is a thing. Large language models (LLMs) change over time. (It has something to do with the original training data no longer reflecting current real-world data.) This shows up as bias, which means the model is degrading. There is a lot of talk about guardrails and oversight, but how, and who will do this? I suspect many eyes will be needed, and this is where public reporting will come in; hence, public consultations. But here's the thing: will the public know what to look for?
