Instagram has announced a new feature that will notify parents if their teenage children repeatedly search for terms related to suicide or self-harm within a short period. The move comes amid growing pressure on governments to adopt measures similar to Australia's ban on social media use by individuals under 16.
Under the initiative, Instagram, which is owned by Meta Platforms Inc., will send alerts to parents who have opted into the supervision setting if their children attempt to access content related to suicide or self-harm. Starting next week, parents in Canada, the United States, Britain, and Australia will receive these notifications.
The platform emphasized that the alerts are part of its ongoing efforts to safeguard teenagers from potentially harmful content on Instagram. The company enforces strict policies against any material that promotes or glorifies suicide or self-harm; its current policy blocks such searches and directs users to support resources.
Governments worldwide are increasingly focused on protecting children from online harm. Concerns intensified following the emergence of the AI chatbot Grok, which has been implicated in creating unauthorized sexualized images. Countries including Britain and Australia have moved to impose restrictions to protect children online, and Spain, Greece, and Slovenia have expressed interest in limiting access to certain online content in recent weeks.
Instagram has also introduced "teen accounts" for users under 16, which require parental consent to modify settings, and parents can opt into an additional layer of monitoring in collaboration with their teenagers. These accounts restrict young users from viewing "sensitive content," such as sexually suggestive or violent material.
