A major social media platform is expanding its default teen safety settings across the European Union and the United Kingdom, automatically placing younger users into a more restricted experience designed to reduce unwanted contact, limit exposure to sensitive content, and make late-night use less likely. The move comes as regulators in both jurisdictions push tech companies to prove that protections for minors are built into products by design—not added later as optional tools.
What is changing for teen accounts
The rollout extends “teen account” protections—previously introduced on one flagship app—across additional services within the same platform group and makes key safeguards the default for under-18 users. While details vary by product, the safety package generally focuses on privacy, messaging controls, content filtering, and usage nudges that encourage healthier routines.
- Private-by-default profiles for teens, reducing visibility to unknown users.
- Stricter messaging rules, limiting who can contact teens and reducing unsolicited requests.
- Lower exposure to sensitive content through tighter default content controls in feeds, search, and recommendations.
- Time and sleep prompts such as break reminders, quiet hours, or “bedtime”-style notifications aimed at curbing late-night scrolling.
- Extra friction for risky changes, with younger teens typically needing parental approval to relax certain settings.
For families, the practical impact is that many of the settings parents previously had to hunt for—or did not know existed—are now enabled automatically. Older teens may still be able to adjust some preferences, but younger teens are expected to need parental consent before protections can be relaxed.
Why the EU and UK are a key battleground
The timing is not accidental. In the EU, the Digital Services Act (DSA) has increased pressure on large platforms to assess and mitigate systemic risks, including harms to minors. The European Commission has also promoted stronger child-protection practices, including guidance on age-appropriate design and approaches to age assurance.
In the UK, the Online Safety Act is reshaping expectations for how platforms handle illegal content and material considered harmful to children. UK regulator Ofcom has signaled that services popular with minors will face close scrutiny—creating strong incentives for platforms to demonstrate measurable safeguards that work by default.
Regulators increasingly want “safety by default” rather than “safety by settings,” meaning protections should be on automatically, especially for minors.
What it could mean for teens in Germany
For users in Germany and elsewhere in the EU, the expansion is likely to be felt in day-to-day interactions: fewer messages from unknown accounts, more conservative content recommendations, and clearer prompts that encourage breaks—particularly at night. For many teens, it may also become harder to quickly switch to more public or permissive settings without additional steps.
German youth-protection advocates have long criticized the gap between controls that are merely available and controls that are actually used. Default settings may narrow that gap, but the results will depend on enforcement: how consistently the platform identifies teen users, how it responds to suspected underage accounts, and how hard it is to bypass protections.
Age assurance and enforcement remain the hard part
Default settings only work if platforms can reliably determine who is a teen. Companies increasingly combine user-provided birthdays with signals such as account behavior, network patterns, and content cues to flag suspicious ages. Some also allow appeals or verification paths for users who believe they were misclassified.
However, age assurance is contentious in Europe because stricter verification can collide with privacy expectations and data-minimization principles. Policymakers are trying to find a middle ground: more reliable age checks without forcing every user to hand over sensitive documents.
What parents should look for
Parents and guardians in the EU and UK can expect the new defaults to appear automatically over time, either through an app update or a server-side rollout. Typical signs include a notification about a “teen account” experience, refreshed privacy settings, or new supervision options.
- Check whether the teen’s account is set to private and who can message them.
- Review sensitive content controls and whether search and recommendations are limited.
- Look for quiet hours, break reminders, or nighttime prompts that can reduce late use.
- If offered, enable parental supervision features that show basic activity insights without reading private messages.
Bottom line
The expansion of teen safety defaults across the EU and UK marks a clear shift in how major platforms respond to public and regulatory pressure: moving protections from optional settings to automatic safeguards. Whether it meaningfully reduces harm will depend on consistent enforcement, transparent reporting, and how effectively platforms prevent adults or older teens from slipping into minor accounts—or minors from slipping out of protections.
