
Amid a wave of kids’ online safety laws, age-checking tech comes of age


What Happened

  • Three months after Australia's landmark ban on social media accounts for users under 16 took effect on December 10, 2025, regulators across Europe, Brazil, and several US states are moving to enact similar measures.
  • More than 4.7 million social media accounts judged to be held by under-16s were deactivated, removed, or restricted in Australia within the first month of implementation.
  • The Australian model requires social media platforms to take "reasonable steps" to verify user age — including facial estimation via selfie, uploaded ID documents, or linked bank details — and prohibits sole reliance on self-declaration.
  • Age-verification technology companies have seen a surge in regulatory interest, though serious concerns persist about the accuracy, privacy implications, and circumvention of these systems.
  • A "Ringfence and Destroy" data protocol in Australian law requires that data collected solely for age verification must be segregated from platform advertising algorithms, recommendation engines, and user profiling systems.

Static Topic Bridges

Australia's Online Safety Amendment (Social Media Minimum Age) Act, 2024

Australia's Online Safety Amendment (Social Media Minimum Age) Act 2024 is the world's first national legislation imposing a minimum age requirement for social media platform access, enacted in November 2024 and enforceable from December 10, 2025. It amends the Online Safety Act 2021 and requires platforms to implement systems preventing users under 16 from creating or maintaining accounts. Platforms that fail to take "reasonable steps" face civil penalties of up to AUD 49.5 million per corporation (150,000 penalty units).

  • Platforms in scope: Facebook, Instagram, Snapchat, Threads, YouTube, TikTok, X (formerly Twitter), Reddit, and others designated by the eSafety Commissioner
  • Under-16s and their parents face no penalties — enforcement falls entirely on platforms
  • Verification methods required: "Successive validation" or "waterfall" approach — platforms must use at least one technology-based method before resorting to self-declaration
  • Section 63F: "Ringfence and Destroy" — age-assurance data must be technically segregated and cannot be used for advertising, profiling, or algorithmic recommendations
  • Outcome 3 months in: 4.7 million+ accounts deactivated; widespread circumvention reported (teens using parents' biometrics, fake ages, VPNs)
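The "successive validation" or "waterfall" approach described above can be sketched as a simple decision pipeline. This is a hypothetical illustration, not the Act's actual technical standard: the function names, the checker interface, and the fallback logic are assumptions chosen to show the ordering principle (technology-based methods first, self-declaration only as a last resort, never alone).

```python
from typing import Callable, Optional

def waterfall_age_check(
    tech_checkers: list[Callable[[], Optional[int]]],
    self_declared_age: Optional[int],
    minimum_age: int = 16,
) -> bool:
    """Return True if the user may hold an account.

    Hypothetical sketch: each checker (e.g. facial estimation, ID upload,
    bank linkage) returns an estimated age in years, or None if it cannot
    produce a result for this user.
    """
    # Try technology-based methods in order; the first one that yields an
    # estimate decides the outcome.
    for check in tech_checkers:
        estimate = check()
        if estimate is not None:
            return estimate >= minimum_age
    # No technology-based method produced a result. Self-declaration may
    # then be consulted, but it is never the sole method by construction,
    # because at least one tech check was attempted above.
    if self_declared_age is not None:
        return self_declared_age >= minimum_age
    return False  # no signal at all: deny account creation

# Example: facial estimation returns no result (None), ID upload says 17.
allowed = waterfall_age_check([lambda: None, lambda: 17], self_declared_age=None)
```

The ordering is the point: a platform relying on self-declaration alone would skip the loop entirely, which is exactly what the Act prohibits.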

Connection to this news: Australia's law has become the global reference model for age-based social media restrictions, with the verification technology ecosystem and regulatory framework it mandates now being studied and adapted by jurisdictions across Europe, Latin America, and Asia.


Age-Verification Technologies — Types, Accuracy, and Privacy Trade-offs

Age verification for digital services is an emerging technology domain that seeks to confirm a user's age or age range without necessarily disclosing identity. Technologies range from government ID document upload to biometric facial analysis to third-party digital identity tokens. Each approach involves a different balance of accuracy, privacy risk, cost, and accessibility.

  • Document-based verification: Upload of government ID (passport, driving licence, Aadhaar equivalent); high accuracy but requires sharing sensitive identity documents with platforms or third-party processors
  • Facial age estimation: AI models estimate age from a selfie; accuracy varies — most systems are accurate within ±2–3 years for adults but less reliable at the 14–18 boundary; one Australian teen reportedly bypassed the system using a dog's photo, illustrating current limitations
  • Device-based signals: Parental controls, operating system-level age settings (Apple, Google both offer these)
  • Linked financial credentials: Bank account linkage as age proxy (bank accounts require age verification); high accuracy but excludes unbanked users
  • Privacy concern: Biometric data (facial scan) is among the most sensitive personal data; normalising facial verification for everyday online services could erode privacy norms
  • The UK Age Appropriate Design Code (Children's Code, 2021) is an alternative regulatory approach that imposes data minimisation and safety-by-default standards on platforms, rather than age verification at entry
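Because facial age estimation carries a ±2–3 year error margin, deployed systems typically do not treat the raw estimate as a hard pass/fail. One common pattern is a buffered decision rule: users well clear of the threshold pass, users well under it are blocked, and borderline estimates are escalated to a stronger method such as ID upload. The sketch below illustrates that pattern only; the numeric buffer and function name are illustrative assumptions, not values from any law or vendor system.

```python
def facial_estimation_decision(estimated_age: float,
                               minimum_age: int = 16,
                               error_margin: float = 3.0) -> str:
    """Illustrative buffered decision rule for facial age estimation.

    The error_margin reflects the model's typical estimation error; 3.0
    is an assumed value for illustration.
    """
    if estimated_age >= minimum_age + error_margin:
        return "pass"       # confidently above the legal minimum
    if estimated_age < minimum_age - error_margin:
        return "block"      # confidently below the legal minimum
    return "escalate"       # inside the uncertainty band: require ID

# A 22-year-old estimate passes outright; an estimate of 15.5 falls in
# the 13-19 uncertainty band and is escalated rather than decided here.
```

Note how this widens, rather than resolves, the 14–18 boundary problem: the less accurate the model is at adolescent ages, the larger the band of users pushed into document-based verification, with its attendant privacy costs.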

Connection to this news: The imperfections in current age-verification technology — both accuracy failures and privacy costs — form the central tension in the global debate, as regulators in Europe and Brazil model their own laws on Australia's framework while grappling with the same unresolved technological limitations.


India's Regulatory Landscape: DPDPA 2023 and Children's Data Protection

India does not yet have a standalone social media minimum age law equivalent to Australia's Act, but the regulatory landscape is evolving rapidly. The Digital Personal Data Protection Act, 2023 (DPDPA) includes specific provisions for children's data, defining a "child" as a person below 18 years, and requiring "verifiable parental consent" before processing a child's personal data. The Act also prohibits processing of children's data for targeted advertising and behavioural monitoring.

  • Digital Personal Data Protection Act, 2023: Enacted August 2023; Rules under finalisation as of early 2026
  • "Verifiable parental consent": DPDPA mandates platforms to obtain verifiable consent before collecting a child's data, but Rules have not yet specified the verification mechanism
  • IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021: Require significant social media intermediaries (5 million+ users) to implement a complaints mechanism but do not impose age verification obligations
  • Information Technology Act, 2000 (Section 67B): Prohibits transmission of sexually explicit content involving minors online — but this is a content prohibition, not an access restriction
  • National Cyber Safety and Security Standards: Under development, covering age-appropriate design principles
  • India's DPDPA approach mirrors the UK Children's Code more than Australia's outright ban — focusing on consent and data minimisation rather than platform-level access restrictions

Connection to this news: India's DPDPA provides a legal foundation for child data protection, but the government must still operationalise the definition of "verifiable parental consent" through Rules — the global wave of age-verification legislation directly informs what technical and legal standards India is likely to adopt.

Key Facts & Data

  • Australia's Social Media Minimum Age Act: Enforceable from December 10, 2025
  • Minimum age threshold (Australia): 16 years
  • Platforms in scope (Australia): Facebook, Instagram, TikTok, Snapchat, YouTube, X, Reddit, Threads
  • Accounts deactivated/restricted in first month (Australia): 4.7 million+
  • Maximum platform fine (Australia): AUD 49.5 million per corporation
  • Countries considering similar laws (as of February 2026): France, UK, Malaysia, Germany, Italy, Greece, Spain, Brazil
  • India's DPDPA (2023): Defines child as below 18; requires verifiable parental consent
  • UK Children's Code (Age Appropriate Design Code, 2021): Data minimisation + safety-by-default approach
  • Facial age estimation accuracy: ±2–3 years for adults; less reliable at adolescent boundary
  • "Ringfence and Destroy" protocol (Section 63F, Australia): Age-assurance data must be segregated from platform's commercial data systems