#0179: Kids and Smartphones
Why the problem with kids and smartphones is software, not hardware
Tags: braingasm, kids, smartphones, social-media, regulation, identity, verification, 2025
Photo by Jon Tyson on Unsplash
[ED: This is another hot topic of mine that I have been thinking about for a long time. It infuriates me that as a society, we have simply thrown our collective hands in the air when it comes to toxic social media and its impacts on kids. 1 out of 5 hats.]
Braingasm
As a nearly 30-year veteran of the tech industry, with approximately 20 years in Australia and 10 years in the UK, I’ve witnessed multiple tech revolutions: the early web in the ’90s, Web 2.0 and smartphones in the 2000s, Web 3.0 in the 2010s, and now the AI and generative AI disruptions of the 2020s. To say the least, I have been at the forefront of, and personally experienced, a great deal of technological disruption over the past three decades.
The ongoing debate about kids and smartphones infuriates me because it misses the core issue: this isn’t a hardware problem; it’s a software problem.
A Personal Perspective
This isn’t just an academic or professional concern for me. I’ve had very personal experience with this topic: both of my children faced social media bullying in high school. The details aren’t important, but suffice it to say that I saw first-hand how damaging online behaviour can be, and that behaviour, had it happened in the real world, would have attracted serious legal consequences.
These experiences have made it abundantly clear to me that our current approach to regulating online spaces, particularly as they affect young people, is fundamentally broken.
Hardware vs Software
The focus shouldn’t be on whether children have access to smartphone hardware, but rather on their access to certain types of software. There is a safe level of hardware use for children, with many beneficial applications. No, kids shouldn’t be on devices 24/7, but there is certainly a healthy balance to be struck.
The real danger lies in specific types of software, particularly social media platforms like TikTok, Instagram, Facebook, Twitter (now “X”), and Snapchat. Research by Jonathan Haidt supports the argument that these apps have no safe level of use for children.
The Social Media Problem
Social networks consistently claim they “can’t” address issues with toxic content being targeted at children. This argument is entirely specious. These companies possess the most powerful AI systems in the world, along with all the data needed to train models that could protect rather than harm young users. They simply don’t want to implement these safeguards because they know that engagement—however toxic—is what drives revenue.
Consider the irony: these are the same companies that can target advertisements with uncanny precision, predict user behaviour with remarkable accuracy, and filter content when it suits their business interests. Yet somehow, when it comes to protecting children, they claim their hands are tied by technical limitations. This is disingenuous at best and deliberately misleading at worst.
A Straightforward Solution
So, what’s the solution? It’s simple, yet often overlooked: social media sites should be required to identify users similarly to how banks verify their customers under KYC/AML regulations. This doesn’t mean users must publicly post their real names, but they should be known to the site operators and verified through regulated ID checks.
To be absolutely clear: users would not need to make their real identity public. That information would be known only to them and the site (or potentially an independent identity verification service). There might even be a case for creating an independent identity verification service that abstracts the real identity from the site identity using strong cryptographic protections. This level of indirection would also help smaller social networks, which could hand off the identification process to third-party providers (for a fee).
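As a rough illustration of that indirection, here’s a minimal sketch in Python. Everything in it is an assumption for illustration only (the IdentityVerifier name, the HMAC-based pseudonym derivation, the age threshold of 16); it isn’t a description of any existing service or standard. The point is simply that the social network only ever sees an opaque, site-scoped handle, while the verifier keeps the real-identity mapping sealed.

```python
# Illustrative sketch only: hypothetical names, not a real service or API.
import hashlib
import hmac
import secrets


class IdentityVerifier:
    """Hypothetical third-party verifier, analogous to a KYC check at a bank."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # verifier's secret pseudonym-derivation key
        self._escrow: dict[str, str] = {}    # site-scoped pseudonym -> verified real identity

    def register(self, real_identity: str, birth_year: int, site_id: str,
                 current_year: int = 2025) -> str | None:
        """Verify age, then return a site-scoped pseudonym; None if under the threshold."""
        if current_year - birth_year < 16:   # assumed minimum age, purely illustrative
            return None
        digest = hmac.new(self._key,
                          f"{real_identity}|{site_id}".encode(),
                          hashlib.sha256).hexdigest()
        pseudonym = f"user-{digest[:16]}"
        self._escrow[pseudonym] = real_identity  # known only to the verifier, never the site
        return pseudonym


# Usage: the social network receives and stores only the opaque handle.
verifier = IdentityVerifier()
handle = verifier.register("Jane Citizen", 1990, "example-social-network")
print(handle)  # e.g. user-4be91c... reveals nothing about the person
```

Because the pseudonym is derived per person and per site, the same individual re-registering on the same site ends up with the same handle, which also makes ban evasion and sock-puppeting harder.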
This approach solves three critical issues:
- It restricts site use to those society deems mature enough to handle online interactions healthily
- It ensures that toxic behaviour can be traced back to the real individual, allowing for appropriate sanctions when necessary
- It would decimate the use of illegal bots deployed to influence society and manipulate public discourse
The Real Obstacle: Profit, Not Technology
Social media companies will argue that implementing proper age verification and content moderation is impossible, but this claim is both self-serving and technically incorrect. These companies already employ sophisticated AI-powered targeting technology that could easily be repurposed to implement such constraints. They already use these systems to power the algorithmic feeds that keep users engaged (and advertised to).
They resist because such measures would reduce active users and engagement, directly impacting profits. Nothing more, nothing less.
Proportional Regulation and Legal Enforcement
It’s important to note that these identification requirements should only apply above some prescribed level of activity—perhaps measured by monthly active users. The threshold at which regulation kicks in should be carefully calibrated. It’s neither necessary nor desirable to apply these constraints to small sites, as doing so would stifle innovation.
In fact, we want to incentivise small sites to innovate with social media to break us out of the hold of the mega data monopolists. A regulatory framework that applies only to platforms above a certain size would create a more diverse ecosystem.
As for legal enforcement, the principle is straightforward: doing something illegal online should be no different from doing something illegal in the real world. The medium shouldn’t matter. If someone engages in activity that requires sanction, it should be possible—under a properly constituted court order or legal action—to tie the illegal online activity back to the real human responsible.
Without such legal orders, the mapping between an online identity and a real human would remain opaque, but if someone breaks the law, that link should become accessible to law enforcement. This would also effectively eliminate the use of illegal bots to influence society. It’s not illegal to operate a bot or an automated service, but there must be a legal human identity tied to its ownership. This approach would kill illegal political bots in a heartbeat.
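To make that concrete, here’s a companion sketch under the same purely illustrative assumptions (hypothetical names like IdentityEscrow and BotRegistry, an invented court-order reference): the pseudonym-to-person mapping stays sealed unless a legal order is recorded, and an automated account is only accepted when it names a verified human owner.

```python
# Illustrative sketch only: hypothetical names, not a real platform's API.
from dataclasses import dataclass, field


@dataclass
class IdentityEscrow:
    """Holds verified pseudonym-to-person mappings; discloses them only against a legal order."""
    _mapping: dict[str, str] = field(default_factory=dict)  # pseudonym -> real identity
    _audit_log: list[str] = field(default_factory=list)     # record of every disclosure

    def seal(self, pseudonym: str, real_identity: str) -> None:
        self._mapping[pseudonym] = real_identity

    def is_verified(self, pseudonym: str) -> bool:
        return pseudonym in self._mapping

    def unmask(self, pseudonym: str, court_order_ref: str | None) -> str | None:
        if not court_order_ref:              # no order, no disclosure
            return None
        self._audit_log.append(f"{court_order_ref}: disclosed {pseudonym}")
        return self._mapping.get(pseudonym)


@dataclass
class BotRegistry:
    """Platform-side rule: every automated account must name a verified human owner."""
    escrow: IdentityEscrow
    _bots: dict[str, str] = field(default_factory=dict)     # bot handle -> owner pseudonym

    def register_bot(self, bot_handle: str, owner_pseudonym: str) -> bool:
        if not self.escrow.is_verified(owner_pseudonym):     # unverified owner: rejected
            return False
        self._bots[bot_handle] = owner_pseudonym
        return True


# Usage: pseudonymous to the site and the public, but never anonymous to the law.
escrow = IdentityEscrow()
escrow.seal("user-4be91c", "Jane Citizen")
registry = BotRegistry(escrow)
print(registry.register_bot("newsbot-01", "user-4be91c"))     # True: bot with a verified owner
print(registry.register_bot("troll-farm-7", "user-unknown"))  # False: no verified owner, rejected
print(escrow.unmask("user-4be91c", court_order_ref=None))     # None: mapping stays sealed
print(escrow.unmask("user-4be91c", court_order_ref="ORDER-2025-042"))  # "Jane Citizen"
```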
Time for Action
Given the well-documented detrimental effects of toxic social media on young people, especially girls aged 12-18, it’s baffling that governments haven’t enforced stricter regulations. We will likely look back on the 2010-2025 period with shame for allowing a few tech giants to inflict such massive, unchecked harm on society.
As Haidt aptly puts it, the last 20 years have been a period where we “overprotected children in the real world while underprotecting them in the virtual world.”
Action is needed, and it starts with fixing the software, not the hardware. The technology to protect our children already exists—what’s missing is the will to implement it.
The choice is clear: we can continue to let profit-driven companies dictate how our children interact with technology, or we can demand accountability and appropriate safeguards. As both a technology professional and a parent, I believe the time for meaningful regulation is long overdue.
Regards,
M@
Reference: The Anxious Generation by Jonathan Haidt
[ED: If you’d like to sign up for this content as an email, click here to join the mailing list.]
First published on matthewsinclair.com and cross-posted on Medium.
hello@matthewsinclair.com | matthewsinclair.com | bsky.app/@matthewsinclair.com | masto.ai/@matthewsinclair | medium.com/@matthewsinclair | xitter/@matthewsinclair