In a world shaped by digital media, children are more connected than ever. Yet these connections come with a cost: lowered attention spans, exposure to inappropriate content, and a loss of childhood innocence. Recognizing these risks, many U.S. state governments have proposed stricter limits on social media use for people under 18. Proposals include AI age authentication, screen time limits, and removing most or all access to Not Safe For Work (NSFW) content, all aimed at creating a healthier and safer online environment for children. Although some of these protective measures have been criticized as “unreasonable,” they are a necessary response to an urgent public crisis.
Today, social media is a common way to communicate, be entertained, and connect. However, its design encourages addictive habits that can damage young minds. Studies published in the International Journal of Mental Health and Addiction have shown that excessive screen time is associated with lowered attention spans, poor academic performance, and an increased risk of mental illnesses such as anxiety and depression. The American Academy of Pediatrics recommends that adolescents have “no more than two hours of recreational screen time daily,” yet a 2023 Gallup survey found that many teens report spending up to seven hours a day on social media alone.
Beyond these general harms, platforms are often filled with harmful or age-inappropriate content, including explicit images and videos, cyberbullying, vulgar language, and misinformation. Adolescents are especially vulnerable to such content; according to Heidi Moawad’s work published on VeryWellHealth, many pre-teens’ and teenagers’ cognitive and emotional regulation systems are still developing. So far, the largely laissez-faire approach taken by U.S. regulators and tech companies such as Meta and TikTok has proven inadequate. Without clear and meaningful intervention, today’s digital media will continue to exploit young users and erode their mental health and developmental integrity.
To remedy social media’s damaging impact on adolescents, state governments should adopt a three-step plan:
- Verifying age through AI facial authentication
- Limiting screen time
- Restricting NSFW content
Age verification is a policy many state governments have already implemented. Traditional methods, like self-reported birth dates, are easily bypassed. AI facial recognition offers a stronger option, since biometric data is analyzed to confirm a user’s age. Questions remain about its reliability, however: makeup might fool the recognition software, and individuals with youthful faces might be misclassified and locked out. Critics also argue that this technology raises serious privacy concerns. Proponents point to safeguards such as edge computing, where data is processed locally on the device, and encryption for any stored data. Done well, this form of authentication lets platforms enforce under-18 policies effectively, keeping young users from creating accounts that expose them to restricted content.
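To make the verification step concrete, the gating logic such a system might apply can be sketched in a few lines. This is a toy illustration, not any state’s actual system: `estimate_age` stands in for a real computer-vision model, and the cutoff and safety margin are assumed values. The conservative design addresses the misclassification worry above by routing borderline estimates to a secondary check rather than locking users out.

```python
def estimate_age(image_features):
    # Stub for illustration: a real system would run a trained
    # vision model, ideally on-device (edge computing), and return
    # an age estimate plus an uncertainty value.
    return image_features["age"], image_features["uncertainty"]

def verification_decision(image_features, cutoff=18, min_margin=2):
    """Allow access only when the age estimate clears the cutoff by a
    safety margin; borderline or youthful-looking faces fall back to a
    secondary method (e.g., a document check) instead of a hard lockout."""
    age, uncertainty = estimate_age(image_features)
    margin = max(uncertainty, min_margin)
    if age - margin >= cutoff:
        return "allow"
    return "secondary_verification"
```

The key design choice is that uncertainty counts against access: a 19-year-old with a noisy estimate is asked for another proof of age rather than misclassified in either direction.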
The second key proposal is mandated screen time limits for minors. Platforms would use activity tracking to enforce these limits, pausing access once a young user’s daily quota is reached. The caps are intended to curb overuse, encouraging youth to spend more time on offline activities that promote cognitive development and physical health rather than “doom-scrolling” mindless content.
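The quota mechanism described above amounts to simple per-day bookkeeping. The sketch below is a minimal illustration, not a platform’s real implementation; the 120-minute default mirrors the two-hour AAP guideline cited earlier, and the class and method names are hypothetical.

```python
from datetime import date

class ScreenTimeQuota:
    """Toy per-day usage tracker: sessions accumulate against a daily
    limit, and access locks once the quota is exhausted."""

    def __init__(self, daily_limit_minutes=120):
        self.daily_limit = daily_limit_minutes
        self._usage = {}  # maps a calendar date to minutes used that day

    def record_session(self, day: date, minutes: int) -> None:
        self._usage[day] = self._usage.get(day, 0) + minutes

    def remaining(self, day: date) -> int:
        return max(0, self.daily_limit - self._usage.get(day, 0))

    def is_locked(self, day: date) -> bool:
        # Access pauses once the daily quota is used up; the counter
        # resets automatically because each date is tracked separately.
        return self.remaining(day) == 0
```

Tracking usage per calendar date means no explicit midnight-reset job is needed: a new day simply starts with an empty entry.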
Perhaps the most contentious proposal is a mandate for platforms to significantly reduce, or eliminate altogether, NSFW content accessible to minors. Platforms like TikTok and Instagram already run such measures, but subtler forms of NSFW content slip past their filters. AI-powered content filters could block explicit material more reliably, creating a digital environment that prioritizes safety and age-appropriate engagement. Although harmful content may never be fully eradicated, these methods would significantly reduce its accessibility and shield young users from damaging material.

While these proposals are intended to keep children safe, critics raise concerns about implementation, privacy, and personal freedom. Past experience with AI age verification suggests it can infringe on privacy, since facial recognition technology could be misused or create security vulnerabilities. To guard against this, legislators must require that all biometric data be encrypted, anonymized, and retained no longer than necessary.
Opponents of stricter limits also argue that government-enforced screen time rules intrude on family life, overriding parents’ authority to make decisions for their own children. This view is shortsighted, however. Just as the government restricts minors’ access to harmful adult activities like smoking and drinking, it has both the right and the responsibility to minimize the harm of unregulated social media use.
The issue of social media and children is part of a broader social question: when we assess the nation’s progress, what future are we fostering? Youth are vulnerable and impressionable, and adolescence should be a time of exploration, growing awareness of the world, and social-emotional learning through appropriate educational and developmental opportunities. Instead, many adolescents now spend more time doom-scrolling during the day than they do sleeping at night, and social media exploits their vulnerability while eroding their innocence and creativity. Implementing these limits is not about restricting freedom; it is about striking a balance in which government intervention helps young people navigate the complexities of the digital world safely. Social media platforms profit heavily from engagement and so have no incentive to self-regulate. Government regulation is therefore essential to balance business interests against the protection of youth.
State policy efforts to protect children from excessive exposure and the mental health problems that follow have recently come to fruition. No policy is perfect, but AI age verification, screen time limits, and restricted NSFW exposure build naturally on existing policy to reduce overuse and potential hazards. Such policies would redirect the current trajectory of the digital world toward something healthier for children, honoring their innocence and psychological realities. They will require practical implementation, collaboration with social media companies, and continued national and international discourse. Ultimately, their success depends on a cultural shift that values young users more than the profits big tech could amass. Protecting young minds and lost innocence is a moral obligation that goes beyond policy-driven incentives.