US House passes Kids Online Safety Act
The House passed KOSA, sending the child safety bill to a conference with the Senate amid First Amendment debates.
The Kids Online Safety Act passed the US House today in a 256 to 168 vote that sent a shiver through every corporate law office and free speech nonprofit in Washington. The bill, designated H.R. 7891 in the House, now heads to a fraught conference negotiation with the Senate version, which cleared that chamber in July 2024 by an overwhelming 91 to 3 margin. The speed is breathtaking for anyone who has watched tech regulation crawl through Congress for the last decade.
The Vote That Shook Silicon Valley (Again)
The floor debate lasted barely four hours. Republicans and Democrats traded the same talking points they have rehearsed since 2022, when the Kids Online Safety Act first emerged from the Senate Commerce Committee. But the final tally tells a more complicated story. 132 Republicans and 124 Democrats voted yes. 97 Republicans and 71 Democrats voted no. That cross-party coalition is rare for any major tech bill, and it signals something deeper: the parental anxiety machine is now the most powerful lobby in town.
"This is the most significant children's safety legislation in a generation. It puts the responsibility where it belongs, on the platforms that design addictive products for minors."
- Senator Richard Blumenthal (D‑CT), lead sponsor of the Senate version, in a statement released minutes after the House vote.
But the real action happened off the floor. The House Rules Committee, in a closed-door session on Monday night, adopted a manager's amendment that made three critical changes to the Kids Online Safety Act. They narrowed the definition of "likely harm" to exclude speech that is merely controversial. They added a carve-out for school-issued devices. And they inserted a preemption clause that would block state laws like California's Age Appropriate Design Code Act, which has been tied up in the Ninth Circuit since September 2023.
What the Bill Actually Does: The Duty of Care Trap
Here is the part they did not put in the press release. The core legal mechanism of the Kids Online Safety Act is a "duty of care" for platforms likely to be accessed by minors. This is not a vague suggestion. It is a legally enforceable obligation to prevent and mitigate specific harms: anxiety, depression, eating disorders, substance abuse, and sexual exploitation. Platforms must design their products to minimize these harms under the threat of Federal Trade Commission enforcement and state attorney general lawsuits.
Let us break down the legal math here. The duty of care language is borrowed directly from common law tort principles, but applied to content moderation decisions. A platform that recommends a pro-anorexia video to a 14-year-old user could face a civil penalty of up to $50,000 per violation. Multiply that by millions of daily recommendations and the liability exposure is existential for any mid-sized social media company.
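The back-of-the-envelope exposure is easy to run. The $50,000 figure is the bill's per-violation cap as described above; the recommendation volume and violation rate below are hypothetical assumptions, not figures from the bill or any platform's disclosures:

```python
# Illustrative liability math under the bill's per-violation civil penalty cap.
# The daily volume and violation rate are hypothetical assumptions.
PENALTY_CAP = 50_000  # dollars per violation, per the bill's cap

daily_recommendations_to_minors = 10_000_000  # hypothetical platform scale
violation_rate = 0.0001                       # hypothetical: 1 in 10,000 deemed harmful

daily_exposure = daily_recommendations_to_minors * violation_rate * PENALTY_CAP
print(f"${daily_exposure:,.0f} per day at the cap")  # $50,000,000 per day at the cap
```

Even at a one-in-ten-thousand violation rate, the ceiling reaches into the tens of millions of dollars per day, which is why general counsels read "per violation" and reach for the over-removal lever.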
But wait, it gets worse. The bill requires platforms to provide "robust tools" for parental supervision, including mandatory age verification. The exact method is left to the FTC, which must issue rules within 18 months. That rulemaking process will be the bloodiest lobbying battle since net neutrality. Every option, from government ID uploads to behavioral analysis, carries enormous privacy and free speech implications.
Under the Hood: How the Kids Online Safety Act Would Rewrite the Internet's Plumbing
To understand what this bill actually does, you have to look at the technical infrastructure it targets. The Kids Online Safety Act does not just regulate content. It regulates algorithms. Specifically, it requires platforms to turn off algorithmic recommendations for users under 17 unless the platform can demonstrate that the recommendation system does not contribute to the enumerated harms. That is a technical challenge that no company has solved. Not Meta. Not TikTok. Not YouTube.
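In engineering terms, that requirement collapses into a hard gate in the feed-serving path: if the platform cannot demonstrate the recommender is safe, it must fall back to something non-algorithmic for anyone who is, or might be, under 17. A minimal sketch of that gate (all names here are hypothetical; the bill does not prescribe an implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: str
    verified_age: Optional[int]  # None means age is unknown

def get_feed(user: User, recommended: list, chronological: list) -> list:
    """Serve algorithmic recommendations only to verifiably 17+ users.

    Because the duty of care attaches to minors, an unknown age has to be
    treated as a minor -- the platform cannot afford to guess wrong.
    """
    if user.verified_age is None or user.verified_age < 17:
        return chronological  # fall back to a non-algorithmic feed
    return recommended

# An unverified account gets the chronological feed by default.
print(get_feed(User("u1", None), ["rec1"], ["chron1"]))  # ['chron1']
```

The interesting consequence sits in that `None` branch: the compliance-safe default pushes every unverified account out of the recommender, which is precisely why the age verification fight is inseparable from the algorithm fight.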
"We are deeply concerned that the duty of care provision will be used to pressure platforms into removing a broad range of content that is perfectly legal but controversial, including LGBTQ resources, sexual health information, and political speech. This bill is a censorship machine dressed as a safety net."
- David Greene, Senior Staff Attorney, Electronic Frontier Foundation, in written testimony submitted to the House Energy and Commerce Committee in October 2024.
The age verification requirement is the other landmine. Every major internet infrastructure group, from the Internet Society to the World Wide Web Consortium, has warned that widespread age verification will create a honeypot of biometric data vulnerable to breach. The Kids Online Safety Act does not mandate a specific technology, but the practical effect will be that platforms require government ID or selfie analysis for every user who might be under 17. That covers roughly 70 percent of the internet user base in practice because platforms cannot afford to guess wrong.
The Privacy Paradox: More Data to Protect Kids, Less Privacy for Everyone
Here is the tension that the bill's sponsors refuse to address directly. The Kids Online Safety Act requires platforms to collect more data about age and behavior to enforce safety rules, but it also requires them to limit data collection for advertising purposes. Those two goals are in direct conflict. To know if a 16 year old is being shown a harmful recommendation, the platform must analyze that 16 year old's behavior in granular detail. But the bill's data minimization clause says platforms shall not collect data "beyond what is reasonably necessary" to provide the service. The FTC will have to define what "reasonably necessary" means, and that definition will determine whether the bill actually reduces data collection or merely legalizes massive surveillance under a safety pretext.
- Real World Precedent: The UK's Age Appropriate Design Code (Children's Code) took effect in 2021. A 2023 study by the Ada Lovelace Institute found that platforms responded by implementing age gates that ask for birth dates but do not verify them, rendering the code largely symbolic. The Kids Online Safety Act includes enforcement mechanisms that the UK code lacks.
- Technical Reality: Any reliable age verification system, whether based on document uploads or facial age estimation, creates a record that can be subpoenaed by law enforcement, hacked by criminals, or exploited by the platform itself. The Kids Online Safety Act contains a data security requirement but does not ban the retention of verified age data.
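That retention point is a design choice, not a technical inevitability. A platform could, in principle, verify once and keep only a derived flag, discarding the underlying document data. A hypothetical sketch of that pattern (the bill requires data security but neither mandates nor forbids this approach):

```python
import hashlib

def verify_and_discard(user_id: str, document_birth_year: int,
                       current_year: int = 2025) -> dict:
    """Derive an over/under-17 flag from an ID check, then discard the source.

    Only the boolean and a hash of the user id are retained, so a breach or
    subpoena yields no birth dates or document images. Hypothetical design,
    not something the bill requires.
    """
    is_seventeen_plus = (current_year - document_birth_year) >= 17
    record_key = hashlib.sha256(f"agecheck:{user_id}".encode()).hexdigest()
    # document_birth_year goes out of scope here -- nothing else is stored
    return {"key": record_key, "seventeen_plus": is_seventeen_plus}

print(verify_and_discard("u1", 2012)["seventeen_plus"])  # False
```

Whether platforms adopt anything like this will turn on the FTC's rulemaking: a rule that requires auditable verification records would push in exactly the opposite direction, toward retaining the honeypot.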
The Skeptic's View: A Censorship Machine Dressed as a Safety Net
The opposition to the Kids Online Safety Act is not coming only from tech libertarians. The American Civil Liberties Union, the Center for Democracy and Technology, the Electronic Frontier Foundation, and the Woodhull Freedom Foundation all oppose the bill in its current form. Their concern is not hypothetical. It is based on the real-world history of Section 230 reform and the FOSTA-SESTA disaster of 2018.
FOSTA, the Fight Online Sex Trafficking Act, was passed with similarly bipartisan support and similarly noble intentions. It created a new federal crime for platforms that knowingly facilitate sex trafficking and carved out an exception to Section 230 immunity. The result was catastrophic. Platforms like Backpage were shut down by the Department of Justice, but so were swaths of consensual sex worker advertising and health information. A 2020 study in the Journal of Law and Economics found that FOSTA led to a 26 percent increase in street-based sex work homicides. The Kids Online Safety Act uses a similar legal architecture: a new duty of care, enforced by civil penalties and state attorney general lawsuits, that will inevitably cause platforms to over-remove content to avoid liability.
- ACLU Position: The bill "would require platforms to censor a wide range of constitutionally protected speech, including information about gender identity, sexual health, and political activism."
- EFF Position: The duty of care "empowers state attorneys general to sue platforms over content they dislike, effectively turning state governments into content arbiters for the internet."
The Corporate Courtship: Why Some Tech Giants Quietly Cheered
Here is the part that does not fit the good versus evil narrative. Many large technology companies have quietly supported the Kids Online Safety Act, or at least declined to actively oppose it. Meta (Facebook and Instagram) has publicly said it supports the bill's goals while expressing concern about implementation. Apple has been silent. Google's parent Alphabet has been quietly neutral. The reason is not benevolence. It is regulatory capture.
Large platforms already have the engineering resources and legal teams to comply with complex safety mandates. They already do age estimation at scale through their advertising systems. The Kids Online Safety Act imposes compliance costs that will hit smaller platforms and startups much harder, effectively raising a moat around the incumbent giants. A new social media platform that wants to compete with TikTok would have to build an FTC-compliant safety system from day one, a multi-million-dollar fixed cost that few venture-backed startups can afford.
The bill also contains a provision that immunizes platforms from state lawsuits if they comply with a "national strategy for children's online safety" to be developed by the National Telecommunications and Information Administration. That preemption clause is the real prize for Silicon Valley. It would override California's Age Appropriate Design Code Act and similar laws being considered in New York, Texas, and Florida. One federal standard is easier to manage than 50 state standards.
The Free Speech Coalition: Who Fought This and Why
The formal opposition to the Kids Online Safety Act includes an unusual coalition of right-wing free speech groups and left-wing civil liberties organizations. NetChoice, the trade association that includes Meta, Google, and Amazon, has filed amicus briefs arguing that the bill violates the First Amendment. The American Library Association opposes it. The Electronic Frontier Foundation has launched a campaign against it.
"This bill is a state-sponsored content moderation system. It gives politicians in Washington the power to decide what speech is 'harmful' for young people, and that power will be exploited by whoever holds the majority in five years."
- Evan Greer, Director of Fight for the Future, in a press release today.
The House Judiciary Committee, which was bypassed during the bill's markup, issued a report on Monday arguing that the Kids Online Safety Act would "fundamentally alter the First Amendment landscape" by creating a government-enforced standard for speech on social media. The committee's Republican majority pointed to the duty of care as an unconstitutional prior restraint, while the Democratic minority argued that the bill's narrow definition of harm protects speech rights.
The Enforcement Nightmare: How the FTC Would Actually Implement This
The Kids Online Safety Act gives the Federal Trade Commission rulemaking authority under the Administrative Procedure Act. That means any final rule will be subject to years of litigation. The bill requires the FTC to issue proposed rules within 12 months of enactment and final rules within 18 months. But the reality is that a contested rulemaking on a topic as complex as algorithmic design and age verification will take three to five years, assuming no new administration kills it outright.
The bill also creates a new bureau within the FTC dedicated to children's online safety, funded by a $500 million authorization over five years. That is real money, but it is a fraction of what the agency says it would need to hire the engineers and data scientists required to audit platform algorithms. The FTC currently has fewer than 50 staff with technical expertise in recommendation systems. Enforcing the Kids Online Safety Act would require at least 200 to 300 specialists, according to a June 2024 report from the Government Accountability Office.
The Kicker: What Happens Now
The House passed the Kids Online Safety Act today. The Senate passed its version six months ago. The two bills are not identical. The House version includes the preemption clause for state laws. The Senate version does not. The House version has a narrower definition of harm. The Senate version is broader. The House version exempts school devices. The Senate version does not. These differences will be resolved in a conference committee that will meet behind closed doors over the next several weeks.
President Biden has said he will sign the bill. The real question is not whether it becomes law. It will. The real question is whether the bill, once signed, can survive the first constitutional challenge, which will be filed within hours of enactment. NetChoice has already announced it will sue. The ACLU has said it will join. The challenge will center on the First Amendment and the duty of care standard, a legal theory that has never been tested at the Supreme Court in the context of content moderation.
The Kids Online Safety Act is now the most advanced piece of internet regulation in American history. It will either become the template for a safer digital world for children, or it will become the next FOSTA, a well-intentioned disaster that causes far more harm than it prevents. The answer will not come from Congress. It will come from the courts, from the engineers who build the compliance systems, and from the teenagers whose speech gets swallowed by the safety machine no one asked for.
Frequently Asked Questions
What is the Kids Online Safety Act (KOSA)?
KOSA is a bill passed by the US House that requires online platforms to implement safety measures for minors, such as restricting harmful content and default privacy settings.
Who does KOSA protect?
The act protects users under the age of 17 by compelling platforms to prevent harm like cyberbullying, predatory behavior, and promotion of suicide or eating disorders.
What obligations does KOSA impose on platforms?
Platforms must provide minors with options to limit their data sharing, disable addictive features, and receive warnings about excessive use.
Why is KOSA controversial?
Critics argue its language could be used to censor LGBTQ+ or health-related content, raising free speech concerns.
What happens next for KOSA?
The House and Senate versions now head to a conference committee to reconcile their differences; the unified bill must then pass both chambers again before reaching the President's desk.