The LLM Public Offering: Why One S-1 Filing Will Reshape AI’s Governance

By: Joyce Jia

Photo by Beyzaa Yurtkuran on Pexels.com

In early December 2025, the Financial Times reported that Anthropic retained the Palo Alto-based law firm Wilson Sonsini to help facilitate a public offering as early as 2026. The move may signal a strategic effort to outpace rival OpenAI, which had also been eyeing a 2026 listing. Following the report, OpenAI executives privately expressed concern about being beaten to market. Regardless of who lists first, 2026 is shaping up to be a landmark year for AI in public markets, and a long-awaited opportunity for investors and observers to lift the veil on the economics of the generative AI giants driving the Fourth Industrial Revolution.

More than an IPO: A Governance Blueprint for the AI Era

As AI systems increasingly power healthcare systems, financial markets, communications, and public services, they are functionally becoming part of the nation’s core infrastructure. Unlike pharmaceuticals subject to FDA approval, financial intermediaries registered with the SEC, or telecommunications providers licensed by the FCC, frontier AI developers operate without a comparable, unified regulatory gatekeeper. Existing oversight has been fragmented across state privacy regimes and FTC consumer protection authority, leaving the public with significant gaps in understanding how these companies operate, what their models optimize for, and how deeply they depend on cloud infrastructure, semiconductor supply chains, and energy capacity. Moreover, propelled by unprecedented growth trajectories, they have financed their expansion almost entirely through venture capital and private placements, without being subject to the public disclosure obligations and public accountability that securities registration demands.

When Anthropic or OpenAI files for its initial public offering, it will become the first frontier AI company to submit its business model to the full discipline of SEC disclosure. The filing will not merely determine one company’s market valuation. It will likely shape the disclosure template and regulatory precedent governing how artificial intelligence integrates into public capital markets, compelling for the first time a legally enforceable accounting of how frontier AI companies actually operate, what system risks they pose, and whether traditional corporate governance frameworks are adequate to contain them. 

Tip of the Iceberg: What SEC Disclosure Will Finally Force Frontier AI to Reveal

The S-1 registration statement must include all information specifically required by Form S-1, as well as any material information needed to prevent the included statements from being misleading. Current SEC disclosure requirements do not specifically address modern AI risks, though the SEC’s Division of Examinations has identified AI as a priority focus area. Under established materiality doctrine, disclosure is required for any matter to which a reasonable investor would attach importance in deciding whether to purchase a security. Furthermore, Item 105 of Regulation S-K requires a “Risk Factors” section discussing material factors that make an investment speculative or risky, with a concise explanation of how each risk affects the registrant.

Under this framework, investors will likely see for the first time the size of an LLM company’s accumulated deficit and its projected path to profitability. They will also gain visibility into the terms of related-party contracts and material agreements with cloud and semiconductor suppliers who serve as both vendors and strategic investors. Disclosure will additionally include training data liabilities such as the Bartz v. Anthropic settlement. Finally, “Business” and “Management’s Discussion and Analysis” sections may provide a first-of-its-kind monetization narrative, explaining what these companies’ algorithms are optimized for, what commercial incentives are encoded in model training, and how ecosystem allocation decisions generate indirect value. Together, these disclosures will expose the structural economics of frontier AI development in a way that no private financing round has ever required.

Disclosure priorities will also differ across companies. Anthropic’s Long-Term Benefit Trust grants outside directors veto power over decisions conflicting with the company’s safety mission, even where those decisions would maximize shareholder value. This structure directly challenges the shareholder primacy doctrine established in Dodge v. Ford, the 1919 Michigan Supreme Court decision holding that corporate directors must prioritize shareholder returns over broader social objectives. The arrangement raises a critical question: Can mission-driven governance survive activist investors and proxy battles once the company goes public? 

OpenAI, having completed its conversion to a Delaware public benefit corporation in October 2025, faces a structurally distinct set of disclosures: the terms of its nonprofit’s retained equity stake, the scope and exclusivity of its Microsoft dependency, and the resolution of Elon Musk’s active litigation challenging the conversion will each require disclosure under existing SEC frameworks in configurations not previously tested at this scale.

Materiality Standards for the AI Future

Much of what frontier AI companies should disclose is already compelled by existing SEC rules. What is needed is rigorous application of the comment process to establish materiality standards specific to the AI sector. Anthropic published a new model Constitution last month articulating how its AI systems should reason and prioritize competing values. OpenAI, for its part, has committed to a governance structure that retains a nonprofit equity stake precisely to preserve the primacy of safety and security obligations even after its corporate restructuring. Both are well positioned to lead on disclosure voluntarily, setting best practices that regulators and competitors will be compelled to follow. That makes these securities filings as consequential as any AI safety research. Markets, no less than models, need alignment.

When Proof Becomes Partisan: How AI Is Fracturing Our Shared Reality

By: Claire Kenneally

What’s Happening? 

Note: This post contains photos of weapons and discusses violence that may be triggering. 

The shootings of Renee Good and Alex Pretti gripped the nation, as viewers across the country tuned in to the violence in Minneapolis, Minnesota. Both Good and Pretti were protestors killed by U.S. Immigration and Customs Enforcement (ICE) agents deployed in Operation Metro Surge, a large-scale immigration raid that resulted in the detention of over 4,000 documented and undocumented Minnesotans.

But instead of prompting collective grief, the country’s varied reactions only deepened the sense that Americans now inhabit entirely different moral and political universes. In part, this stems from media coverage surrounding the shootings. AI-generated media is accelerating political polarization by reshaping public perception of violent events before facts can stabilize.

Where Does Our News Get Its News? 

News networks have always had biases. But the internet and social media have shifted networks’ priorities from well-researched and fact-based journalism to punchy pieces aimed at generating clicks. A study by UCLA Professor Arash Amini described the media’s perpetuation of misinformation as an “arms race in which mainstream media outlets struggle to stand out amid a flood of content.” 

When Renee Good was shot by ICE on January 7th, 2026, online sleuths were quick to create and point to video footage of the moments leading up to the shooting. One X user, a self-proclaimed “Trump Loyalist Parody Account,” shared an aerial-view photo of Good’s car that quickly gained traction online. The original post did not disclose that the photo was AI-generated, nor that it was created only to show why the X user personally believed agents should be allowed to use lethal force. A later post admitted the photo was AI-generated, after Snopes and Lead Stories debunked it (in part because the falsified photo showed car doors open that were actually closed, and bystanders in different outfits than those worn in verified video footage). But the original photo, without the subsequent clarification, had already spread across the internet. 

Image taken from Twitter user @ScummyMummy511/Max Nesterak/Snopes Illustration

In the wake of Alex Pretti’s death a few weeks later, social media users flooded the internet with AI-generated images of him holding a gun, though video footage of the shooting later revealed he was holding his cell phone. Other doctored footage that circulated inaccurately showed an agent holding a gun to the back of Pretti’s head while he lay prone on the ground. 

Photo taken from the January 25th, 2026 New York Times article “False Posts and Altered Images Distort Views of Minnesota Shooting”

Ironically, the AI enhancement of media to justify a singular political narrative is not a partisan issue. Viewers across the political spectrum suffer when the news they consume is riddled with inaccuracies, even when those inaccuracies confirm their biases. This, in turn, only widens the gap between Americans: how can you converse about an important social issue when each side is convinced it holds concrete evidence supporting its own opinion? 

Far-Reaching Implications

In the cases of Good, Pretti, and countless other victims of violence, another troubling aspect in the coverage of their deaths is how their images are reshaped posthumously to fit familiar narratives and biases. 

Altered photos of Good circulated in the wake of her death, purporting to show that she had posted in celebration of Charlie Kirk’s shooting earlier this year. This led some conservative-leaning viewers to call her death “karma,” stoking animosity against her even after the photos were fact-checked and disproven. 

In Pretti’s case, multiple stations accidentally used an AI-doctored photo of him in their initial coverage. Some speculate the edited photo was created to make him appear “less Jewish.” Others suggested it was meant to make him “. . . a more appealing ‘poster boy’ for the anti-ICE movement.” MSNBC, which used the altered photo, claimed that it merely pulled the image from the internet after searching Pretti’s name. Given AI’s predilection for generating images that favor whiteness, conventional beauty standards, and embedded racial biases, it is unsurprising, but disturbing, how mundane Google Image searches further subtle whitewashing and stereotyping under the guise of algorithmic neutrality.

Photo taken from the January 28th, 2026 New York Post article “AI-altered pic of Alex Pretti is being widely circulated after his killing by Border Patrol” 

The Legal Implications

Currently, there are few legal protections in place to counteract AI-“enhanced” media. The current administration has taken a strong stance against AI regulation, issuing Executive Order 14179 on January 23, 2025 (“Removing Barriers to American Leadership in Artificial Intelligence”) and Executive Order 14365 on December 11, 2025 (“Ensuring a National Policy Framework for Artificial Intelligence”). Both orders removed barriers like statewide AI regulations to “encourage adoption of AI applications across sectors.”  

The legislative branch has taken a different approach than the executive. The TAKE IT DOWN Act was signed into law in May 2025 and prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated. However, the act focuses on sexual imagery and would not apply to Good or Pretti.  

The 2025 proposed NO FAKES Act holds more promise, as it would make individuals and entities legally responsible for creating unauthorized digital copies of a person. The Act is focused on establishing intellectual property rights against AI-generated deepfakes and creating a legal recourse for individuals whose likeness is used without consent. If this bill passes, it could create a way for members of Good and Pretti’s families to sue in civil court for damages. 

So What Am I Supposed to Do?  

The 24-hour news cycle is not slowing down. But with minimal legal recourse and an executive office that happily generates AI misinformation as freely as fringe conspiracy theorists, sometimes it feels that there’s not much to do but hope Congress acts quickly. However, one intermediate solution is turning to local, community journalism instead of national pundits. In the realm of immigration crackdowns, protest coverage, and updates on Minneapolis, consider:  

In the emerging age of AI narratives and de-regulation, staying truly informed now means choosing intention over speed, resisting easy narratives, and doing the critical thinking ourselves. 

This article acknowledges and honors all individuals who have been murdered by ICE and the Department of Homeland Security in 2026: Keith Porter, Parady La, Heber Sanchaz Domínguez, Victor Manuel Diaz, Luis Beltran Yanez-Cruz, Luis Gustavo Nunez Caceres, Geraldo Lunas Campos, Renee Good, and Alex Pretti. Learn more about their stories here.

#Deepfakes #A.I.Misinformation #WJLTA

The “Veto Power” of Fragments: Why A$AP Rocky’s “Don’t Be Dumb” Almost Didn’t Exist

By: Francis Yoon

After an eight-year hiatus and a chaotic three-year rollout plagued by leaks and complex clearance battles, A$AP Rocky finally released his fourth studio album, Don’t Be Dumb, on January 16, 2026. The album’s success was immediate, debuting at number one on the Billboard 200 and breaking streaming records for the year. Yet, for many in the industry, the album’s protracted journey remains a sobering case study in intellectual property gridlock. Behind the scenes, the project was reportedly paralyzed for years by the administrative burden of sample clearances, a process that grants recording owners absolute discretionary authority to block a release. Rocky’s public admission that “sample clearances” were disrupting the album underscores a growing crisis in music law: the absolute “veto power” of sound recording owners and the conspicuous absence of a compulsory licensing system to protect transformative art in the digital age.

The “Two-Tiered” Trap of Music Copyright

To understand the bottleneck, one must examine the two distinct copyrights inherent in every recorded song. The first is the musical work, which encompasses the compositional “DNA” of the song, including melody, lyrics, and arrangement. Under Section 115 of the Copyright Act, musical works are subject to a “compulsory license,” a vital safety valve that allows an artist to record a cover of a song without seeking the original owner’s permission, provided they pay a government-set statutory rate. This system ensures creators receive compensation while preventing them from impeding the progress of science and useful arts by gatekeeping a melody.

The second copyright is the sound recording, often referred to as the “master.” Unlike the composition, sound recordings are governed by Section 114, which offers no such compulsory mechanism. The owner of a recording has absolute discretion to say “no” for any reason, demand 100% of a new song’s equity, or simply ignore a request indefinitely. In Rocky’s case, this discrepancy meant that while he could easily cover a song, his attempt to sample existing recordings turned his creative process into a multi-year hostage situation.

The Legacy of Bridgeport and the Death of De Minimis

The current “veto power” is not just a statutory quirk; it is the product of a rigid judicial history. In the 2005 case Bridgeport Music v. Dimension Films, the Sixth Circuit famously decreed, “Get a license or do not sample.” This ruling effectively killed the de minimis defense for sound recordings, which is the longstanding legal principle that the law does not concern itself with trifles. While a filmmaker might display a copyrighted logo in the background of a shot under “fair use,” a musician today cannot use a one-second audio fragment or a distorted snare hit without risking suppression, as exemplified by the injunction ordering Biz Markie’s album I Need a Haircut to be pulled from sale.

This creates a massive “holdout” problem. Because there is no legal “safe zone” for even the smallest snippets, legacy labels and rights holders are incentivized to extract “ransom” prices, as seen in the dispute between The Verve and ABKCO Records over the song “Bitter Sweet Symphony.” These labels and rights holders know that a global superstar’s entire rollout, including merchandise deals with Puma, film collaborations with Tim Burton, and worldwide tour dates, is at the mercy of a tiny audio fragment. This is an administrative nightmare that prioritizes legacy gatekeeping over modern market efficiency.

The “Absolute Property” Counterargument: Total Control vs. Cultural Ingredients

During the development of this analysis, a fundamental challenge arose: “If I own the rights to a theme as iconic as Star Wars, shouldn’t I have the absolute right to say no to anyone else using it?” This represents the strongest argument favoring the status quo. It is rooted in the “Moral Rights” tradition, the principle that creators should maintain complete control over how their “spiritual child” is presented to the world. Under this view, if A$AP Rocky wants to use someone else’s property, he must accept the owner’s rules, no matter how protracted the negotiation becomes.

However, this “absolute property” model ignores the unique way that music, and specifically sampling, functions as a conversation across time. When we treat a three-second audio fragment with the same legal weight as a full-length film or a symphony, we create an intellectual property “thicket” that makes new creation nearly impossible. A compulsory license wouldn’t constitute appropriation but rather would replace an absolute injunctive right with a remunerative right. Just as a homeowner can’t always prevent the city from building necessary infrastructure through their land, provided they are fairly compensated, the law should recognize that once a sound becomes a part of a genre, the original owner’s “veto power” should yield to a fair, standardized compensation system.

Market Failure in the Era of Perfect Enforcement

The problem has been exacerbated by the arrival of near-perfect enforcement technology. In the 1990s, artists could “flip,” pitch-shift, or bury samples so deep that they became unrecognizable to the human ear. Mobb Deep’s “Shook Ones Pt. II” (1995) remained one of hip-hop’s greatest mysteries for 16 years because the producer, Havoc, “buried” the sample so effectively that even the most dedicated crate-diggers couldn’t identify it until 2011. By 2026, however, AI-powered digital fingerprinting has become a ubiquitous “digital dragnet,” catching even the most transformed audio textures. This combination of zero-tolerance law and perfect detection technology has eliminated the “human” element of risk-taking that built early hip-hop.

When transaction costs for clearing a brief sound exceed the value of the sound itself, the market has failed. The manual process of tracking down every sample owner, who may be spread across different labels and estates, creates a barrier to entry that disproportionately affects independent creators. For every superstar like Rocky who can eventually afford a three-year delay, thousands of independent artists see their projects simply die in an inbox.

Conclusion: A Compulsory Sampling License to Safeguard Innovation

The solution lies in creating a “Compulsory Sampling License” similar to the existing framework for cover songs. The law should provide a tiered statutory rate for sound recording fragments based on the length of the sample and the degree of transformation. By creating standardized pricing for samples below a certain threshold, the law would eliminate years of manual negotiation and prevent the “veto power” from being used as an anti-competitive weapon.
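To make the proposal concrete, here is a minimal sketch of how a tiered statutory rate might be computed. Every bracket, dollar figure, and transformation category below is invented purely for illustration; nothing here reflects an enacted or proposed fee schedule.

```python
# Hypothetical sketch of a tiered compulsory sampling rate.
# All tiers, rates, and categories are invented for illustration only.

def statutory_sample_fee(sample_seconds: float, transformation: str) -> float:
    """Return a per-use license fee (in dollars) for a sound-recording
    fragment, scaled by length and degree of transformation."""
    # Length tiers: shorter fragments pay less (illustrative brackets).
    if sample_seconds <= 1.0:
        base = 50.0
    elif sample_seconds <= 5.0:
        base = 250.0
    elif sample_seconds <= 15.0:
        base = 1000.0
    else:
        # Longer borrowings would fall outside the compulsory scheme and
        # still require a negotiated license.
        raise ValueError("fragment exceeds the compulsory-license threshold")

    # Transformation discounts: heavily reworked audio pays a lower rate.
    multipliers = {"verbatim": 1.0, "pitched/chopped": 0.6, "unrecognizable": 0.25}
    return base * multipliers[transformation]
```

Under a scheme like this, a producer clearing a half-second, untouched drum hit would owe a small fixed fee rather than face an open-ended negotiation, while a long verbatim lift would still require the owner's consent.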

A$AP Rocky’s Don’t Be Dumb is a triumph of persistence, but its journey shows that our IP laws are currently built for protection at the expense of progress. By maintaining absolute veto over fragments, we are not just protecting property; we are stifling the next generation of masterpieces. It is time for the law to recognize that in a world where art is increasingly a “melting pot” of styles and sounds, a few seconds of audio should not be enough to stop the music.

Driving Change: Washington State Legislature Considers New Regulations for ALPRs

Photo by Valentin Ivantsov on Pexels.com

By: Anusha Nasrulai

The Washington state legislature is currently considering the Driver Privacy Act, a bill regulating automated license plate readers (ALPRs). ALPRs are cameras that capture license plate information of a vehicle along with location and time information. Currently, many agencies’ retention and sharing of ALPR data is subject only to internal policy, if any exists. The surveillance capabilities of ALPR systems have profound consequences, such as chilling the exercise of civil liberties and invading the privacy of vulnerable individuals, including immigrants or people who come to Washington to access reproductive or gender-affirming health care. The Driver Privacy Act presents the opportunity to raise the floor of privacy protections afforded by the Federal and Washington State Constitutions.

The Driver Privacy Act limits authorized uses of ALPRs to law enforcement, parking and toll enforcement, and transportation agencies. The current bill also sets warrant thresholds, a 21-day data retention period, and auditing requirements for agencies. Additionally, the bill places restrictions on accessing and sharing ALPR data and creates civil and criminal liability for violations under the Act. The bill specifically prohibits ALPR use around protected activities, including the exercise of First Amendment rights and access to healthcare.
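As a rough illustration of how the bill's 21-day retention period might operate in practice, the sketch below purges reads that have aged out of the window. The data model is invented, and the "legal hold" carve-out is an assumption added for illustration (e.g., data tied to a warrant-backed investigation), not a provision quoted from the bill.

```python
# Toy model of a 21-day ALPR retention rule. Field names and the
# legal-hold exception are illustrative assumptions, not bill text.
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)

def purge_expired(reads: list, now: datetime) -> list:
    """Keep only reads within the retention window or under a legal hold."""
    return [r for r in reads
            if now - r["captured_at"] <= RETENTION or r.get("legal_hold")]

now = datetime(2026, 2, 18)
reads = [
    {"plate": "ABC1234", "captured_at": datetime(2026, 2, 10)},  # 8 days old: kept
    {"plate": "XYZ0000", "captured_at": datetime(2026, 1, 1)},   # expired: purged
    {"plate": "QRS5555", "captured_at": datetime(2026, 1, 1),
     "legal_hold": True},                                        # expired but held
]
kept = purge_expired(reads, now)
```

The point of a hard statutory window like this is that deletion becomes the default, and retention beyond it requires an affirmative legal justification rather than agency discretion.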

Advocates are calling on lawmakers to strengthen the bill by further limiting the data retention period and prohibiting third party vendors and agencies from sharing ALPR data without a warrant.

Eyes on Washington

ALPRs have faced increased scrutiny in Washington state in the past year. In October 2025, the UW Center for Human Rights released a report exposing how immigration enforcement and other out-of-state law enforcement access data from ALPR systems operated by Washington agencies. This is despite the Keep Washington Working Act and Shield Law, which are intended to limit local law enforcement from assisting federal immigration enforcement and to “protect[] people in Washington from civil and criminal actions in other states that restrict or criminalize reproductive and gender-affirming care.” Many municipalities have halted their use or procurement of ALPRs, at least until the state passes guidance. Washington is now at a turning point: it can either implement guardrails that protect individuals’ privacy from undue government surveillance or pass legislation that sanctions expanded use of ALPRs across the state.

Flock Safety, a widely adopted ALPR vendor in Washington, has faced “growing controversy” for enabling ALPR data collected by public agencies to be accessed by federal agencies and other jurisdictions. As of this June, Washington has 80 cities, six counties, and three tribes using Flock cameras. While Flock Safety has received nationwide attention recently, there are other prominent ALPR vendors used in Washington, including Vigilant Solutions (Motorola) and Axon (formerly Taser).

How ALPR data is used, stored, and shared matters because these cameras capture more than license plate information. Law enforcement uses ALPRs during real-time investigations by checking a vehicle’s license plate information against a “hot list” of vehicles associated with an investigation or reported crime. The information collected by ALPRs can be cross-referenced with other law enforcement or public agency databases to identify the individuals to whom vehicles are registered. Law enforcement can also search historic ALPR data to track the direction, speed, and travel patterns of a vehicle. In aggregate, ALPR data can reveal sensitive information about the places an individual frequents and their travel patterns. Furthermore, ALPR photos can capture the likeness of drivers, passengers, and nearby surroundings.
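The two query patterns described above, the real-time "hot list" check and the historical travel-pattern search, can be sketched in a few lines. The data model, plate numbers, and locations here are invented for illustration and do not reflect any vendor's actual schema.

```python
# Illustrative sketch of the two ALPR query patterns: a real-time
# hot-list check and a historical query over retained reads.
from collections import namedtuple
from datetime import datetime

PlateRead = namedtuple("PlateRead", "plate timestamp location")

# Plates tied to active investigations (invented examples).
hot_list = {"ABC1234", "XYZ0000"}

reads = [
    PlateRead("ABC1234", datetime(2026, 2, 1, 8, 15), "5th & Pine"),
    PlateRead("QRS5555", datetime(2026, 2, 1, 8, 16), "5th & Pine"),
    PlateRead("ABC1234", datetime(2026, 2, 3, 8, 14), "5th & Pine"),
]

# Real-time check: flag any read whose plate appears on the hot list.
alerts = [r for r in reads if r.plate in hot_list]

# Historical query: pull every retained read for one plate, in time order.
# It is this aggregation step that can reveal a person's travel patterns.
history = sorted((r for r in reads if r.plate == "ABC1234"),
                 key=lambda r: r.timestamp)
```

Even this toy example shows why retention rules matter: the hot-list check needs only the current read, while the pattern-revealing query depends entirely on how long past reads are kept.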

Constitutional Concerns

The specific protections provided by the Driver Privacy Act are guided by Constitutional principles. The surveillance capabilities of ALPRs can have a “chilling effect” on the exercise of Constitutionally protected activities. Awareness of constant surveillance may alter or deter people from exercising their protected rights of expression, association, and religion. These concerns are valid given historic law enforcement surveillance of political rallies, protests, and places of worship.

ALPRs also implicate the Constitutional right to privacy. The Supreme Court has not required a warrant for law enforcement to collect and search license plate information because there is a diminished expectation of privacy, given the systematic regulation of vehicles and the fact that drivers’ movements take place on public roads. However, the Court has addressed, in United States v. Jones, Carpenter v. United States, and Kyllo v. United States, whether law enforcement’s use of emerging surveillance technologies infringes on Fourth Amendment protections.

The Supreme Court has found that a warrant is required before installing a GPS tracker or obtaining cell site location information (CSLI) to track an individual’s long-term movements. Justice Sotomayor wrote a concurring opinion in Jones highlighting how emerging technologies have enhanced law enforcement surveillance capabilities without physical intrusion: “GPS monitoring generates a precise, comprehensive record of a person’s public movements that reflects a wealth of detail about her familial, political, professional, religious, and sexual associations.” The aggregation of ALPR data to reveal historical travel patterns raises concerns similar to those articulated in the Jones concurrence. Similarly, the Court in Carpenter was concerned by how time-stamped CSLI provides an intimate view into a person’s life and is cheaper and more easily accessible than other surveillance strategies; the same can be said of retained, historical ALPR data.

The Supreme Court recognized in Kyllo that people’s Fourth Amendment protections should not be left to “the mercy of advancing technology.” Law enforcement use of sense-enhancing technology in the form of infrared scanners to collect information “that could not otherwise have been obtained without physical ‘intrusion into a constitutionally protected area,’” was found to be a search subject to Fourth Amendment protections. While cars have lesser Constitutional privacy protections than homes, modern ALPR systems with embedded AI also provide law enforcement with extra-sensory capabilities that may implicate the Fourth Amendment.

Federal courts have yet to conclude that police use of ALPRs violates Fourth Amendment search and seizure requirements. The Washington State Constitution, however, provides an affirmative right to privacy, enshrined in Article I, Section 7. Washington courts have interpreted this provision to create a higher standard for lawful searches and seizures. As of December 2024, 18 states already have ALPR laws, with more states considering ALPR legislation. Washington is now contemplating joining them in passing ALPR regulations.

Where the Bill Stands Now

The Driver Privacy Act has already passed in the Senate with amendments. The House Civil Rights & Judiciary Committee will hold a public hearing in consideration of the amended bill on February 18th.

The Driver Privacy Act would not only regulate the use of ALPRs in Washington state but also create meaningful privacy protections for all Washingtonians.

Reclaiming Urban Housing: A Case Study on Regulating Online Platforms

By: Matt Unutzer

Sky-high rents are a defining feature of modern urban life. Among the many forces blamed for rising housing costs, one issue has drawn sustained regulatory attention: the conversion of long-term housing into short-term rentals (STRs) listed on platforms such as Airbnb and VRBO. Critics argue that when apartments and homes are diverted into the short-term market, overall housing supply shrinks, placing upward pressure on rents and home prices. In response, cities across the country have spent the last decade experimenting with new regulatory frameworks aimed at curbing the perceived housing impacts of STR proliferation. The following sections examine how Washington D.C., Santa Monica, and New York City regulate short-term rentals, and, in doing so, illustrate the boundaries of regulating online platforms.

Washington D.C.’s Host-Liability Model

Most cities regulating short-term rentals utilize a common approach: placing compliance obligations on individual property owners and enforcing violations through traditional municipal oversight. Washington D.C. exemplifies this default model.

Washington, D.C.’s short-term rental law requires hosts to register with the city and obtain a short-term rental license before offering a unit for rent. Hosts are generally limited to operating a single short-term rental associated with their primary residence. Operating without a license or offering an unregistered unit may result in civil penalties or license suspension.

Enforcement authority rests with the city’s Department of Consumer and Regulatory Affairs, which investigates violations through complaints, audits, and reviews of booking activity. The city bears responsibility for identifying noncompliant listings and linking them to individual hosts; penalties are then imposed directly on the hosts who violate the law.

This regulatory model imposes limited duties on booking platforms. Platforms are not required to independently verify license status before allowing a listing to appear; further, these booking services may only be fined for processing a booking when the city has already identified the underlying listing as non-compliant and sent the platform notice. Platforms are required to submit periodic reports to the city identifying short-term rental transactions and associated host identity information to aid the city in identifying unlicensed STRs.

This host-based enforcement model places significant administrative demands on the city’s enforcement entity, requiring the city to identify noncompliant listings, trace them to individual operators, and pursue penalties. Furthermore, because unlawful listings may remain active until discovered, this approach does not guarantee the reduction in short-term rental activity that the regulatory framework seeks to achieve.

Santa Monica’s Platform-Liability Model

In response to the administrative burdens and enforcement limitations associated with a traditional host-based enforcement model, some cities have adopted regulatory frameworks that shift liability for unlicensed STR bookings upstream to the platforms themselves. Santa Monica represents one of the clearest examples of this model.

Santa Monica’s short-term rental ordinance requires hosts to obtain a city-issued license before offering a short-term rental and provides for a municipal registry of all licensed STR hosts. The ordinance makes it unlawful for a booking platform to complete a short-term rental transaction for any host that does not appear on the City’s registry, attaching civil fines for each such transaction.

In contrast to a host-based enforcement model, this regulatory framework has proved successful in realizing desired STR reductions. However, the imposition of fines on the platforms themselves poses the question of how far municipalities may go in regulating the online platforms which operate in their communities.

That question was addressed in HomeAway.com, Inc. v. City of Santa Monica, where the short-term rental platforms Airbnb and HomeAway.com challenged the ordinance, claiming immunity from its fines under Section 230(c)(1) of the Communications Decency Act. Section 230(c)(1) grants an online platform immunity for content it hosts that was posted by third parties. In so doing, it draws a line between the platform itself and the third-party “publisher or speaker” of the content. In the platforms’ view, Santa Monica’s ordinance effectively imposed platform liability for the third-party listing content hosted on the platform.

The Ninth Circuit rejected this argument, holding that the ordinance did not impose liability for publishing or failing to remove third-party content, but instead regulated the platforms’ own commercial conduct by imposing fines when the platforms completed booking transactions for short-term rentals of unregistered properties.

While the courts have upheld Santa Monica’s use of platform liability as a lawful enforcement mechanism, the platform-liability model does not substantially reduce the administrative burden borne by the city. Enforcement still requires the city to identify individual non-compliant transactions and pursue penalties against the platforms that facilitated them.

New York City’s Affirmative Duty to Verify Model

The most aggressive iteration of STR regulation is found in New York City’s Local Law 18. Enacted on January 9, 2022, Local Law 18 establishes an automated STR registration verification system. First, an STR host must register with the city, which assigns the host an STR registration number. Second, the ordinance provides for an electronic verification portal: before processing a booking, a platform must submit the prospective host’s STR registration number and receive a confirmation code. The ordinance also imposes a mandatory reporting requirement, directing STR platforms to submit an inventory of all STR transactions completed each month and to certify that they received a confirmation code from the city’s verification portal prior to each booking.
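For readers who think in systems terms, the two-step workflow described above can be sketched as a minimal simulation. This is purely illustrative: the class and method names (CityVerificationPortal, process_booking, and so on) are hypothetical, and the city’s actual portal interface is not modeled here. The sketch only captures the ordinance’s logic that no booking proceeds without a confirmation code, and that each completed transaction is recorded for the monthly report.

```python
# Illustrative sketch of the Local Law 18 verification workflow.
# All names below are hypothetical; they do not reflect any real API.

from dataclasses import dataclass, field
from typing import Optional
import uuid


@dataclass
class CityVerificationPortal:
    """Stand-in for the city's electronic verification portal."""
    registry: set  # registration numbers of licensed hosts

    def confirm_registration(self, registration_number: str) -> Optional[str]:
        # Issue a confirmation code only if the host appears in the registry.
        if registration_number in self.registry:
            return f"CONF-{uuid.uuid4().hex[:8]}"
        return None


@dataclass
class Platform:
    portal: CityVerificationPortal
    monthly_report: list = field(default_factory=list)

    def process_booking(self, host_registration_number: str, listing_id: str) -> bool:
        # The mandated verification step: no confirmation code, no booking.
        code = self.portal.confirm_registration(host_registration_number)
        if code is None:
            return False  # booking blocked before any transaction occurs
        # Mandatory reporting: log the transaction and its confirmation code
        # for inclusion in the monthly inventory submitted to the city.
        self.monthly_report.append(
            {"listing": listing_id,
             "host": host_registration_number,
             "confirmation": code}
        )
        return True
```

The structural point the sketch makes is the one the ordinance relies on: compliance is enforced before the transaction completes, so unlicensed bookings are prevented automatically rather than discovered and penalized after the fact.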

This innovative regulatory framework automates compliance, helping to ensure the desired reduction in STRs while minimizing the administrative burden of enforcement. However, the verification-based model has not yet been directly evaluated under Section 230. Curiously, Airbnb has not challenged the law under Section 230; it has instead largely complied with the regulatory regime, focusing its efforts on lobbying. Perhaps the platform has “read the tea leaves” of past lawsuits, such as the Santa Monica suit discussed above, and determined that when liability is tied to a commercial transaction, platforms cannot claim Section 230 immunity.

There are, however, material differences between the two frameworks. In Santa Monica, liability attaches when a platform completes a booking for a host who is not registered in the City’s STR registry. In New York City, by contrast, liability attaches when the platform fails to perform the mandated verification step prior to the booking, regardless of the host’s registration status. It remains an open question whether this structural shift, which ties liability to a platform’s screening process rather than to underlying host noncompliance, moves closer to treating platforms as “publishers” in a manner that implicates Section 230 immunity.

Conclusion

The ultimate impact of short-term rentals on local housing supply remains unsettled. What is clear, however, is that cities across the country are responding to growing concerns about the effects of STR platforms like Airbnb on housing supply. The result is an ongoing, nationwide case study on how local governments can regulate both short-term rentals and the online platforms that facilitate them. As municipalities continue to experiment with regulatory regimes, the legal boundaries emerging from these efforts may influence the future of platform regulation far beyond the housing context.

#ShortTermRentals #HousingPolicy #PlatformRegulation