Stratton Oakmont v. Prodigy Services: The Case that Spawned Section 230

By: Mark Stepanyuk

The United States led the world in internet usage throughout the 1990s, and “[a]t the time of the Dot-com-crash less than 7% of the world was online.” Traversing this previously uncharted territory en masse necessitated new rules to govern the new frontier. Naturally, those rules emerged to conform with existing legal standards. Wrapped up in this context is the story of how the firm started by “The Wolf of Wall Street,” Jordan Belfort, had a hand in bringing about arguably the most influential legal rule shaping the internet to this day. 

Enter Stratton Oakmont v. Prodigy Services

Jordan Belfort founded Stratton Oakmont in 1986 as a brokerage firm specializing in trading “over-the-counter” securities. The world became familiar with this story when Leonardo DiCaprio portrayed a lecherous and drug-addled Belfort in the 2013 Academy Award-nominated film The Wolf of Wall Street.

Prodigy Services was an early online service network that provided its subscribers access to various information services, such as bulletin boards where third parties exchanged information. In the early-to-mid-1990s, Prodigy was considered one of the major information service providers, alongside CompuServe.

Prodigy, unlike CompuServe, had “held itself out” as exercising editorial control over the content of its computer bulletin boards. One of Prodigy’s bulletin boards was called Money Talk, a popular forum where members would post and discuss financial matters. Prodigy contracted with Board Leaders (or moderators or mods in today’s parlance) to, among other things, oversee and participate in board discussions.

On October 23 and 25, 1994, an unidentified individual posted to the Money Talk bulletin board claiming that Stratton Oakmont had committed criminal and fraudulent acts in connection with an IPO that it was involved in. The anonymous poster made statements claiming that the offering was “major criminal fraud” and “100% criminal fraud.” The individual also posted that Stratton Oakmont was a “cult of brokers who either lie for a living or get fired.” 

Stratton Oakmont and Daniel Porush—the individual that Jonah Hill’s character in The Wolf of Wall Street film was loosely based on—filed suit against Prodigy in the New York Supreme Court, the state trial court, alleging libel, among other things.

On a partial summary judgment motion brought by Stratton, the court considered Prodigy’s own statements and went through the classic libel analysis to determine whether Prodigy was a “publisher” or a “distributor.” If Prodigy were deemed a publisher, it would be treated as if it had posted the allegedly libelous statements itself. (Those statements, by the way, later turned out to be true.)

The court concluded that Prodigy was indeed a “publisher,” reasoning that Prodigy “held itself out to the public and its members as controlling the content of [Money Talk] …,” and, by contracting with the mods, “actively utiliz[ed] technology and manpower to delete notes from its computer bulletin boards on the basis of offensiveness and ‘bad taste[.]’” 

The court distinguished this holding from a 1991 case involving CompuServe. There, the United States District Court for the Southern District of New York dismissed a libel case on the basis that CompuServe was a “distributor” (liable only if it knew or had reason to know of the libel). Unlike Prodigy, CompuServe did not review any content before it was posted to its bulletin boards. The court reasoned that, without knowledge of the libel, CompuServe could not be held liable. 

Legislative Reaction to the Stratton Oakmont Case

Some legislators thought the results in Stratton Oakmont and the CompuServe case were backwards. Chris Cox (R-CA) stated that “[t]he perverse incentive this case created was clear: any provider of interactive computer services should avoid even modest efforts to moderate the content on its site.” After seeing a Wall Street Journal article about the case, Cox reached out to Ron Wyden (D-OR) to work on the bill that would later become Section 230 in an effort to address these “perverse incentives.” This effort initially culminated in the Internet Freedom and Family Empowerment Act. The bill was enacted as part of the Communications Decency Act (CDA), but when the rest of the CDA was struck down on First Amendment grounds, Section 230 survived.

What does Stratton Oakmont Teach Us About Section 230 today?

Section 230 was passed largely to address those “perverse incentives” regarding moderation by online service providers. In 1990, Prodigy’s Director of Market Programs and Communications stated that “[Prodigy] make[s] no apology for pursuing a value system that reflects the culture of millions of American families we aspire to serve.” In the same NYT article, “social responsibility” was given as a reason to exercise editorial discretion. Does that sound familiar? These recurring themes have led experts to opine that the current discourse about Section 230 is a bit phony: that it is really a proxy for a conversation about the First Amendment. The legal differences between a publisher and a distributor are First Amendment distinctions, and since the enactment of Section 230, “that’s not really been an issue for the internet.” So, functionally, those underlying First Amendment issues have mattered less in light of Section 230.

In the United States, we are still figuring out the rules of this relatively new frontier. Some argue that Section 230 helped make the digital economy what it is in the United States. Globally, the United States ranks third in total number of internet users, with around 250 million, behind China (over 750 million) and India (over 390 million). Though here in the U.S. we will continue to arbitrate what speech should and should not be protected in light of the First Amendment, how we approach an equilibrium will likely be a function of global influence and time. The internet rules of the future are certain to be shaped by technology (even newer frontiers) and the continued influence of globalization (i.e., different value systems, standards, and interpretations). 

Is the Art Market Ready to Change its Ways?

By: Gracie Loesser

With global sales exceeding $64 billion in 2019, the market for arts and antiquities has reached staggering heights. The state of the art market is perhaps best exemplified by the 2017 record-breaking sale of Leonardo da Vinci’s Salvator Mundi painting. Purchased at a Christie’s auction for $450 million, the work remains the most expensive painting ever sold. However, when you start to examine the details of the transaction, cracks in the art world’s shiny veneer become evident. The painting lacked a reliable provenance, or record of prior ownership, before the sale occurred. Since the sale, computer scientists, art historians, and museum experts have come forward with evidence suggesting da Vinci was not the sole artist of the work, further compromising the painting’s value. More worryingly, no one knows where the work is currently located. The official buyer was actually an intermediary for the true owner, the Crown Prince of Saudi Arabia. Despite assurances that the work would ultimately be displayed in the Louvre Abu Dhabi, the museum never received the painting. The work’s whereabouts became a source of rumor, with some reports suggesting it was in storage in Geneva and other reports stating it was on the Crown Prince’s luxury yacht.

This chaos is indicative of the international art market in general, which has become a popular tool for the uber-wealthy looking for a way to shield assets and for powerful criminals to circumvent the law. As evident in the Salvator Mundi sale, the art market has historically welcomed buyer and seller anonymity. The trade of antiquities also has a culture of anonymity that facilitates illegal activities; secretive sales with unidentified parties have made it increasingly difficult for authorities to monitor and prevent the trade of looted and illegally acquired artifacts.

However, that culture of anonymity may soon be a thing of the past. In 2021, Congress passed the Anti-Money Laundering Act, placing a range of new stringent requirements on U.S. antiquities market participants. The Act adds antiquities dealers and advisors to the list of entities regulated by the Bank Secrecy Act, a statute designed to combat financial crimes by requiring certain kinds of organizations, such as banks, real estate companies, and pawnbrokers, to implement strict internal controls and to notify law enforcement of any suspicious conduct. Regulations vary by type of organization but generally require entities to verify the identities of anyone they do business with. The Financial Crimes Enforcement Network is still in the process of developing industry-specific rules for the antiquities market. Whatever they may be, they will certainly have a significant impact. 

Notably, Congress did not include art dealers and advisors in their recent legislation. Those in the art market remained understandably concerned, as evidence suggested that Congress planned to increase regulation of art sales in the future. A recent damning Senate report on the art market highlighted how the trade of high-value art undermined U.S. foreign and domestic policy. The report listed several recommendations, including adding art dealers to the institutions regulated under the Bank Secrecy Act and putting increased pressure on auction houses and art dealers to verify customers. In response to the report, the Department of the Treasury’s Office of Foreign Assets Control (OFAC) issued an advisory on high-value artwork, which stated that any individual involved in the art trade must be careful about who they do business with or face civil penalties; specifically, the advisory warned against doing business with designated terrorists or agents of any country subject to U.S. sanctions. 

Despite this evidence of growing political interest in regulating the art market, the Treasury Department released a report just a few days ago that will cause art dealers and advisors to breathe a sigh of relief. In response to the Senate’s formal request for a study investigating illegality in the art market and recommending next steps for regulation, the Treasury has now stated that it does not believe the art market requires immediate regulation. The report identifies many issues with the market but suggests Congress could institute regulatory measures at some point in the future.

The Treasury report suggests that the art market will not undergo a major overhaul in the near future. However, U.S. art dealers, advisors, and market participants would be wise to start preparing to amend their organizational processes. Although much is still unknown about what regulations might be adopted, some resources should provide helpful guidance. Given the similarities between the industries, the forthcoming regulations adopted for the antiquities market will provide insight into what kinds of regulations can be expected for the art trade. Additionally, organizations such as the Responsible Art Market initiative have developed guidelines for art market participants who wish to combat money laundering and other illicit activity voluntarily. Finally, one can look at recent anti-money laundering and illegal importation laws enacted in the European Union requiring compliance from the art market.

Indeed, if art dealers and other players in the world of high-value art sales want to avoid government regulation in the future, it is in their own self-interest to combat illicit activity by removing the shroud of secrecy from their transactions. 

“Age Ain’t Nothing but a Number”: the Difficulties of Age Verification Online

By: Janae Camacho

Over 20 years ago, Xanga was founded as a site for sharing music and book reviews. Looking back, it was one of the earliest social media platforms. In the early days of the mainstream internet, the closest thing to a social media experience was logging on to Xanga and creating a blog. Today, many young people cannot recall a day when social media and the internet were not around. The internet and social media platforms have become permanent fixtures in our everyday lives; it is almost impossible to find someone not plugged into one of many social media apps, such as TikTok, Facebook, and Instagram. While we have all become accustomed to using these apps and their access to our information, the effects of social media on children, and social media companies’ access to children’s personal data, have become an increasing concern of parents and legislators alike. 

Various groups have made calls to action for more protection of children and their data. These calls to action have become even more apparent as these large companies seek to expand their customer base further, focusing on their most significant base, teens, and even looking to include younger children. Legislators like Rep. Kathy Castor (D-FL) have suggested further protections for children above the age of 13. Rep. Castor introduced the Protecting the Information of our Vulnerable Children and Youth Act (Kids PRIVCY Act) on July 29, 2021. The Act would, among other things, extend protections to teens ages 13 through 17, require express consent to collect their personal information, and ban advertisements directed at children.

While the measures in the Kids PRIVCY Act may help limit social media’s adverse effects on children, it is unlikely that increasing protections for those over 13 years old will prevent those harms from occurring. The complexity of enforcing such age verification while maintaining minors’ privacy presents a significant barrier to implementation.

What is the Children’s Online Privacy Protection Act (COPPA)?

In the 1990s, due to the ease of gathering private data from children, the public pressured Congress to protect children’s data through legislation. In response, Congress passed the Children’s Online Privacy Protection Act. COPPA was written in 1998, on the cusp of the internet’s surge into widespread use, to prevent online platforms from collecting and using the personal data of children under the age of 13 for tracking and ad-targeting purposes. The rule applies to operators of commercial websites and online services directed at children under 13, as well as operators of general audience websites or online services with “actual knowledge” that they are collecting, using, or disclosing personal information from children under 13. 

What is the Protecting the Information of our Vulnerable Children and Youth Act (Kids PRIVCY Act)?

The Protecting the Information of our Vulnerable Children and Youth (PRIVCY) Act is a bill that Representative Kathy Castor (D-FL) proposed to update COPPA to meet the increased need for protection amid an internet far more invasive than that of the 1990s. The proposed legislation seeks to, among other things, (1) extend coverage to teens ages 13 to 17; (2) prohibit the collection of personal information from 13-to-15-year-olds without their express consent; (3) change COPPA’s “actual knowledge” standard to a “constructive knowledge” standard; (4) provide a right of action to children’s parents; and (5) ban targeted advertising directed at children. The bill’s drafters have included many of the critical elements of the UK’s Age Appropriate Design Code, with the aim that companies will place the best interests of our youth over profits.

The Difficulties of Age Verification

Many companies have found it challenging to comply with the COPPA age restriction standard due to the lack of adequate, verifiable ways to identify the age of the person creating the account. Companies have used age-verification methods, such as requiring users to input their birth date, which is ultimately ineffective because the date is taken at face value and not further verified by the company. Without actual verification, many children under the age of 13 can bypass the age restriction by submitting a birth date that makes them at least 16 years old. According to a 2021 study conducted by the Irish research center Lero, children under the age of 13 could bypass many websites that use birthday-based age screening. The study also found that while stricter rules might help protect minors’ privacy, children were able to bypass even the more stringent age verification techniques. 
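The flaw is structural: a date-of-birth form verifies only arithmetic, not identity. A minimal sketch illustrates why (this is a hypothetical example, not any platform’s actual implementation; the function name and fixed “today” date are assumptions for demonstration):

```python
from datetime import date

def naive_age_gate(birth_date: date, minimum_age: int = 13) -> bool:
    """Naive birthday-based age screen: trusts whatever date the user types."""
    today = date(2022, 2, 1)  # fixed "today" so the example is deterministic
    # Standard age arithmetic: subtract years, then subtract one more
    # if this year's birthday hasn't happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age >= minimum_age

# A child who enters a truthful birth date is blocked...
print(naive_age_gate(date(2012, 6, 1)))   # False
# ...but nothing stops the same child from typing an earlier year.
print(naive_age_gate(date(2005, 6, 1)))   # True
```

The gate is mathematically correct yet trivially defeated, which is exactly the gap the Lero study documents: the check validates the claimed date, not the person making the claim.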

Although there are other ways that companies can determine whether users are under 13 years old, such as using “classifiers” to help predict age and requiring suspected young users to submit proof of their age, concern remains over whether increasing the age range subject to children’s privacy protections would further prevent the misuse of minors’ data. Given the difficulty of effectively verifying children’s ages, some tech companies, such as Meta and its subsidiary Instagram, have suggested possible remedies for further verification. In a December 8, 2021 committee hearing, Instagram head Adam Mosseri indicated that parents could provide their child’s age by speaking to the phone through the app, or could voluntarily enter the child’s age into the cell phone itself so that it could be accessed directly by all apps on that device. Additionally, Meta has confirmed that it is looking into working with other tech companies to share information in “privacy-preserving ways” to assist in determining a user’s age. These solutions may raise privacy concerns of their own, especially if voiceprints are used to verify the identity of the person providing the child’s age. And even if these remedies work, it is unclear how exactly these companies would verify that the person providing the child’s age is actually a parent.

While further regulation of these companies is needed to protect minors from the influences of social media and behavioral advertising, there is no clear way to implement such protections effectively. As we have seen through the current difficulty in enforcing COPPA age limits, children can continue to circumvent such measures with or without an age limit increase. Furthermore, any attempt to verify the ages of minor users effectively would delve further into their personal information. The privacy concerns that arise from those measures, such as third-party sharing of information, personal data security, and the use of biometric data without consent, could outweigh the perceived protection of an increased age requirement. Due to the challenges with age verification, legislators now have the task of finding a way to protect children and teens from the harms of social media and the internet. In doing so, they must balance the need for accurate, verifiable age-gating against privacy concerns, minimizing the data that these companies collect from children.

Lawmakers Set Their Sights on Restricting Targeted Advertising

By: Laura Ames

Anyone who spends time online has encountered “surveillance advertising.” You enter something into your search engine, and immediately encounter ads for related products on other sites. Targeted advertising shows individual consumers certain ads based on inferences drawn from their interests, demographics, or other characteristics. This notion itself might not seem particularly harmful, but these data are accrued by tracking users’ activities online. Ad tech companies identify the internet-connected devices that consumers use to search, make purchases, use social media, watch videos, and otherwise interact with the digital world. Such companies then compile these data into user profiles, match the profiles with ads, and then place the ads where consumers will view them. In addition to basic privacy concerns, the Consumer Federation of America (CFA) points to the potential for companies to hide personalized pricing from consumers or to promote unhealthy products and perpetuate fraud. Perhaps the largest concern is that the large stores of personal data that these companies maintain put consumers at risk of having their privacy invaded, identity theft, and malicious tracking.   

In response to these concerns, Democratic lawmakers unveiled the Banning Surveillance Advertising Act (BSSA) in an attempt to restrict the practice, reflecting the sponsors’ view that surveillance advertising is a threat to individual users as well as society at large. This move prompted opponents to argue that the BSSA is overly broad and will harm users, small businesses, and large tech companies alike.

What Does the BSSA Do? 

The BSSA is sponsored by Senator Cory Booker and Representatives Jan Schakowsky and Anna Eshoo. The bill bars digital advertisers from targeting their ads to users and also prohibits advertisers from targeting ads based on protected information like race, gender, religion, or other personal data purchased from data brokers. According to Senator Booker, surveillance advertising is “a predatory and invasive practice,” and the resulting hoarding of data not only “abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”

The BSSA is broad, but it does provide several exceptions. Notably, it allows location-based targeting and contextual advertising, which occurs when companies match ads to the content of a particular site. The bill delegates enforcement power to the FTC and state attorneys general. It also allows private citizens to bring civil actions against companies that violate the ban, with monetary penalties up to $1,000 for negligent violations and up to $5,000 for “reckless, knowing, willful, or intentional” violations. The BSSA has support from many public organizations and a number of professors and academics. Among several tech companies supporting the BSSA is the privacy-focused search engine DuckDuckGo. Its CEO, Gabriel Weinberg, opined that targeted ads are “dangerous to society” and pointed to DuckDuckGo as evidence that “you can run a successful and profitable ad-based business without building profiles on people.” 

The BSSA as Part of a Larger Legislative Agenda 

The BSSA is just one bill among a number of pieces of legislation aiming to restrict the power of large tech companies. Lawmakers have grown increasingly focused on bills regulating social media companies since Facebook whistleblower Frances Haugen testified before Congress in 2021. These bills target a wide variety of topics including antitrust, privacy, child protection, misinformation, and cryptocurrency regulation. Most of these bills appear to be long shots, however: although the Biden administration supports tech industry reform, many other issues rank higher on its agenda. Despite this hurdle, lawmakers are currently making a concerted push with these tech bills because the legislature’s attention will soon turn to the 2022 midterms. Additionally, Democrats, who have broader support for tech regulations, worry they could lose control of Congress. Senator Amy Klobuchar argued that once fall comes, “it will be very difficult to get things done because everything is about the election.” 

Tech and Marketing Companies Push Back

In general, tech companies tend to argue that targeted advertising benefits consumers and businesses alike. First, companies argue that this method allows users to see ads that are directly relevant to their needs or interests. Experts counter that, in order to provide these relevant ads, tech companies must collect and store a great deal of data on users, which puts that data at risk of interference by third parties. Companies also argue that this legislation would drastically change their business models. Marketing and global media platform The Drum predicted that the BSSA “could have a massive impact on the ad industry as well as harm small businesses.” The Interactive Advertising Bureau (IAB), which includes over 700 brands, agencies, media firms, and tech companies, issued a statement strongly condemning the BSSA. IAB CEO David Cohen argued that the BSSA would “effectively eliminate internet advertising… jeopardizing an estimated 17 million jobs primarily at small- and medium-sized businesses.” The IAB and others argue that targeted advertising is a cost-effective way to precisely advertise to particular users. However, the CFA points to evidence that contextual advertising, which is allowed under the BSSA, is more cost-effective for advertisers and provides greater revenue for publishers. 

Likelihood of the BSSA’s Success

In the past several years, there has been growing bipartisan support for bills addressing the increasing power of tech companies. This support would seem to suggest that these pieces of tech legislation have a better chance of advancing than other more controversial legislation. However, even with this broader support, dozens of bills addressing tech industry power have failed recently, leaving America behind a number of other countries in this area. One of the major problems impeding bipartisan progress is that while both parties tend to agree that Congress needs to address the tremendous power that tech companies have, they do not align on the methods the government should use to address the problem. For example, Democrats have called for measures that would compel companies to remove misinformation and other harmful content while Republicans are largely concerned with laws barring companies from censoring or removing content. According to Rebecca Allensworth, a professor at Vanderbilt Law School, the larger issue is that ultimately, “regulation is regulation, so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown of Big Tech.” Given Congress’ recent track record in moving major pieces of legislation, and powerful opposition from the ad tech industry, the BSSA might be abandoned along with other recent technology legislation.  

What Doesn’t Kill Section 230 Makes it Stronger

By: Marissa Train

Section 230 of the Communications Decency Act, the federal law providing social media platforms with immunity from liability for user-generated content, has recently faced objections from politicians on both sides of the aisle. Both parties’ issues with the law largely stem from the protection it offers under 230(c), which gives platforms leeway to maintain their own content moderation policies. Democrats largely view those policies as too permissive, allowing misinformation to run wild, while Republicans often view the same policies as too restrictive, ‘censoring’ conservative speakers and content.

While many federal proposals to change Section 230 have been introduced, only FOSTA-SESTA, an attempt to stop online sex trafficking, became law. Instead, most of the legislative action has been at the state level, particularly in conservative states.

Florida Goes First

In May 2021, Florida Governor Ron DeSantis signed a bill prohibiting social media platforms from suspending political candidates before elections and allowing all users to bring lawsuits against companies if they believe the companies’ content moderation is applied inconsistently. However, when the bill was signed, Eric Goldman, a professor at Santa Clara University Law School, stated that he “see[s] this bill as purely performative, [that] was never designed to be law but simply to send a message to voters.”  

Goldman’s belief that the law would be found unconstitutional was realized in the form of an injunction issued one day before the law came into effect. The Computer and Communications Industry Association (CCIA) and NetChoice, representing Facebook, YouTube, Twitter, and others, had filed suit in Florida … and won. 

In the order issuing a preliminary injunction, Judge Robert Hinkle stated “[t]he legislation now at issue was an effort to rein in social-media providers deemed too large and too liberal. Balancing the exchange of ideas among private speakers is not a legitimate governmental interest.” In short, Judge Hinkle held that the Florida law violates the First Amendment and that it was preempted in large part by Section 230.

Florida appealed the court’s ruling late last year, so now we must wait to see how the Eleventh Circuit rules in the appeal. 

Texas Follows Suit

In March 2021, H.B. 20, also known as the Freedom from Censorship Act, was first introduced in the Texas Legislature. It is widely understood that the bill was a reaction to President Trump’s suspension from every major social media platform after the attack on the U.S. Capitol. Some Texas lawmakers, including Governor Abbott, viewed the suspensions as a direct assault on the sharing of conservative views online. Governor Abbott tweeted such sentiments many times: “Silencing conservative views is un-American, it’s un-Texan, and it’s about to be illegal in Texas.”  

The Texas law essentially prohibits social media platforms with more than 50 million active users from banning users based on political views alone. The law also requires these platforms to create complaint systems for users to appeal removal of their content, and allows Texas residents to file suit against the company if they believe they were wrongfully banned. The bill was enacted in September, and was set to go into effect on December 2. 

The CCIA and NetChoice, the same parties that opposed the Florida law, co-filed a suit against the Freedom from Censorship Act just two weeks after it was enacted. In their complaint, the CCIA and NetChoice allege the law would hamper a social media platform’s ability to stop the spread of hate speech and misinformation, which the groups claim is a violation of the companies’ First Amendment rights.

The complaint states “[a]t a minimum, H.B. 20 would unconstitutionally require platforms like YouTube and Facebook to disseminate, for example, pro-Nazi speech, terrorist propaganda, foreign government disinformation, and medical misinformation.” The complaint details that, “legislators rejected amendments that would explicitly allow platforms to exclude vaccine misinformation, terrorist content, and Holocaust denial.”

A federal district court heard arguments on the law’s constitutionality on November 29 and, on December 1, issued a preliminary injunction blocking enforcement of every provision the plaintiffs challenged. Unlike in the Florida case, the district court did not reach the question of whether H.B. 20 is preempted by Section 230, choosing instead to enjoin the law entirely on First Amendment grounds.

What’s Next For Section 230?

Section 230 is already facing new challenges from state lawmakers, this time from the left side of the aisle. Democrats in New York state introduced New York S. 7568, a bill that attempts to incentivize platforms like Facebook and YouTube to not amplify certain third-party content. It would make platforms liable for content that leads to ‘imminent lawless action’ or ‘self-harm,’ and for information that is ‘false’ and ‘likely to endanger’ public health when any of that content is promoted by an algorithm. 

While, if passed, this law would easily be found unconstitutional on First Amendment grounds and preempted by Section 230 like the Florida and Texas laws, it signifies the Democrats entering the largely performative arena of state content moderation legislation. 

It seems that what’s next for Section 230 might just be more of the same: performative gestures by state lawmakers in the absence of any further guidance from Congress about a path forward.