Skepticism:  Should “The Nine Greatest Experts on the Internet” be taking more social media cases?

By: Kevin Vu

Last year, in one of the oral arguments for cases about Section 230 of the Communications Decency Act (Section 230), Justice Kagan opined that the United States Supreme Court is not composed of “the nine greatest experts on the internet.”  Despite that observation, and despite eventually siding with the government in those Section 230 cases, the Court granted certiorari in four cases this year that again concern the regulation of social media.  Two of those cases, Moody, et al. v. NetChoice, LLC, et al. and NetChoice, LLC et al. v. Paxton, concern newly passed state laws in Florida and Texas that purport to regulate social media companies.  Additionally, the Court recently heard oral arguments in two other cases, O’Connor-Ratcliff v. Garnier and Lindke v. Freed, concerning whether elected officials can block members of the public on social media.  These cases, together with Justice Kagan’s observation, raise the question of why the Court would be inclined to answer questions about the internet and social media companies, despite being self-admitted non-experts. 

  1. The current cases.

To provide some background, the Court is considering four cases this term.  The NetChoice cases concern whether Florida and Texas laws can prohibit social media companies from censoring individuals or from refusing to give a platform to political candidates.  The two laws differ slightly: the Florida law would require social media platforms to provide a rationale for removing content or censoring individuals on their platforms, while the Texas law would bar large platforms, like Facebook, from engaging in “viewpoint discrimination.”  The Court is considering whether those state laws violate the companies’ First Amendment rights.  

The other two cases, O’Connor-Ratcliff and Lindke, concern whether an elected official blocking users on social media violates those users’ First Amendment rights.  A version of this question has reached the Court before.  Then-President Trump blocked users on his Twitter account, and the Court dismissed the resulting case as moot after he lost the 2020 presidential election.  

  2. Possible reasons the Court is considering social media cases.

Several reasons could explain the Court’s decision to take the NetChoice and elected officials cases.  First, unlike the Section 230 cases, the cases this term implicate First Amendment rights.  The Section 230 cases concerned whether the plaintiffs could hold social media companies liable for content users posted on their websites, and did not present any such First Amendment issues.  Because well-established case law and principles guide First Amendment questions, the Court may be more willing to wade into how its precedent affects these giant social media companies, especially as the other branches of government have failed to address them.  

As social media websites continue to be leveraged to spread misinformation, and as distrust in those platforms grows, the Court could be seeing an opportunity to weigh in on questions the other branches of the federal government have failed to address.  Despite bipartisan efforts to introduce bills regulating social media, those efforts have languished, especially as uncertainty looms over whether government shutdowns are imminent.  But Congress’s inaction is not the only sign that the Court wants to consider cases regarding social media companies and the Internet.  This past month, the Court lifted a lower court’s restriction on the Biden administration.  The administration had alerted social media companies to content that violated the companies’ policies, and several state Attorneys General and social media users sued, arguing that the administration suppressed disfavored political speech, such as claims of election fraud and information about the COVID-19 pandemic.  In allowing the administration to contact social media companies in that way, the Court could be signaling its interest in cases involving those companies.  

Finally, the Court’s public rationale for granting review weighs in favor of taking these social media cases.  The Court ordinarily grants review only in cases that could have “national significance, might harmonize conflicting decisions in the federal Circuit courts,” or that “could have precedential value.”  These social media cases generally meet all three criteria.  First, these are the kinds of cases that have national significance:  Can states regulate the speech of giant social media companies?  If states can regulate these national and international companies, what would happen if separate states imposed different restrictions or requirements on them?  And can elected officials restrict a user’s access to their social media accounts?  Those issues will have a profound impact on how social media companies regulate their multimillion-user platforms.  

Second, with regard to the NetChoice cases, there are two conflicting decisions in the federal courts.  The Eleventh Circuit struck down the Florida law, while the Fifth Circuit upheld the Texas law.  Because those laws are similar enough, the Court ultimately needs to resolve the NetChoice cases to determine whether a state can instruct social media companies on how to regulate, or not regulate, their content and users.  

And third, these cases are likely to have a profound effect on state policy and legislative decisions with nationwide consequences.  For example, last year the Court held that California could forbid the sale of pork produced in a cruel manner.  That pork case has implications similar to those of the NetChoice cases: whether a state can essentially regulate an entire industry.  All of these cases will also provide precedential value:  the Court has been presented with questions of first impression as to how the First Amendment applies in various social media contexts.  The Court’s decisions will shape how social media companies are run. 

Ultimately, these are the kinds of cases and questions that the Court must answer for prudential reasons.  As the public grows skeptical of social media companies, decisive action needs to be taken.  The lack of action from the other branches of the federal government, along with the actions taken by state governments in some of these cases, presents the following question: which branch of government should be taking action?

From Gladiators to Swifties: Regulating Ticket Resales

By: Harley Salter

Back when Shakespeare was putting on plays at his newly constructed Globe, there were affordable tickets for everyday people, who were commonly referred to as “groundlings” because the cheapest tickets put them at ground level. Even two thousand years ago, when the Colosseum opened its doors, everyone in Rome, including women, peasants, and enslaved Romans, was invited for free to watch the gladiator battles. Despite the much larger entertainment industry of the modern day, many people cannot afford tickets, if they can even access them. 

On November 15, 2022, Taylor Swift propelled the difficulty and cost of buying tickets into the spotlight when the release of her Eras Tour tickets caused Ticketmaster to crash. The drama did not end there. Scalpers, who had purchased large numbers of tickets, listed them on resale sites for thousands of dollars. According to Business Insider, the average price for a secondary-market ticket was $3,801, a 1,402% increase from the original listed price of $253.56. Taylor Swift is unique in her massive and adoring fan base, but she is not alone in seeing extreme price increases on secondary tickets. 

Why have prices increased? 

Many factors have contributed to the increase in ticket prices. For example, Ticketmaster has a monopoly on ticket sales, which gives it the power to increase prices. Additionally, ticket brokers have been using exclusive dealing arrangements with venues and artists, which furthers their ability to raise prices because they face no competition. And it was fewer than twenty years ago that the music industry began shifting to live music as a substantial source of revenue to replace the income lost to streaming. Bots have also played a large role in recent years. For a long time, scalpers, or “ticket sharks,” have bought tickets to events cheaply and in bulk, reselling them, usually on the day of the event and in front of the venue, for a substantial profit. With the emergence of online ticket sales, scalping became easier and more people started participating. Online ticketing also benefited consumers, as those who could no longer make it to a show could resell their tickets. Although this smaller-scale reselling affected resale ticket prices, bots completely changed the game. Individuals and companies use bots to buy tickets in bulk more easily and at a much faster rate, drastically changing the scale at which tickets are marked up and resold and thereby driving up the cost of tickets. 

What can be done? 

Companies often try to combat bots with software that blocks them and with formal agreements prohibiting their use. This is not always sufficient; bots are constantly improved to go undetected and break past barriers. Although bots can be hard to block, regulations, as well as actions by ticket brokers, could substantially lessen their impact on resale ticket prices.

While a blanket ban on ticket resales could be implemented, it would likely have substantial adverse effects, and a more nuanced approach would be more effective. If ticket resales were banned, consumers who bought tickets intending to attend but could no longer do so would be unable to recoup their losses. This harm to consumers could be mitigated by requiring ticket brokers to accept returns. However, unlike other products, tickets are only sold for a limited time. If ticket brokers were required to accept returns, they would likely sustain large losses because they often would not have time to resell the ticket. This could deter competition and lead to increased prices. 

In Washington State, Representative Kristine Reeves proposed the TSWIFT Consumer Protection Act, which would require professional ticket resellers to obtain a ticket sales license from Washington’s Department of Licensing. The bill would also cap resale prices at 110% of the original sale price and expand Washington’s current prohibitions on using bots or software to purchase tickets. In Texas, Governor Greg Abbott signed a similar bill, nicknamed “Save our Swifties,” into law banning the use of bots to purchase concert tickets. Other states have enacted laws that require transparency in ticket prices and fees. 

Halfway around the world, when visitors could no longer access affordable tickets to see the Colosseum, Italy’s culture ministry stepped in. Following the pandemic, many sites in Europe began offering or requiring online ticket sales. Third parties like TripAdvisor began buying up tickets to major attractions as they became available and then selling them at extreme markups, leaving many tourists unable to get into the places they had come to see. The ministry took this seriously; it launched an investigation last summer and implemented a new ticket sales system in October. Under the new system, tickets carry visitors’ names, entry requires a valid ID, and some tickets are once again reserved for in-person sale. 

What is the issue if people are willing to pay? 

Attending large gatherings may be a privilege, but it is deeply ingrained in our history and an important part of society. Although some people are able and willing to pay large sums of money, the current market for tickets does not reflect common notions of supply and demand. Concert-goers no longer compete against each other for the limited supply; rather, scalpers and bots come in and artificially lower the supply, driving up the price. Ticketmaster acts as the initial ticket reseller, buying tickets from venues and performers and then reselling them to the public. Unlike the secondary ticket sellers one step further down the chain, these original brokers supply value by making the process easier and more efficient. Secondary ticket sellers, however, simply resell the exact same product, often on the same platform they bought the tickets from, adding no additional value for consumers. Bots allow them to do this on such a large scale that prices are artificially driven up to sometimes astronomical levels. 

While online sales originally increased access to events, bot-driven scalping has widened the economic gap in access to shows. Bots have transformed ticket resales from a risky business into a game-changing profit maker. Although regulations on bots may be difficult to enforce at the moment, the people and companies controlling the bots can and should be regulated. Limiting the price of resale tickets and prohibiting the use of bots could drastically curb the soaring price of tickets and give a wider range of consumers access to live entertainment. Public entertainment should not be priced beyond the reach of the average fan. Common-sense regulation can swing the pendulum in the other direction, though gladiator battles free to the public are likely not coming back any time soon.

Am I redundant? The Impact of Generative AI on Legal Hiring

By: Patrick Paulsen

In the ever-evolving world of law, the advent of generative artificial intelligence (AI) is reshaping traditional practices and methodologies, while raising concerns about its biases, the lack of ethical and regulatory guardrails, and its ability to automate, or render redundant, services that people currently provide. Perhaps closest to home for many law students, however, is how the implementation of AI in legal services will change or eliminate the professional roles they hope to occupy post-graduation. To prepare for and grapple with the shifts coming to the industry, it is important for aspiring attorneys to understand the size of the disruption AI will create, who it will impact, and what skills they can prioritize to succeed in the legal workplace of tomorrow.

Large Scale Disruption in Legal Services

While many firms are still in “wait and see” mode regarding generative AI (as of April, only 3% of firms had adopted it), experts expect the impact of generative AI on the legal services industry to be gigantic, and it is not hard to imagine why. With a global market worth around $700 billion, it is no wonder that the legal services industry is ripe for massive gains to be realized through increased efficiency. This opportunity has spurred legal software companies such as Lexis to deliver “hallucination free” legal citations, briefing, and document drafting. Westlaw is not far behind after Thomson Reuters (Westlaw’s parent company) recently acquired legal technology company Casetext, Inc. for $650 million.

While players in the legal industry scramble to implement generative AI and outcompete each other, the full extent of its impact is currently unknown. A recent Goldman Sachs economics report estimated that 44% of tasks in the legal industry could be automated through generative AI. AI’s potentially high impact on the industry has led to an array of predictions for the near future. Some reports predict record levels of profitability for firms, as AI can perform tasks with much higher productivity and accuracy than legal professionals.

On the other hand, consultant reports and industry experts warn that the integration of AI could spell doom for the economic models of law firms. Validatum, a legal pricing consultancy, notes that accessing and implementing AI technology entails high upfront costs for firms. And while investment in AI will enable firms to process legal work much more effectively and competitively, such gains in productivity undercut the primary source of legal revenue: the billable hour. As Mark McCreary, co-chair of Fox Rothschild’s privacy and data security practice, put it, there is “a lot of risk for the firm—you spend $1 million on a product to take [away] $3 million worth of hours.” Most of the work that is easily automated currently falls to paralegals and younger associates: administrative tasks, document review, and contract drafting. This has led industry insiders such as McCreary to express concern about the practice itself, noting that associates may develop fewer skills and that there will likely be a significant reduction in the workforce.

Young Associate, Paralegal, and In-House Work is Most Vulnerable

One of the areas most significantly affected is the hiring process for first-year associates at law firms. With first-year hiring already down in 2023, the prospect of automation eliminating jobs is a harsh reality for many aspiring attorneys. Automation is already being cited as a reason for firm layoffs, and opportunities to break into the legal industry may become much sparser. In fact, Deloitte predicts that 100,000 legal industry jobs could be automated in the next twenty years.

In addition to new associates, in-house and corporate counsel are also likely to be greatly affected by the integration of generative AI into their workplaces. Unlike firms, in-house counsel have no incentive to maximize hours, and common tasks such as contract analysis and document review are ripe for automation through AI.

Perhaps most at risk of disruption are paralegals. There are over 300,000 paralegal jobs in the United States, and anxiety over future job stability is already mounting. Like first-year associates, paralegals are assigned tasks such as document review and clerical work, which are most at risk of being automated away or transformed through AI integration.

With so much at stake for the professionals who currently fill these roles or plan to in the future, many are asking whether they will be replaced, and if not, what can be done to stay ahead of the curve.

Silver Linings and Skills for the Future

Luckily, not everyone believes these shifts will lead to large-scale displacement. Some consultants and managing partners believe that firm structures will not shift radically from pyramids to diamonds, and that the transformative power of AI could lead to more high-level or client-facing work for associates earlier in their careers. However, like any new technology, the rise of AI integration in the legal profession means that workers will have to adjust their skillsets.

Zach Warren, Thomson Reuters’ head of technology and innovation, states that due to AI’s ability to create first drafts, “[a]ll the writing you learn in law school will become editing.” The rise of AI in legal workplaces will, of course, mean that any aspiring legal professional must understand and be able to productively use the newly integrated technologies. One such skill that firms are already recruiting for is “prompt engineering.” Because generative AI depends on input and direction from a user, understanding how best to instruct the AI is a key component of putting it to constructive use. For this reason, bridging the gap between prompt engineering and legal expertise is a must-have skill for legal professionals going forward.
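
To make the idea concrete, the short sketch below shows, in rough terms, what “engineering” a prompt for a legal drafting task might look like. It is purely illustrative: the function and field names are hypothetical, and it is not tied to any particular vendor’s product or API.

# Illustrative only: a hypothetical helper that assembles a structured prompt
# for a generative AI drafting assistant. No specific vendor API is assumed;
# the resulting string would be sent to whatever model a firm has adopted.

def build_contract_review_prompt(role: str, jurisdiction: str,
                                 clause_text: str, concerns: list[str]) -> str:
    """Combine role, context, task, and constraints into one instruction."""
    concern_lines = "\n".join(f"- {c}" for c in concerns)
    return (
        f"You are acting as {role} reviewing a contract governed by "
        f"{jurisdiction} law.\n\n"
        f"Clause under review:\n{clause_text}\n\n"
        "Task: identify risks in the clause and suggest redline edits.\n"
        "Focus on these concerns:\n"
        f"{concern_lines}\n\n"
        "Constraints: cite no authority you cannot verify, flag any "
        "uncertainty explicitly, and keep the answer under 300 words."
    )

# Example use (hypothetical inputs):
prompt = build_contract_review_prompt(
    role="outside counsel for the buyer",
    jurisdiction="Washington",
    clause_text="Seller disclaims all warranties, express or implied...",
    concerns=["indemnification gaps", "limitation of liability"],
)
print(prompt)

Even a simple structure like this (role, context, task, constraints) tends to produce more useful output than a bare request, which is the intuition behind treating prompt design as a distinct skill.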

In conclusion, there is no doubt that AI will affect the legal industry immensely, far beyond previous technological advances such as printers and copying machines. However, only the future will reveal whether AI integration leads to an increase in opportunities in legal services or makes many roles redundant. Either way, those aspiring to be attorneys or to work in the legal services industry must be proactive and diligent, honing traditional legal skills while also learning to integrate generative AI tools into their practice.

Remote Test Scans Expose Larger Privacy Failures

By: James Ostrowski

In a major challenge to pandemic remote learning practices, the court in Ogletree v. Cleveland State University ruled that scanning students’ rooms violates the Fourth Amendment’s prohibition against unreasonable searches. While this decision is a definitive rebuke of a widely used practice, the case also reveals systemic flaws in university privacy practices. This post builds on Ogletree to suggest how universities can strike a balance between test integrity and privacy rights. 

Covid Acceleration 

For technology companies, the coronavirus pandemic was an accelerant. Startups rushed out messaging apps, video platforms, and ecommerce sites to thaw a populace frozen by a blizzard of lockdowns. There was perhaps no greater market capture for technology companies than in education. Colleges moved entirely online, deploying previously known but relatively new technologies, such as Zoom, on an unprecedented scale. Legions of students attended class from their kitchen tables and bedrooms. Professors, intent on maintaining their in-person standards in a remote world, relied on proctoring tools, many of which required room scans from students who had little choice but to comply. Now, two years later, hundreds of programs still record students throughout remote tests. 

Remote Test Scans Ruled Unconstitutional 

In February 2021, a student at Cleveland State University, Aaron Ogletree, was sitting for a remote chemistry exam when his proctor told him to scan his bedroom. He was surprised; he had assumed the room-scan policy had been abolished until, two hours before the test, Cleveland State emailed him that he would have to scan his room. Ogletree responded that he had sensitive tax documents exposed and could not remove them. Like many students, Ogletree had to stay home due to health considerations, and he could only take exams in his bedroom. Faced with the false choice of complying with the search or failing the test, he panned his laptop’s webcam around his bedroom for the proctor and all the students present to see. 

Ogletree sued Cleveland State for violating his Fourth Amendment rights. The Fourth Amendment protects “[t]he right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” 

Judge J. Philip Calabrese of the Northern District of Ohio decided in favor of the student because of the heightened Fourth Amendment protection afforded to the home, the lack of alternatives for Ogletree, and the short notice. Calabrese conceded that this intrusion may have been minor, but cited Boyd v. United States to support the slippery-slope argument that “unconstitutional practices get their first footing…by silent approaches and slight deviations.” 

The facts of this case are a symptom of a larger problem: the university failed its students and its professors by not applying its online education technology consistently. 

Arbitrary Application and Lack of Policies 

Cleveland State provides professors with an arsenal of services to administer online classes. These tools include a plagiarism detection system that faculty can use to see students’ IP addresses, a proctoring service that records students and uses artificial intelligence to flag suspicious behavior, and, of course, pre-test room scans.

The school leaves it entirely to the discretion of faculty members—many of whom are not experts in student privacy—to choose which tools or combinations of tools to use. Cleveland State’s existing policies offer no guidance on the tradeoffs of using any one method. This is tantamount to JetBlue asking its pilots to fly through a whiteout without radar.

Toward a Unified Policy

What may have been an understandable oversight in the early pandemic whirlwind cannot be considered one now. The tension between privacy and security is well known. Only by carefully balancing students’ privacy rights against the university’s interest in test integrity will we find a workable solution. Schools across the country should take heed of the Ogletree ruling. University leadership holds the responsibility to balance those interests and impart clear guidance to test administrators. To foster this progression, we offer two recommendations: 

  1. Cost-Benefit Guidance: The university should score tools on the privacy interests involved and the expected benefit of their application. This should include guidance on whether a method can be easily circumvented. As individual teachers are not necessarily savvy about the legal implications of certain remote test policies, the university must provide clear analysis and guidance. An example entry may read, “Blackboard provides student location data. Though location tracking is a relatively common practice, students must be made aware of it. This tool can ensure that students are where they say they are, which is not usually relevant for test integrity. If students wished, they could easily evade this using a low-cost VPN.” 
  2. Test Policy Clearly Outlined in Syllabi: Professors should provide guidance within their course descriptions on what technologies and methods are used to administer tests, and students could sign an acknowledgment form. For example, a professor would delineate the applications they use to administer exams, information about whether exams are proctored, and the recourse for not following a policy. This way, students can make affirmative decisions about their privacy exposure by choosing a course that aligns with their interests rather than being blindsided by a heavy-handed policy in the final weeks of a semester. In turn, professors will not have to worry about future disagreements because their students knowingly consented to the course’s policies.

The university must balance policy considerations around security and privacy rights. A failure to balance these conflicting pursuits can cause student anxiety, unnecessary privacy violations, and poor test integrity.

Closing the Loop: Solving the Impossibility of Data Deletion

By: Josephine Laing

Personal information is the newest and shiniest coin of the realm. The more personal the data, the more valuable it may be. While most consumers are aware that their data is worth its weight in gold, it is not always clear who is mining this data and what can be done to protect it. Luckily, efforts have been made to create consumer protections that shine a light on the notorious data broker industry. 

Data brokers collect personal information about consumers, but they do not gather it directly from them. Rather, personal information is collected from commercial entities, government, and other sources, unbeknownst to the consumer. This data is constantly being sold, and for consumers to track down their personal information, they would have to follow an ever-winding trail of sales between data brokers. As a result, the industry is commonly critiqued for its lack of transparency. While public awareness of this industry is crucial, the key issue is what deletion rights consumers have available to combat the collection. If consumers’ deletion rights do not reach data brokers, those rights become meaningless, and meaningless deletion rights prevent consumers from exerting control over their personal information. Privacy rights are consequently linked directly to one’s ability to require data brokers to delete information. Without this right to delete, there is no true right to privacy. 

The Delete Act 

On October 10, 2023, California Governor Newsom signed the Delete Act into law. The Delete Act promises consumers a new age of data control. Starting in August 2026, California consumers will be able to effectively exercise their deletion rights. This might come as a surprise to some, as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) already granted Californians deletion rights in 2018 and 2020, respectively. Those deletion rights, however, came with exceptions that were, until recently, abused by the data broker industry. 

The Delete Act, introduced by Senator Becker and sponsored by Privacy Rights Clearinghouse, amends and adds to Sections 1798.99.80-87 of the California Civil Code. These amendments make important changes to the data broker provisions that accompany the CCPA. The changes adopt a more inclusive definition of data brokers, preventing a notoriously shifty industry from evading jurisdiction. The Act requires data brokers to disclose when they collect personal information about minors, consumers’ precise geolocations, and consumers’ reproductive health care data. Data brokers must also include informational links on their websites about collection techniques and deletion rights. Interestingly, brokers are forbidden from using dark patterns. While data brokers are already required to register in California, the penalty for failing to register has increased from $100 to $200 per day. These daily penalties also apply to each deletion request that goes unheeded by a broker, and the fines can add up, especially as many consumers in California are ready to make deletion requests.

The Delete Act addresses the Sisyphean task of data management. Consumers are constantly producing data, so managing it is never-ending. The law includes a provision that makes the deletion right effective: data brokers must access the deletion mechanism, and reassess it, at least once every forty-five days. When a data broker accesses the mechanism, it must (1) process all deletion requests; (2) direct all service providers or contractors to delete personal information related to each request; and (3) send an affirmative representation of deletion to the California Privacy Protection Agency indicating the number of records deleted and which service providers or contractors were contacted. After a consumer has submitted a deletion request, data brokers must continue to delete the consumer’s data every forty-five days unless otherwise requested. By requiring this recurring engagement with the deletion mechanism, the Act actively protects consumer data.
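
For readers who think in terms of systems, a rough sketch of the recurring obligation described above might look like the following. It is only an illustration of the three statutory steps as summarized in this post; the class and function names are hypothetical and do not reflect any actual broker or agency system.

# Illustrative sketch of the Delete Act's recurring deletion cycle,
# modeling the three statutory steps summarized above. All names are
# hypothetical; this models the obligation, not any real system.

from dataclasses import dataclass, field

@dataclass
class DeletionRequest:
    consumer_id: str
    opted_out: bool = False  # consumer may request otherwise

@dataclass
class BrokerDeletionCycle:
    pending_requests: list = field(default_factory=list)
    service_providers: list = field(default_factory=list)

    def run_cycle(self) -> dict:
        """Run one 45-day cycle: process requests, notify providers, report."""
        deleted = 0
        for request in self.pending_requests:
            if request.opted_out:
                continue
            # Step 1: process the deletion request against the broker's own records.
            deleted += self.delete_local_records(request.consumer_id)
            # Step 2: direct service providers and contractors to delete as well.
            for provider in self.service_providers:
                self.notify_provider(provider, request.consumer_id)
        # Step 3: report the results to the California Privacy Protection Agency.
        return {
            "records_deleted": deleted,
            "providers_contacted": list(self.service_providers),
        }

    def delete_local_records(self, consumer_id: str) -> int:
        # Placeholder for the broker's own deletion logic.
        return 1

    def notify_provider(self, provider: str, consumer_id: str) -> None:
        # Placeholder for contacting a service provider or contractor.
        pass

# A scheduler would invoke run_cycle() at least once every forty-five days,
# repeating for each consumer until they request otherwise.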

Who cares? 

Why is this Act necessary? Why weren’t the original deletion rights enough? Through the CPRA’s amendments to the CCPA, California citizens are granted preliminary rights to delete their data. But California consumers’ right to delete was limited to data retained by businesses providing services to Californians, and the CCPA only reaches businesses that handle the data of 50,000 California consumers, make $25 million in gross revenue, or profit primarily (50% or more) from selling data. Even if a business qualifies, there are many exceptions it can claim to avoid enforcement. Section 1798.145 outlines the right-to-delete exceptions and allows businesses to “collect, use, retain, sell, share, or disclose consumers’ personal information that is deidentified or aggregate consumer information.” 1798.145(a)(6). Such exceptions allow consumers’ personal information to be excluded from privacy protections, and that information can still be used to identify consumers through aggregation. Once personal data is sold to a data broker (or a service provider or contractor), the consumer’s right to delete is vastly reduced. Thus, the exceptions carved out of the right to delete effectively reduce consumer privacy protections. 

The Delete Act addresses these gaps in consumer privacy by empowering consumers to delete their personal information from data brokers. Since personal information is constantly collected from consumers, expecting consumers to repeatedly delete their information from data brokers is unreasonable. Accordingly, for consumers to efficiently exercise a right to delete, they must be able to delete information at scale. The Delete Act grants consumers the right to delete “any personal information related” to them “held by the data broker or associated service provider or contractor” through a “single verifiable consumer request.” The bill addresses the persistence of data collection by eliminating the consumer’s need to continually and repetitively request deletion. 

So where is Washington’s Delete Act? Emory Roane of Privacy Rights Clearinghouse hopes that the Delete Act can “serve as an impetus – if not a direct model – for other states to model… [as] there is a massive blind spot when it comes to businesses that don’t have a direct relationship with the consumer.” Roane notes that data brokers are a bipartisan issue, pointing to the passage of data broker registries in both Texas and Oregon in 2023; Washington has yet to establish one. Getting to the heart of the issue, Roane states: “Republican or Democrat, old or young, across the country and across every demographic, everyone rightfully feels like they’ve lost control of their personal information and privacy and data brokers are a huge part of that problem.” Tackling the data broker industry is a tall task, and creating an effective right to delete is a necessary start. As California tries out its deletion portal, Washington should take heed.