Lab-Grown Meat: A Beef History

By Camden Lethcoe

We consume a lot of meat. On any given day, up to 74% of North Americans consume red or processed meat, and worldwide meat consumption has more than quadrupled since 1961. While these statistics might not seem alarming on their face, some additional numbers make the problem clearer. Animal farming accounts for anywhere between 11.1% and 19.6% of the globe’s greenhouse gas emissions and requires roughly a quarter of all farmable land in the world to sustain itself. Those figures make our high consumption of meat seem a little more grim, and the need for a solution (or at the very least, mitigation of these effects) more clear. 

Fortunately, lab-grown meat has been in development since 2001, when American businessman Jon Vein filed a patent: “Method For Producing Tissue Engineered Meat for Consumption.” The patent details the utility and viability of lab-grown meat, stating that “livestock feed accounts for approximately 70% of all the wheat, corn, and other grain produced” in the United States, and further, that “to produce one pound of beef, thousands of pounds of water are required.” There’s no doubt that the farming of livestock is a massively resource-intensive operation.

Vein filed his patent in 2001, six years before the iPhone existed. The staggering technological advances since then raise the question: why is there still no lab-grown meat in grocery stores? While advancements are certainly underway and scaling costs are decreasing, no lab-grown meat product is currently available for retail sale in the United States.

So how does it work?

Not to be confused with Beyond Meat or Impossible Meat—both of which are plant-based products manufactured by extracting various proteins from plant matter—synthesizing lab-grown meat involves a five-step process. First, a cell sample is harvested from an animal in a non-harmful process. Second, the cells are placed in small, sterile environments and fed various supplements and nutrients, allowing them to multiply into the billions or trillions. Third, additional nutrients and proteins are provided to the cells. This is the stage in which the cells begin to resemble what we would typically recognize as “muscle, fat, or connective tissue cells.” Fourth, the cells are harvested. Fifth and finally, the harvested, lab-grown meat is processed. Seems like a pretty refined process. So what’s the hold-up?

Surprisingly, it’s not primarily the fault of regulatory bureaucracy. The biggest hurdles the lab-grown meat industry faces are taste, texture, and commercial scaling. For example, a major problem in the industry concerns the feasibility of growing these cells in bulk while simultaneously maintaining a taste and quality akin to real meat—a task much easier to accomplish in the smaller-scale tests that have thus far been conducted. Scaling this new technology is understandably expensive, as are many technological advances in their infancy. Indeed, the first lab-grown burger cost more than $300,000 to produce! Current retail estimates range as high as $45 per pound of lab-grown meat (compared to roughly $5 per pound for ground beef, as of September 2023).

Who are the players here?

You might be wondering how regulators in the United States are handling the proliferation of lab-grown meat technology. You might also be wondering who is even in charge of such regulation. In the United States, those regulators would be the Food and Drug Administration (FDA) and the U.S. Department of Agriculture (USDA). 

As to how they’re handling it? The two agencies have had a formal agreement in place since 2019, when Congress asked them to create one in order to “delineate each agency’s responsibilities for regulating cell-cultivated meat.” The agreement largely addresses each agency’s role in the regulation of this industry, with the FDA responsible for, essentially, every inspection and test leading up to the “point of harvest” of the lab-grown meat, at which point the USDA takes over. 

According to the Congressional Research Service, there are currently “more than 150 companies worldwide… involved in the cell-cultivated meat industry, 43 of which are in the United States.” Despite this, only two companies have actually sold lab-grown meat in the United States: UPSIDE Foods and GOOD Meat. The two companies currently sell lab-grown chicken meat at San Francisco and Washington, D.C. restaurants. While buying a lab-grown dish at one of these locations will set you back at least $70, those who have tried the meat have likened it to real chicken, with the most significant difference being that the meat was more uniform in texture than real chicken, which typically has “fatty” and “chewy” elements. 

What does the future hold?

It is clear that lab-grown meat has a long way to go before it reaches the shelves of grocery stores. Encouragingly, though, what currently exists for public consumption at the D.C. and San Francisco restaurants—at however small a scale, and however large a price—reportedly tastes like real meat. So while the outlook is encouraging, there is no definite answer as to the viability of this developing technology.

Am I redundant? The Impact of Generative AI on Legal Hiring

By: Patrick Paulsen

In the ever-evolving world of law, the advent of generative artificial intelligence (AI) is reshaping traditional practices and methodologies, as well as raising concerns about its biases, its lack of ethical and regulatory guardrails, and its ability to automate person-provided services or render them redundant. Perhaps closest to home for many law students, however, is how the implementation of AI in legal services will change or eliminate the professional roles they hope to occupy post-graduation. To prepare for and grapple with the shifts coming to the industry, it is important for aspiring attorneys to understand the scale of disruption AI will create, who it will impact, and what skills can be prioritized to succeed in the legal workplace of tomorrow.

Large Scale Disruption in Legal Services

While many firms are still in “wait and see” mode regarding generative AI (as of April, only 3% of firms had adopted it), experts expect the technology’s impact on the legal services industry to be enormous, and it is not hard to imagine why. With a global market worth around $700 billion, it is no wonder that the legal services industry is ripe for massive gains to be realized through increased efficiency. This opportunity has spurred legal software companies such as Lexis to deliver “hallucination free” legal citations, briefing, and document drafting. Westlaw is not far behind after Thomson Reuters’s (Westlaw’s parent company) recent $650 million acquisition of legal technology company Casetext, Inc.

While players in the legal industry scramble to implement generative AI and outcompete each other, the extent and full impacts of generative AI are currently unknown. A recent Goldman Sachs economics report estimated that 44% of tasks in the legal industry can be automated through generative AI. AI’s potentially high impact on the industry has led to an array of predictions for the near future. Some reports predict record levels of profitability for firms as AI can perform tasks with much higher productivity and accuracy than legal professionals.

On the other hand, consultant reports and industry experts warn that the integration of AI could spell doom for the economic models of law firms. Validatum, a legal pricing consultancy group, notes that accessing and implementing AI technology entails high upfront costs for firms. While the investment in AI will enable firms to process legal work much more effectively and competitively, such gains in productivity undermine the primary source of legal revenue: the billable hour. As Mark McCreary, co-chair of Fox Rothschild’s privacy and data security practice, put it, there is “a lot of risk for the firm—you spend $1 million on a product to take [away] $3 million worth of hours.” Most of the work that is easily automated is currently in the domain of paralegals and younger associates, such as administrative tasks, document review, and contract drafting. This has led industry insiders such as McCreary to express concern about the practice itself, noting that associates may develop fewer skills and that there will likely be a significant reduction in the workforce.

 Young Associate, Paralegal, and In-House Work is Most Vulnerable

One of the areas significantly affected is the hiring process for first-year associates in law firms. With first-year firm hirings already down in 2023, the prospect of automation eliminating jobs is a harsh reality for many aspiring attorneys. With automation already being cited as a reason for firm layoffs, opportunities to break into the legal industry may become much sparser. In fact, Deloitte predicts that 100,000 legal industry jobs could be automated in the next twenty years.

In addition to new associates, in-house and corporate counsel work is also likely to be greatly impacted by the integration of generative AI into the workplace. Unlike firms, in-house counsel does not have an incentive to maximize hours, and common tasks such as contract analysis and document review are ripe for automation through AI.

Perhaps the most at risk of disruption in their roles are paralegals. There are over 300,000 paralegal jobs in the United States and the anxiety over future job stability is already mounting. Similar to first-year associates, paralegals are designated tasks such as document review and clerical work which are most at risk of being automated away or transformed through AI integration.

With so much at stake for the professionals who currently fill these roles or plan to in the future, many are asking whether they will be replaced, and if not, what can be done to stay ahead of the curve.

Silver Linings and Skills for the Future

Luckily, not everyone believes that these shifts will lead to large-scale displacement. Some consultants and managing partners believe that firm structures will not shift radically from pyramids to diamonds and that the transformative power of AI could lead to more high-level or client-facing work for associates earlier in their careers. However, like any new technology, the rise of AI integration in the legal profession means that workers will have to adjust their skillsets.

Zach Warren, Thomson Reuters’s head of technology and innovation, states that due to AI’s ability to create first drafts, “[a]ll the writing you learn in law school will become editing.” The rise of AI in legal workplaces will, of course, mean that any aspiring legal professional will have to understand and productively make use of the newly integrated technologies. One such skill that is already being recruited for is “prompt engineering.” Because generative AI depends upon input and direction from a user, understanding how best to instruct the AI is a key component in putting it to constructive use. For this reason, bridging the gap between prompt engineering and legal expertise is a must-have skill for legal professionals going forward.

In conclusion, there is no doubt that AI will impact the legal industry immensely, far beyond previous technological advances such as printers and copying machines. However, only the future will reveal whether AI integration will lead to an increase in opportunities in legal services or make many roles redundant. Either way, those aspiring to be attorneys or work in the legal services industry must be proactive and diligent in honing not only traditional legal skills but also in integrating generative AI tools into their practice.

Remote Test Scans Expose Larger Privacy Failures

By: James Ostrowski

In a major challenge to pandemic remote learning practices, the court in Ogletree v. Cleveland State University ruled that scanning students’ rooms violates the Fourth Amendment’s prohibition against unreasonable searches. While this decision is a definitive rebuke of a widely used practice, the case also reveals systemic flaws in university privacy practices. This blog will build off Ogletree to strike a balance between test integrity and privacy rights. 

Covid Acceleration 

For technology companies, the coronavirus pandemic was an accelerant. Startups rushed out messaging apps, video platforms, and ecommerce sites to thaw a populace frozen by a blizzard of lockdowns. There was perhaps no greater market capture for technology companies than in education. Colleges moved entirely online, deploying previously known but relatively new technologies, such as Zoom, on an unprecedented scale. Legions of students attended class from their kitchen tables and bedrooms. Professors, intent on maintaining their in-person standards in a remote world, relied on proctoring tools, many of which required room scans from students who had little choice but to comply. Now, two years later, hundreds of programs still record students throughout remote tests. 

Remote Test Scans Ruled Unconstitutional 

In February 2021, a student at Cleveland State University, Aaron Ogletree, was sitting for a remote chemistry exam when his proctor told him to scan his bedroom. He was surprised. Ogletree assumed the room scan policy had been abolished, until, two hours before the test, Cleveland State emailed him that he would have to scan his room. Ogletree responded that he had sensitive tax documents exposed and could not remove them. Like many students, Ogletree had to stay home due to health considerations, and he could only take exams in the bedroom of his house. Faced with the false choice of complying with the search or failing the test, he panned his laptop’s webcam around his bedroom for the proctor and all the students present to see. 

Ogletree sued Cleveland State for violating his Fourth Amendment rights. The Fourth Amendment protects “[t]he right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures.” 

Ohio District Court judge J. Philip Calabrese decided in favor of the student because of the heightened Fourth Amendment protection afforded to the home, the lack of alternatives for Ogletree, and the short notice. Calabrese conceded that this intrusion may have been minor, but cited Boyd v. United States to support the slippery slope argument that “unconstitutional practices get their first footing…by silent approaches and slight deviations.” 

The facts of this case are a symptom of a larger problem. The university failed its students and its professors when it did not consistently apply its online education technology. 

Arbitrary Application and Lack of Policies 

Cleveland State provides professors with an arsenal of services to administer online classes. These tools include a plagiarism detection system that faculty can use to see students’ IP addresses, a proctoring service that records students and uses artificial intelligence to flag suspicious behavior, and, of course, pre-test room scans.

The school leaves it entirely to the discretion of faculty members—many of whom are not experts in student privacy—to choose which tools or combinations of tools to use. Cleveland State’s existing policies offer no guidance on the tradeoffs of using any one method. This is tantamount to JetBlue asking its pilots to fly through a whiteout without radar.

Toward a Unified Policy

What may have been an understandable oversight in the early pandemic whirlwind cannot be considered so now. The tension between privacy and security is well-known. Only by carefully balancing students’ privacy rights against the university’s interest in test integrity will we find a workable solution. Schools across the country should take heed of the Ogletree ruling. University leadership holds the responsibility to balance those interests and impart clear guidance to test administrators. To foster this progression, we offer two recommendations: 

  1. Cost-Benefit Guidance: The university should score tools on the privacy interests involved and the expected benefit of their application. This should include guidance on whether a method can be easily circumvented. As individual teachers are not necessarily savvy on the legal implications of certain remote test policies, the university must provide clear analysis and guidance. An example entry may read, “Blackboard provides student location data. Though location tracking is a relatively common practice, students must be made aware of it. This tool can ensure that students are where they say they are, which is not usually relevant for test integrity. If students wished, they could easily evade this using a low-cost VPN.” 
  2. Test Policy Clearly Outlined in Syllabi: Professors should provide guidance within their course descriptions on what technologies and methods are used to administer tests, and students could sign an acknowledgment form. For example, a professor would delineate the applications they use to administer exams, information about whether the exams are proctored, and recourse for not following a policy. This way, students can make affirmative decisions about their privacy exposure by choosing a course that aligns with their interests rather than being blindsided by heavy-handed policy in the final weeks of a semester. Professors, in turn, will not have to worry about future disagreements because their students knowingly consented to the course’s policies.

The university must balance policy considerations around security and privacy rights. A failure to balance these conflicting pursuits can cause student anxiety, unnecessary privacy violations, and poor test integrity.

Closing the Loop: Solving the Impossibility of Data Deletion

By: Josephine Laing

Personal information is the newest and shiniest coin of the realm. The more personal the data, the more valuable it may be. While most consumers are aware that their data is worth its weight in gold, it is not always clear who is mining this data and what can be done to protect it. Luckily, efforts have been made to create consumer protections that shine a light on the notorious data broker industry. 

Data brokers collect personal information about consumers. This information is not gathered directly from consumers. Rather, it is collected from commercial entities, government records, and other sources – unbeknownst to the consumer. This data is constantly being sold. For a consumer to track down their personal information, they would have to follow an ever-winding trail of sales between data brokers. As a result, the industry is commonly critiqued for its lack of transparency. While public awareness of this industry is crucial, the key issue is what consumer deletion rights are available to combat the collection. Unless consumers’ deletion rights extend to data brokers, those rights are meaningless, and meaningless deletion rights prevent consumers from exerting control over their personal information. Consequently, privacy rights are directly linked to one’s ability to require data brokers to delete information. Without this right to delete, there is no true right to privacy. 

The Delete Act 

On October 10th, 2023, California’s Governor Newsom signed the Delete Act into law. The Delete Act promises consumers a new age of data control. Starting in August 2026, California consumers will have the ability to effectively exercise their deletion rights. This might come as a surprise to some, as the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) already granted Californians deletion rights in 2018 and 2020 respectively. These deletion rights, however, were caveated by exceptions that were, until recently, abused by the data broker industry. 

The Delete Act, introduced by Senator Becker and sponsored by Privacy Rights Clearinghouse, amends and adds to Section 1798.99.80-87 of the California Civil Code. These amendments create important changes in the data broker provisions included in the CCPA. The changes embrace a more inclusive definition for data brokers, preventing a notoriously shifty industry from evading jurisdiction. This Act requires data brokers to disclose when they collect personal information about minors, consumers’ precise geolocations, and consumers’ reproductive health care data. Data brokers must also include informational links on their websites about collection techniques and deletion rights. Interestingly, brokers are forbidden from using dark patterns. While data brokers are already required to register in California, the penalty for failing to register has increased to $200 per day from $100. These daily penalties also apply for each deletion request that goes unheeded by the broker. These fines can add up, especially as many consumers in California are ready to make deletion requests.
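Those per-day figures compound quickly. As a rough, back-of-the-envelope sketch (using only the numbers cited above; the statute’s actual penalty scheme may include additional factors this toy model ignores):

```python
# Illustrative arithmetic only, based on the figures cited above:
# $200 per day for failing to register as a data broker, and the same
# daily rate for each deletion request left unheeded.

DAILY_RATE = 200  # dollars per day, per violation


def registration_penalty(days_unregistered: int) -> int:
    """Exposure from operating without registering in California."""
    return DAILY_RATE * days_unregistered


def deletion_penalty(unheeded_requests: int, days: int) -> int:
    """Exposure accrued across all ignored deletion requests."""
    return DAILY_RATE * unheeded_requests * days


# A broker that ignores 1,000 requests for one 45-day cycle:
# 200 * 1,000 * 45 = $9,000,000 in potential exposure.
```

Even at modest request volumes, the per-request, per-day structure is what gives the Act its teeth.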

The Delete Act addresses the Sisyphean task of data management. Consumers are constantly producing data; thus, the management of data is never-ending. This law includes a provision that makes the deletion right effective. Data brokers must access the deletion mechanism and reassess it at least once every forty-five days. When a data broker accesses the mechanism, they must: (1) process all deletion requests; (2) direct all service providers or contractors to delete personal information related to the request; and (3) send an affirmative representation of deletion to the California Privacy Protection Agency indicating the number of records deleted and which service providers or contractors were contacted. After a consumer has submitted a deletion request, data brokers must continue to delete the consumer’s data every forty-five days unless otherwise requested. By requiring engagement with the deletion mechanism at least every forty-five days, the Act actively protects consumer data.
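The forty-five-day cadence can be sketched as a simple compliance loop. This is a hypothetical illustration only — the function and field names below are invented for the sketch, not drawn from the Act’s text:

```python
from datetime import date, timedelta

ACCESS_WINDOW = timedelta(days=45)  # maximum gap between mechanism checks


def next_deadline(last_access: date) -> date:
    """Latest date by which a broker must access the mechanism again."""
    return last_access + ACCESS_WINDOW


def run_cycle(pending_requests: list[str]) -> dict:
    """One compliance cycle, mirroring the three statutory steps:
    (1) process all deletion requests,
    (2) direct service providers/contractors to delete related data,
    (3) report deletion counts to the California Privacy Protection Agency.
    """
    processed = list(pending_requests)         # step 1: process everything
    contractors_notified = len(processed) > 0  # step 2 (stand-in flag)
    return {                                   # step 3: the report
        "records_deleted": len(processed),
        "contractors_notified": contractors_notified,
    }
```

The key design point the sketch captures is that compliance is recurring, not one-shot: every cycle must clear the full request queue and produce an affirmative report.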

Who cares? 

Why is this Act necessary? Why weren’t the original deletion rights enough? Through the CPRA’s amendments to the CCPA, California citizens are granted preliminary rights to delete their data. That right to delete, however, was limited to data retained by businesses providing services to Californians. And the CCPA only covers businesses that handle the data of 50,000 or more California consumers, make $25 million in gross revenue, or profit primarily (50% or more) by selling data. Even if a business qualifies, there are many exceptions it can claim to avoid enforcement. Section 1798.145 outlines the right-to-delete exceptions and allows businesses to “collect, use, retain, sell, share, or disclose consumers’ personal information that is deidentified or aggregate consumer information.” 1798.145(a)(6). Such exceptions allow consumers’ personal information to be excluded from privacy protections. Information can still be used to identify consumers via aggregation efforts. Once the personal data is sold to a data broker (service provider or contractor), the consumer’s right to delete is vastly reduced. Thus, the exceptions carved out for data deletion effectively reduce consumer privacy protections. 

The Delete Act addresses the gaps in consumer privacy by empowering consumers to delete their personal information from data brokers. Since personal information is constantly collected from consumers, expecting consumers to repeatedly delete their information from data brokers is unreasonable. Accordingly, for consumers to efficiently utilize a right to delete they must be able to delete information at scale. The Delete Act calls for the right for consumers to delete “any personal information related” to them “held by the data broker or associated service provider or contractor” through a “single verifiable consumer request.” The bill addresses the persistence of data collection by eliminating the consumer’s need to continually and repetitively request deletion. 

So where is Washington’s Delete Act? Emory Roane of Privacy Rights Clearinghouse hopes that the Delete Act can “serve as an impetus – if not a direct model – for other states to model… [as] there is a massive blind spot when it comes to businesses that don’t have a direct relationship with the consumer.” Emory notes that data brokers are a bipartisan issue, pointing to the passing of data broker registries in both Texas and Oregon in 2023. Washington has yet to establish a data broker registry. Getting to the heart of the issue, Emory states that: “Republican or Democrat, old or young, across the country and across every demographic, everyone rightfully feels like they’ve lost control of their personal information and privacy and data brokers are a huge part of that problem.” Tackling the data broker industry is a tall task, and creating an effective right to delete is a necessary start. As California tries out its deletion portal, Washington should take heed.

Emojis Speak Louder Than Words: A Legal Perspective

By: Lauren Lee

Imagine being legally bound to a contract with nothing more than a ‘thumbs-up’ emoji. In our ever-evolving digital landscape, each new phone software update introduces an array of new emojis and emoticons to our keyboards. These small digital icons serve as time-saving tools, enabling more efficient expression of emotions and tone. However, emojis and emoticons bring forth the challenge of potential ambiguity, as many lack a ‘defined meaning.’ For example, the “praying hands” emoji is sometimes misconstrued as a “high five” emoji. In the legal realm, while interpreting emojis may be complex, their admissibility as evidence in trials holds undeniable importance.

A seemingly uncontroversial smiley face emoji or emoticon can have significant implications for cases. In 2015, U.S. District Judge for the Southern District of New York Katherine Forrest ruled that all symbols, including emojis and emoticons, must be read to jury members. Tyler Schnoebelen, a linguist at Stanford, explained how the use of emoticons provides insight into a writer’s intention. A smiley face may indicate politeness, a frowning face may signal disapproval, while a winking face may convey flirtatiousness. More recently, in July 2023, the District Court of D.C. ruled that when the Bed Bath & Beyond CEO tweeted a smiling moon emoji, it symbolized “to the moon” or “take it to the moon,” reflecting optimism about the company’s stock. This interpretation influenced investors to purchase the stock, and the court found that the moon emoji was actionable.

While civil cases often focus on interpreting emoji meanings rather than their admissibility, attorneys should prepare for litigation by understanding the procedural requirements for submitting emojis as evidence. Texts or messages containing emojis or emoticons must be relevant for presentation to the jury. Testimony from the sender can offer context and highlight the sender’s intent in sending the emoji. Once relevance is established, the messages must be authenticated, with the admitting party ensuring that both the sender and receiver saw the same image.

Already, tens of cases each year in the U.S. address the meaning of emojis in a legal context, and some states have permitted the use of emojis as evidence. In a report sponsored by the State Bar of Texas, the authors suggest that emoticons and emojis resemble out-of-court statements. Rule 801 of the Federal Rules of Evidence (FRE) defines a statement as an oral assertion, a written assertion, or nonverbal conduct intended as an assertion, and Rule 801(d)(2) exempts an opposing party’s statements from the hearsay bar. If authenticated, emojis could thus be admitted as evidence because they likely fall under the written assertion category.

Admitting emojis as evidence in a trial has its challenges. Undoubtedly, expanding the scope of what is permitted as evidence complicates litigation. The downside of allowing emojis as evidence lies in the potential increase in the duration and cost of litigation, increased reliance on the jury or judge’s interpretation of emojis, and potential for parties to evade liability through emoji use. Additionally, emojis may appear differently on different devices (e.g., Apple products vs. Androids). Admitting emojis as evidence might also lead to unintended agreements or commitments.

Despite the increasing complexity of emoji interpretation, their admissibility in trials should be acknowledged. Emojis expand our means of expression and can play a crucial role in conveying nuanced emotional and contextual information, fostering more accurate communication within the legal system. It is vital to understand that language should not be interpreted solely within its plain meaning but also in the context in which it is used. This concept is similar to statutory interpretation canons in administrative law, where various interpretive modes are employed to derive meaning. Emojis and emoticons, in this context, can be likened to symbols that effectively convey ideas and the author’s tone, making them a significant component for establishing contextual evidence in cases. To prepare for the ever-expanding use of emojis and emoticons, courts and attorneys should deploy appropriate tools to develop fluency in this new ‘emoji language.’