Liability Theater: Can Live Theater Reopen in the Time of COVID-19?

By: Paige Gagliardi

“The show must go on!” …But can it during a pandemic?

The novel coronavirus presents a slew of new barriers for the American live entertainment industry. Broadway has been shut down since March 12 and will remain so until June 2021. Over 23,000 live events have been cancelled, an estimated 90 percent of independent venues are expected to close permanently, and the entertainment industry faces a revenue loss of over $160 billion. While artists and venue owners alike cry out for a safe way to reopen, returning to the pre-pandemic model of live theater could expose venues to an exorbitant amount of liability.

Since the initial industry shutdown, unique forms of entertainment production and consumption have emerged. From filming on "closed sets," to record-breaking streaming of taped Broadway productions, to drive-in concerts, the entertainment industry has begun to adapt. But due to the unique conditions posed by indoor auditorium seating, backstage work, and performance, many concert halls, stadiums, and historic theaters remain closed. In an attempt to provide consistency and to cope with ever-changing state reopening regulations, entertainment industry unions such as IATSE (the International Alliance of Theatrical Stage Employees) have released recovery plans for future live events. In its 27-page document, IATSE set out new guidelines for its union venues. The guidelines require a designated COVID-19 Compliance Officer (much like those now seen on every Hollywood set) who oversees and monitors adherence to protocols and training. Further, the guidelines require a venue to have a written COVID-19 safety plan, reduced personnel, diagnostic testing, daily screening, adequate ventilation, health-safety education, touchless ticket scanning, reduced patron capacity, and paid sick leave for staff pursuant to the Families First Coronavirus Response Act.

But are these internal measures enough to decrease the spread of COVID-19 in a live entertainment environment? They may not be, amid union infighting and industry turmoil. While the leaders of 12 major unions met in solidarity in May, SAG-AFTRA and Actors' Equity Association, two of the nation's largest performing arts unions, are now locked in a jurisdictional dispute over whose territory (meaning whose regulations and personnel) the taping of live theater productions belongs to. And although twelve states have begun enacting legislation to narrow liability related to and stemming from COVID-19, such legislation does not absolve employers of their duty to maintain safe operations for workers and customers.

Under the Occupational Safety and Health Act, an employer has a legal obligation to provide a safe and healthful workplace. Because OSHA has no specific regulation addressing the virus, COVID-19 falls under the employer's general duty of care. The "General Duty Clause," Section 5(a)(1) of the Act, requires an employer to protect its employees against "recognized hazards" to safety or health which may cause serious injury or death. Applying this clause to COVID-19 is no obscure fear; although tracing the origin of an infection can be hard to prove, companies such as Princess Cruise Lines and Walmart, along with at least three elderly care facilities, already face wrongful death lawsuits. So, unless the issue of liability for COVID-19 transmission is addressed in future legislation, unions and venue owners must proactively seek to limit any potential liability. Patrons may need to sign a participation waiver before entering, or their ticket may include a waiver of any claims arising from the transmission of a communicable disease (just as the back of a baseball ticket traditionally contains a waiver of liability for physical injuries sustained from a foul ball).

All that said, unless something changes quickly, the live entertainment industry as we know it could become another casualty of this pandemic. As entertainment lawyer Jason P. Baruch of Sendroff & Baruch, LLP put it: theater "is not likely to be economically viable with social-distancing requirements in place that cull audiences by half or more…With the exception of the occasional one-person show, concert or small play, most [shows] simply will not be producible until the theaters can be filled again."

Live theater, with its earliest records in Western history dating to the sixth century BCE, has survived revolution, oppression, and disease; so while there is no doubt live theater will return, it will confront many legal and economic challenges upon reopening. Live theater may never be the same, and the government and venue responses to these questions of liability will shape how live theater survives and determine when it will flourish again.

What Does Washington’s New Non-Compete Law Have in Store for the Tech Industry?

By: Shelly Mittal

The rivalry between Amazon and Google is often on display. One area where this rivalry recently spread its tentacles is the competition for talent. When Google hired Amazon marketing executive Brian Hall in April of this year, Amazon sought to enforce a non-competition agreement against him, causing quite a splash in the industry.

Enforcing non-competition agreements against former employees is not a new trend in the tech industry. Amazon itself has brought a series of lawsuits to enforce such agreements, including one against Philip Moyer, a former Amazon Web Services sales executive who took a job with Google Cloud last year.

Non-competition agreements (often called non-competes) are contracts or clauses in employment agreements that prohibit employees from joining competitors or starting a competing firm for a specified period of time and within a specified geographic region, in order to protect trade secrets, client lists, and other intangible assets. They have always been particularly controversial in the tech industry, which struggles to structure non-competes that balance attracting talent against protecting sensitive information and preventing unfair competition by former employees.

Many states have developed common law, through court decisions, that governs non-competes, while others have enacted statutes. Washington courts, too, formulated a reasonableness standard, under which non-competes were enforced if they were reasonable in scope, geographic reach, and duration, as determined on a case-by-case basis. On the other hand, non-competes are unenforceable in states like California, the heart of the global tech industry. Opponents of non-competes credit this approach with the growth of Silicon Valley, which relied on a liberal flow of employees from big tech companies to startups. Washington passed a new law this year, effective January 1, 2020, that restricts non-competes. While the new law does not go so far as to ban them, it does impose new restrictions. Under the new law, a non-compete will only be enforceable if (a) the employer discloses the terms of the covenant in writing when making an offer or earlier; (b) the employee earns more than $100,000 a year; and (c) the non-compete lasts no longer than 18 months.
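To make the statute's structure concrete, here is a minimal Python sketch encoding the three conditions as described above; the field and function names are invented for illustration, and real enforceability turns on facts and judicial interpretation, not a boolean test.

```python
from dataclasses import dataclass

@dataclass
class NonCompete:
    disclosed_in_writing_by_offer: bool  # terms disclosed no later than the offer
    annual_earnings: float               # employee's yearly compensation in USD
    duration_months: int                 # restricted period after the employee leaves

def enforceable_under_new_wa_law(nc: NonCompete) -> bool:
    """Checklist of the three statutory conditions (illustrative only)."""
    return (
        nc.disclosed_in_writing_by_offer
        and nc.annual_earnings > 100_000
        and nc.duration_months <= 18
    )

# An 18-month covenant for a $138,000 engineer, disclosed with the offer.
print(enforceable_under_new_wa_law(NonCompete(True, 138_000, 18)))  # True
# The same covenant for a $95,000 employee fails the salary threshold.
print(enforceable_under_new_wa_law(NonCompete(True, 95_000, 18)))   # False
```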

So how does the salary threshold affect tech employees?

According to the 2019 Hired report, tech salaries have been on the rise in Seattle, with average pay jumping roughly 10 percent from $125,000 in 2015 to $138,000 in 2018. Based on that average, it is safe to say that most of Washington's tech employees earn above the new law's $100,000 salary threshold. This means two things for the Washington tech industry: first, the salary threshold does not exempt most tech employees from being bound by non-competes; and second, since employers can enforce non-competes against these employees, they will be less likely to use premium incentives like bonuses or stock compensation to restrict employee mobility. So, while cafeteria workers and receptionists (non-tech employees) at big tech companies will be free to leave and start their own ventures, those with tech expertise will, most often, have to wait.

The low salary threshold may also make things more difficult for startups. Because Washington's new salary threshold is low compared to a typical tech salary, startups may struggle to recruit top talent from larger tech companies, which will be able to effectively restrict their employees through non-competes. In other words, Washington's new non-compete statute is an imperfect law for startups: its salary threshold may free some tech employees to take new jobs or go out on their own, but by and large it helps big tech companies restrict their employees.

So is it all bad?

Although a higher salary threshold would have been a better choice for the tech industry, the predictability the new law provides, by defining the "reasonable" standard and taking discretion away from the courts, can help Washington's tech industry continue to grow. It can reduce unquantifiable risk and potential litigation costs for employers and employees alike. Less focus and expenditure on these concerns will likely result in a healthier employment market and could foster industry growth. Therefore, although the new law does not eliminate the enforceability of non-competes for techies in Washington, the present low rate of enforcement combined with the certainty the new law brings is a welcome change.

Peeved with Your Pre-Order? Legal Solutions to Videogame False Advertising

By: Moses Merakov

It's no secret that videogame publishers and their developers often make broad, sweeping claims in an effort to sell their products. Whether that takes the form of misrepresenting a game's graphical fidelity or omitting that it will use predatory microtransactions, many gamers have become disenchanted with the industry and with large triple-A publishers. A consumer who pre-orders a title may receive, upon release, a wildly different product from what was initially promised. Is there any legal recourse for these consumers?

Potentially yes: a consumer can sue for false advertising. Federally, Section 5 of the Federal Trade Commission Act (FTC Act) declares "unfair or deceptive acts or practices in or affecting commerce" illegal, although enforcement of the FTC Act rests with the Commission rather than with private plaintiffs. On the state level, many states have statutes paralleling the FTC Act that do give consumers a claim, or at least allow consumers to pursue common law false advertising claims.

In Washington State, the legislature enacted the Consumer Protection Act (CPA), which similarly codified that "[u]nfair methods of competition and unfair or deceptive acts or practices in the conduct of any trade or commerce are" unlawful (RCW 19.86.020). To establish a claim under the CPA, a plaintiff must prove five elements: (1) an unfair or deceptive act or practice that (2) occurs in trade or commerce and (3) impacts the public interest, and (4) damage to the plaintiff's business or property that was (5) caused by the unfair or deceptive act or practice. Hangman Ridge Training Stables, Inc. v. Safeco Title Ins. Co. (1986). "Failure to satisfy even one of the elements is fatal to a CPA claim." Sorrel v. Eagle Healthcare (2002). While each element carries its own particular criteria and sub-elements, courts generally construe such statutes as liberally as possible in order to protect consumers from conniving sellers. Panag v. Farmers Ins. Co. of Washington (2009).
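The all-or-nothing structure of the Hangman Ridge test maps neatly onto a logical conjunction. The Python sketch below is purely illustrative (the element names and values are invented): in reality each element is a fact-intensive legal inquiry, not a boolean a court would accept at face value.

```python
# The five Hangman Ridge elements as an all-or-nothing checklist.
elements = {
    "unfair_or_deceptive_act": True,
    "in_trade_or_commerce": True,
    "affects_public_interest": True,
    "injury_to_business_or_property": True,
    "causation": False,  # e.g., the loss cannot be tied to the deceptive act
}

# "Failure to satisfy even one of the elements is fatal to a CPA claim."
claim_survives = all(elements.values())
print(claim_survives)  # False: one missing element sinks the whole claim
```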

If the courts and legislature are so friendly to consumers, and the claims are seemingly easy to pursue, why is there such a distinct lack of false advertising cases in the videogame industry? Many customers simply don't feel it is worthwhile to pursue a legal claim because their 60-dollar game turned out not to include the particular features that were promised; the cost of litigation far outweighs the $60. It would likely take the coordinated effort of a law firm amalgamating thousands of consumers into a class action to make a claim against a videogame company profitable.

Additionally, even assuming gamers successfully band together to bring a lawsuit, publishers and their developers are often careful with their marketing phrasing so as to avoid false advertising claims. A notorious example involves the videogame Crash Team Racing Nitro-Fueled. In 2019, publisher Activision and developer Beenox repackaged the 20-year-old game Crash Team Racing, releasing a rehashed version with minor content additions and graphical improvements. Knowing that many gamers are frustrated with modern gaming's extensive use of microtransactions, and hoping to assure fans that the updated version would stay true to its roots, a member of the Nitro-Fueled team claimed in an E3 convention presentation that the entire game would avoid microtransactions. According to Beenox, the game would offer new content for free over the game's lifecycle.

The game eventually released with no microtransactions. Only months later, however, Activision and Beenox introduced microtransactions and changed certain game mechanics to encourage consumers to pay additional money to obtain in-game content faster. Consumers were met with a seemingly different product than the one advertised to them. While this certainly angered many players, a closer look at the language the Nitro-Fueled E3 presenter used reveals that he distinguished microtransactions for cosmetic items from microtransactions for new playable content. While new playable content was guaranteed to be free, cosmetic items were not. Thus, the companies' later addition of microtransactions for cosmetic items likely does not expose them to a false advertising claim.

All in all, while consumers may pursue false advertising claims against deceptive publishers and developers, such claims may prove economically unviable, or difficult to sustain once the marketing language is closely examined.

Is Your Personal Health Still Personal? Privacy Issues With Wearable Tech

By: Shelly Mittal

Who does not love the convenience of instant health data at their fingertips? Like everything else, however, this convenience comes at a price. Wearable technology is any device worn on the body that is equipped with sensors to collect information from both the body and the surrounding environment. With so much insight into our daily steps, calories, sleep patterns, body fat, heart rate, and more, wearables have given a whole new meaning to personal health, and this ability to quantify our health has the potential to radically improve human health and fitness. Consequently, the wearable technology industry is projected to maintain double-digit growth through 2024, which speaks to its acceptance among users. However, security vulnerabilities in wearable health devices pose significant challenges to users' data privacy.

While most engineers focus on extending battery life, creating rich functionality with minimal computational resources, and easing design constraints, the security of these devices often takes a backseat. First, wearables risk unauthorized physical access to data because they often require no user authentication (e.g., a PIN, password, or biometric check), and their limited computational power rules out some of the more sophisticated on-device security mechanisms. Second, wearables tend to connect to our smartphones or tablets wirelessly via Bluetooth, NFC, or Wi-Fi; this need for communication creates another entry point into the device, making it prone to information leakage, and the lack of encryption, in some cases, leaves data in transit insecure. Third, many wearables run their own operating systems and need to be patched and updated to guard against the latest security vulnerabilities.
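To illustrate the data-in-transit point, here is a minimal Python sketch of what device-side encryption could look like, assuming the third-party cryptography package and an invented set of reading fields; a real wearable would provision its key during secure pairing rather than generating one ad hoc as shown here.

```python
import json
from cryptography.fernet import Fernet

# Illustrative only: a real device would receive this key during a
# secure pairing ceremony, never generate and hold it like this.
key = Fernet.generate_key()
cipher = Fernet(key)

# A hypothetical health reading the wearable wants to send to its app.
reading = {"steps": 8421, "heart_rate_bpm": 72, "sleep_hours": 6.5}

# Encrypt and authenticate the payload before it leaves the device
# (e.g., over Bluetooth); an eavesdropper sees only an opaque token.
token = cipher.encrypt(json.dumps(reading).encode())

# The companion app, holding the same key, decrypts and verifies.
recovered = json.loads(cipher.decrypt(token).decode())
assert recovered == reading
```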

These security vulnerabilities, taken together with the regulatory gaps, paint a scary picture for data privacy. The regulatory framework for the wearable technology industry is in flux, with hardly any application of the Food, Drug, and Cosmetic Act (FD&C Act) or the Health Insurance Portability and Accountability Act (HIPAA). Although wearables collect the most intimate health information, the collection and use of this information is not governed by HIPAA, because health data such as step counts, calories, and sleep history is not formally considered Protected Health Information (PHI) unless collected by your doctor or insurance provider. Only health care providers, health plans, and health clearinghouses (referred to as covered entities under HIPAA) are subject to HIPAA's extensive privacy regulations; companies that make wearables and collect health data are not yet among them. So, for as long as the Department of Health and Human Services (the regulatory body under HIPAA) declines to focus its attention on wearables, the privacy of users depends mostly on the privacy policies they accept while setting up their devices.

Businesses are free to draft their own privacy policies for controlling information and data that falls outside the scope of HIPAA. In January 2015, the Federal Trade Commission (FTC), which has relatively more enforcement power over the wearables industry, issued guidance on the privacy and security protections that should accompany the Internet of Things (IoT), including wearables. The guidance also recommended a disciplined and structured approach to the design, development, and management of these devices and the data they produce.

These privacy policies, unilaterally drafted by companies, are often vague and include a lot of "mays" to give companies flexibility. Such ambiguous terms leave enough wiggle room to use health data for the companies' own ends. It is therefore more important than ever not to skip the privacy policy page, but to give it a thorough read before accepting. Users should know whether their data is actually being encrypted; whether the company periodically reviews and monitors access to it; and who owns the data and how they can gain more control over it. Hence, the solution to present privacy concerns lies in applying the FTC's Fair Information Practice Principles of notice, choice, and consent to this self-regulating space of wearables.

Convicting the Innocent: The Dangers of Facial Recognition Software

By: Alexander Coplan

Big Brother is always watching, but what if he mistakes you for someone else? The proliferation of facial recognition software throughout the country has provided law enforcement with a valuable tool. This software may help locate missing persons or identify the deceased; it can also positively identify people in criminal investigations. However, left largely unregulated, law enforcement's use of this kind of software has generated significant concern about its potentially devastating effect on American social life and privacy. Soon, these concerns will have to be addressed.

Background

Facial recognition technologies affect people of color disproportionately. For example, African Americans are more likely to be stopped by law enforcement and subjected to facial recognition searches than individuals of other ethnicities. This creates major equity issues when facial recognition software struggles to accurately identify people of color.

In a 2018 study, researchers from MIT and Microsoft analyzed the accuracy of facial recognition programs from three prominent providers: Microsoft, IBM, and Face++. The study found alarmingly high error rates for women and individuals with darker complexions, reaching up to 34.7% for those who fall within both categories.
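The study's core move, disaggregating a single accuracy number by demographic group, is easy to see in a toy computation. The Python sketch below uses invented data, not the study's, purely to show how an aggregate figure can mask subgroup disparities.

```python
from collections import defaultdict

# Hypothetical benchmark results: each record pairs a demographic
# group with whether the model misclassified that face.
results = [
    ("lighter_male", False), ("lighter_male", False),
    ("lighter_female", False), ("lighter_female", True),
    ("darker_male", True), ("darker_male", False),
    ("darker_female", True), ("darker_female", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, erred in results:
    totals[group] += 1
    errors[group] += erred  # True counts as 1

# The aggregate error rate (here 50%) hides the disparity that
# per-group rates make visible.
for group in totals:
    print(f"{group}: {errors[group] / totals[group]:.0%} error rate")
```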

On the federal level, the FBI maintains an internal unit called Facial Analysis, Comparison and Evaluation (FACE) that uses facial recognition software. In developing its photo database, the FBI receives photos of American citizens from state agencies. For example, both the FBI and ICE use state Department of Motor Vehicles records to access millions of Americans' photos without their consent. As of last year, 21 states cooperate with federal law enforcement agencies, allowing them to scan driver's license photos. The figure below shows cooperation by state.

[Figure: map of state cooperation with federal facial recognition searches. Source: FBI; Map Resources (map). GAO-19-579T.]

Federal agencies are invading our privacy by obtaining Americans' information without their consent. Meanwhile, even leading tech companies are struggling to develop unbiased algorithms for correctly matching suspects. These are just a few of the issues surrounding facial recognition programs.

Facial Recognition in Practice

Few courts have ruled on law enforcement's use of facial recognition software, and this lack of guidance may have massive impacts on civil liberties. These issues were evident in Lynch v. Florida, a 2018 case in which Willie Allen Lynch was accused of selling $50 worth of crack cocaine and was subsequently convicted and sentenced to eight years in prison. Florida law enforcement had taken photos of a man selling drugs but was unable to identify him. To identify the suspect, authorities turned to the FACE system, the same database used by the FBI, and later alleged it was Lynch in the photos.

Florida law enforcement agencies have a robust facial recognition system and even advised the FBI when the Bureau was constructing its own programs. Despite this wealth of data, however, Florida's reliance on facial recognition software has raised issues. In Lynch's case, for example, the software analyst admitted she did not know how the program worked or how it measured a positive identification. Nevertheless, the analyst forwarded Lynch's photo, along with four others FACE produced as possible matches, to law enforcement. The State used this information, Lynch's criminal history, and eyewitness testimony to convict him of drug distribution.

Furthermore, law enforcement was not required to disclose that facial recognition software had been used in the case, and Lynch was not allowed to present the evidence at trial. In his defense, Lynch claimed he had been misidentified and asserted that the State's failure to disclose the photos of the other suspects constituted a Brady violation. Under the Supreme Court's decision in Brady v. Maryland, prosecutors are required to turn over all exculpatory evidence to the defense; failure to do so constitutes a "Brady violation." In a motion for rehearing, Lynch's public defender wrote, "[i]f any of the photographs of the other potential matches from the facial recognition program resembles the drug seller or Appellant then clearly there was a Brady/discovery violation and Appellant should be granted a new trial."

Despite Lynch's claim that this type of violation occurred, the court found otherwise, holding that the defense could not show that the other photos the database returned resembled him. The court also cited the defense's decision not to call the analyst who evaluated the photos, because Lynch's attorney stated on the record that the analyst's testimony would only corroborate the officers' testimony. As a result, Lynch's motion was denied.

Without congressional or state guidance, law enforcement agencies are left to decide for themselves how and when to use facial recognition software. As in Lynch’s case, this lack of guidance can have a major impact on an individual’s ability to defend themselves.

Legislative Approach

Congress' failure to implement national standards for government use of facial recognition software allows for uneven application. Fortunately, Washington's state government addressed some of these issues by passing Senate Bill 6280, which regulates the government's use of the technology. Going into effect in July 2021, the bill requires Washington's law enforcement agencies to report on their use of facial recognition technology and to routinely test the software for fairness and accuracy. Additionally, addressing an issue present in Lynch, the bill requires the state to disclose its use of facial recognition software on a criminal defendant prior to trial.

Further, companies like Amazon have agreed to suspend selling facial recognition software to police for a year in hopes that Congress will provide federal regulation. Others have followed suit, with Microsoft refusing to sell its proprietary facial recognition program to police and IBM cancelling its program entirely.

Conclusion

Facial recognition software is widely used throughout the country, and one in two American adults is already in a law enforcement facial recognition network. States like Washington have taken appropriate steps to secure the privacy of their citizens, but the need for national standards is apparent. The time for congressional regulation is now. Without government action, every American's equity, privacy, and liberty are at stake.