Alexa: Are You Going to Testify Against Me?

By: Melissa Torres

Life seems pretty great in a world where we can turn off the lights, play music, and close the blinds simply by speaking it into existence. But what happens when your conversations or household noises are used against you in a criminal investigation?

Smart speakers, such as Google Home and Amazon Alexa, are marketed as great tech gifts and the perfect addition to any home. A smart speaker is a speaker controlled by your voice through a “virtual assistant.” It can answer questions, perform various automated tasks, and control other compatible smart devices once you say its “wake word.”

According to Amazon.com, in order for a device to start recording, the user has to wake the device by saying the default wake word, “Alexa.” The website states, “You’ll always know when Alexa is recording and sending your request to Amazon’s secure cloud because a blue light indicator will appear or an audio tone will sound on your Echo device.” Unless the wake word is used, the device does not listen to any other part of your conversations, thanks to a built-in technology called “keyword spotting,” according to Amazon.
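Amazon and Google do not publish their wake-word code, but the general shape of keyword spotting is well understood: a small on-device model continuously scores short frames of audio, and nothing is transmitted until the wake-word score crosses a threshold. The Python sketch below is only an illustration of that pattern; every function, value, and name in it is hypothetical, not Amazon’s or Google’s actual implementation.

```python
import time

# Illustrative values only; real devices tune these empirically.
WAKE_WORD_THRESHOLD = 0.9  # confidence required before any audio leaves the device
FRAME_SECONDS = 0.5        # devices score short, rolling windows of audio

def record_frame(duration):
    """Stub: capture `duration` seconds of microphone audio into a local buffer."""
    time.sleep(duration)
    return b""  # raw audio bytes; at this stage nothing is transmitted

def wake_word_score(frame):
    """Stub: a small on-device model estimates how likely the frame
    contains the wake word. Only this score is computed; the audio
    stays in a short local buffer and is continuously overwritten."""
    return 0.0

def stream_to_cloud(frame):
    """Stub: open a connection and send audio for full speech recognition.
    This is the point where the blue light or audio tone would fire."""
    print("blue light on: streaming request to the cloud")

while True:
    frame = record_frame(FRAME_SECONDS)
    if wake_word_score(frame) >= WAKE_WORD_THRESHOLD:
        stream_to_cloud(frame)  # everything before this line stays on-device
```

The privacy question in the stories that follow turns on that final `if` statement: a false positive or a mis-tuned threshold is the difference between audio in a local buffer and a recording in the cloud.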

Similarly, Google states, “Google Assistant is designed to wait in standby mode until it detects an activation, like when it hears ‘Hey Google.’ The status indicator on your device will let you know when Google Assistant is activated. When in standby mode, it won’t send what you’re saying to Google servers or anyone else.” 

Consumers consent to being recorded when they willingly enter a contract with these smart devices by clicking “I agree to the terms and conditions.” However, most people assume this consent applies only to recordings triggered by the “wake word.” Despite assurances from tech giants that these devices do not record without being prompted, many reports suggest otherwise. And in recent years, these smart devices have garnered attention after being cast as star witnesses in murder investigations.

In October 2022, someone fatally shot two researchers before setting fire to the apartment they were found in. According to the report, Kansas police believe the killer was inside the apartment with the duo for several hours, including before and after their deaths. Investigators found an Amazon Alexa device inside the apartment and applied for a search warrant for access to the device’s cloud storage, hoping it may have recorded clues as to who was responsible for the murders. If the police obtain relevant information, they may be able to use it in court, depending on how this evidence is classified.

Under the Federal Rules of Evidence, all relevant evidence is admissible unless another rule specifies otherwise. In particular, statements that constitute hearsay are not admissible unless an exception applies. Hearsay is an out-of-court statement offered in court to prove the truth of the matter asserted. Although these devices technically do produce statements, courts have held that a statement is something uttered by a person, not a machine. There is, however, an important distinction between computer-stored and computer-generated data. Computer-stored data that was entered by a human has the potential to be hearsay, while data a computer generates without human assistance or input is not considered hearsay. How these statements will be classified, and whether they will be permitted in court, is up to the judge.

This is not the first time police have requested data from a smart speaker during a murder investigation. In 2019, Florida police obtained search warrants for an Amazon Echo device, believing it may have captured crucial information about an alleged argument at a man’s home that ended in his girlfriend’s death. In 2017, a New Hampshire judge ordered Amazon to turn over two days of Amazon Echo recordings in a case where two women were murdered in their home. In those cases, the parties handed over the data held on the devices without resistance. In 2015, however, Amazon pushed back when Arkansas authorities requested data in a case involving a man found dead in a hot tub. Amazon explained that while it did not intend to obstruct the investigation, it also sought to protect its consumers’ First Amendment rights.

According to the complaint, Amazon’s legal team wrote, “At the heart of that First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” later explaining that the protections for Amazon Alexa were twofold: “The responses may contain expressive material, such as a podcast, an audiobook, or music requested by the user. Second, the response itself constitutes Amazon’s First Amendment-protected speech.” Ultimately, the Arkansas court never decided the issue, as the implicated individual offered up the information himself.

Thus, a question remains unanswered: exactly how much privacy can we reasonably expect when installing a smart speaker? As previously mentioned, these smart speakers have been known to activate without the “wake word,” potentially capturing damning conversations. Without a specified legal standard, there is currently not much consumers can do to keep their private information from being shared, fueling the worry that these devices can be used against them. Tech companies like Amazon and Google suggest going into the settings and turning off the microphone when you aren’t using the device, and users also have the option to review and delete recordings, but both measures require trusting the company to actually honor them. The only sure way to protect yourself from these devices is simply not purchasing them. If you can’t bring yourself to do that, be sure to unplug the devices when you’re not using them. Otherwise, these smart speakers may end up being used as evidence against you in court.

Copyright Law (Taylor’s Version)

By: Melissa Torres

Are you ready for it? Taylor Swift is reportedly set to kick off 2023 with the release of a new album, Speak Now (Taylor’s Version). Though she just released the fastest-selling album of 2022, Midnights, fans have been speculating for quite a while about which of her early albums she’ll rerecord next. Reports state, “Taylor has quietly been in the studio working on remaking both Speak Now and 1989. All details are still being ironed out but Speak Now (Taylor’s Version) should be out within the next couple of months, before she kicks off her Eras world tour.”

But why is Taylor Swift rerecording old albums?  

While it may seem obvious to the general public that the writer, composer, and performer of a song would own the recording of that song, the music industry functions on a different set of rules formed by contracts and copyrights. When a new artist signs with a record label, the two form a contract that specifies the intellectual property rights in the works created.

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. Common types of protected work include photographs, illustrations, books, and music. These works are fixed when they are captured in a “sufficiently permanent medium such that the work can be perceived, reproduced, or communicated for more than a short time.” U.S. copyright law grants copyright owners a set of exclusive rights, along with the right to authorize others to exercise those rights, subject to certain statutory limitations.

Typically, in the music industry, copyright in a song is divided between the musical composition and the sound recording. The musical composition refers to the lyrics of a song, the music itself, or both. The sound recording, also known as the master, is the recorded performance of the song. In practice, more often than not, an artist’s record label owns the master of a song.

In Swift’s case, she signed with the record label Big Machine Records in 2005 under a contract stipulating, among other things, that Big Machine would own the rights to her sound recordings in perpetuity. After the deal ended in 2018, Swift moved on and signed with a different label. The recordings she made over those 13 years stayed with Big Machine, and in 2019 the label sold the rights to them to Scooter Braun for $300 million. Swift alleges she was never given the opportunity to purchase these rights herself. Despite writing and performing over 82 songs, she holds no rights to those recordings and earns nothing from the masters when they are played. The singer therefore embarked on a mission to rerecord her first six records in order to own both the musical composition and the master of each new recording.

Because Swift wrote every song released on those six albums and therefore owns the musical composition copyrights, she retains the “sync rights” to her music. A synchronization license is needed for a song to be reproduced in a television program, film, video, commercial, radio spot, or even a phone message. If a specific recorded version of a composition is used for such a purpose, permission from the owner of the master use license, typically the record company, must also be obtained. As a result, every time these songs are used for commercial purposes, the owner of the masters earns a profit.

By rerecording her old hits, Swift will hold both the master and composition rights to the new versions. To be clear, the original masters of these songs still exist, but by encouraging fans to stream the newer recordings, Swift can reclaim income that would otherwise have gone to songs owned by her former label.

What can we learn from Swift?

Swift’s case offers creators several important lessons about intellectual property rights. Situations like hers, while not usually on the same scale, are relatively common in the entertainment industry. Prince, Kesha, and The Beatles are just some of the many artists who have fought for ownership of their music. Artists need to be careful when entering contracts in order to protect their intellectual property rights. Intellectual property is valuable, and it is crucial that artists recognize the significance of protecting their rights. Without intellectual property protection, artists would not be fully compensated for their creations; their incentive to produce new work would decline, and cultural innovation would suffer.

Moreover, creators should never rush to sign a contract before consulting a legal professional and fully understanding the future implications of each clause, as those clauses can have enormous ramifications. The document Swift signed in 2005 is still affecting not only her life but the music industry today. Despite the legal hurdles Swift has faced, she has ultimately been able to profit from recreating her old music. Her strong fan base has rallied behind her by promoting the rerecorded music, helping her continue her career as one of the most successful female artists of the decade.

Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial intelligence chatbot on Twitter that became smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay to force the chatbot to parrot racist and sexist content. Within hours of going online, Tay was tweeting racial slurs, support for Gamergate, and other deeply offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developer of ChatGPT. Unlike Tay, the Bing search AI is designed as a powerful assistant that summarizes relevant articles or suggests related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets are focused on whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant raises new questions about the liability its recommendations could create.

Content moderation is not an easy task technically. While search engines generate suggestions based on statistics, search engine engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not cover more nefariously implicit searches. For example, a search engine would likely limit or ban explicit searches for child pornography, but a user might type, say, “children in swimsuits” to get around those parameters, simultaneously influencing the overall algorithm. While the influence may not be as direct or extensive as with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, tainting the search experience for the originally intended audience. In this example, search results tainted by a few perverted users could affect the results shown to a parent looking to buy an actual swimsuit for their child, surfacing photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children, likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users; it is a complex relationship between both parties that online platforms and their engineers must consider.
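To make the engineering problem concrete, here is a toy Python sketch of the layered approach described above. It is not any search engine’s real pipeline; every term, name, and threshold is invented for illustration. A literal blocklist catches explicit queries, while implicit ones can only be caught, imperfectly, by a trained classifier:

```python
# Toy moderation pipeline; all terms, names, and thresholds are illustrative.
BLOCKED_TERMS = {"forbidden phrase one", "forbidden phrase two"}  # placeholders

def blocklist_filter(query: str) -> bool:
    """A literal blocklist: trivially effective against explicit queries,
    trivially evaded by implicit ones."""
    q = query.lower()
    return any(term in q for term in BLOCKED_TERMS)

def classifier_score(query: str) -> float:
    """Stub for a trained model estimating how likely a query seeks
    prohibited content even when no blocked term appears."""
    return 0.0

def moderate(query: str) -> str:
    if blocklist_filter(query):
        return "blocked"
    if classifier_score(query) > 0.8:  # threshold is a policy choice
        return "held for human review"
    return "allowed"  # allowed queries still feed back into ranking signals
```

The last comment is the crux: because allowed queries feed back into ranking, a coordinated group of bad actors can shift results for everyone else, which is exactly what the Bing incident above illustrates.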

Furthermore, the humanization of a recommendation service that alters how third-party content is presented may create further liability for the online platform. The University of Washington’s own Professor Eric Schnapper is involved in Gonzalez v. Google, a case examining whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of third-party content. Section 230 currently shields most online platforms that qualify as an “interactive computer service” from being treated as a “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs are challenging Google on the grounds that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS and ultimately contributed to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. Whatever the result of this lawsuit, the interactive nature of search engine assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is presented has also appeared in other administrative and legislative contexts, such as the SEC’s 2021 inquiry into the gamification of stock trading and California’s restrictions on content designs for websites intended for children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the tech company bear more responsibility for the results of a seemingly sentient service, or would more responsibility fall on the user’s responses?

From my perspective, it depends on the role search engines give their AI-powered assistants. As long as these assistants merely answer questions and provide pertinent, related recommendations without taking demonstrative steps to guide the conversation, their suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the broader environment may influence AI and its underlying algorithms, as seen with Microsoft’s Twitter chatbot Tay and some of Bing’s controversial suggestions. Covertly nefarious queries should be closely monitored so they do not influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising issues of how to properly moderate content and how to display it to users.

Litigation Funding in IP: Caveat Emptor

By: Nicholas Lipperd

Gambling seems to be an American tradition, from prop bets on the Super Bowl to riding stocks through bear and bull markets. The highest-stakes gambling is done by investment firms, some of which are finding profitable bets in civil court cases. The process of Third-Party Litigation Funding (“TPLF”) is simple enough on its face: a funder pays a non-recourse sum either to the client directly or to the law firm representing the client. In exchange, subject to the agreed-upon terms, the funder receives a portion of any damages awarded. TPLF is thus no more than a third party placing a bet on the client’s case, somewhat like the choice a law firm makes when taking a case on contingency. With $2.8 billion invested in the practice in 2021, TPLF seems to be a betting scheme that is paying off.
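The deal structure is easiest to see with numbers. The sketch below is a toy model of a non-recourse arrangement, not any funder’s actual terms; real agreements vary widely (percentages, multiples of invested capital, or both), so the share and amounts here are invented for illustration:

```python
# Toy non-recourse TPLF payout model; all figures are hypothetical.
def funder_payout(damages_awarded: float,
                  amount_funded: float = 2_000_000,
                  funder_share: float = 0.35) -> float:
    """If the case loses, the funder absorbs the loss (non-recourse).
    If it wins, this sketch assumes the funder takes the greater of its
    advance or an agreed share of the award."""
    if damages_awarded <= 0:
        return 0.0
    return max(amount_funded, funder_share * damages_awarded)

print(funder_payout(0))           # 0.0 -> loss: the client owes nothing
print(funder_payout(20_000_000))  # 7,000,000.0 -> win: the funder's bet pays 3.5x
```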

TPLF is expanding from business litigation into patent litigation. Since the creation of the Federal Circuit, damages in patent infringement cases have skyrocketed, drawing increased attention to patent cases, and TPLF is no exception. The emergence of third-party funding in patent litigation could allow individuals to assert patent rights who previously could not have afforded to do so. Unlike agreements made directly with law firms, third-party funding is not governed by the American Bar Association’s (“ABA”) Rules of Professional Conduct for lawyers. While this creates some concern in any TPLF case, the lack of protection in patent cases is unique: the funder can obtain the rights to the patent(s) from the client.

As previously mentioned, a barrier many patent owners face when attempting to assert their rights is the cost of litigation. While patent cases rarely proceed to trial, they typically take three to five years to complete. Intrinsically linked to this timeline is the price tag: infringement cases with over $25 million in damages at risk run a median of about $4 million in litigation costs, and for cases with less than $1 million at risk, the median litigation cost sits at just under $1 million. A simple look at the risk-versus-reward tradeoff paints a discouraging picture for plaintiffs. Regardless of the expected damages, the cost of litigation is an undeniably large factor, and one that leads many cases to settle within a year rather than be tried on their merits.
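A back-of-the-envelope expected-value calculation, using the median cost figures above and a purely hypothetical 50/50 chance of winning, shows why the small case settles:

```python
# Expected net recovery, ignoring time value, fee awards, and appeal risk.
# Costs are the median figures cited above; the win probability is hypothetical.
def expected_net(damages: float, litigation_cost: float, p_win: float) -> float:
    return p_win * damages - litigation_cost

print(expected_net(25_000_000, 4_000_000, 0.5))  #  8,500,000.0: worth litigating
print(expected_net(1_000_000, 950_000, 0.5))     #   -450,000.0: settle or walk away
```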

When the client cannot afford the price of litigation yet intends to assert their patent rights, TPLF creates an opportunity to pursue the case. Plaintiff-side funding seems like a simple win-win for the client, yet as with anything, the devil is in the details. A funder is naturally motivated to see a return on its investment, so a client evaluating the deal must look past the surface. Not all funding is arranged on a non-recourse model, leaving clients no better off if the case becomes a loser than if they had chosen a billable-hour arrangement with the firm. TPLF often does not cover awards of attorney’s fees, so clients may be on the hook for more than they realize. Litigation funding arrangements also often come with “off-ramps” the investor can take should the case take an unexpected turn or the funder stop believing in its merits. This means the funder may be able to stop funding the client at certain stages of litigation, leaving the client or the firm without funding for the remaining stages.

TPLF has often been described as the “wild west” of case funding. In part, this is due to the lack of regulation surrounding the practice; it is also due to the fact that third-party funders are not constrained by the ABA’s Rules of Professional Conduct the way attorneys are.

Attorneys may not abdicate their Rule 1.1 duty of competency, yet TPLF creates tension with this duty. A third-party funder has made an investment in a case, and like any diligent investor, the funder likely wants to track the case closely to ensure it continues to align with the funder’s financial interest. Should the funder attempt to exert control over the case to protect that interest, it is up to the lawyer to resist. This may leave the client in trouble: should the disagreement grow large enough, the client could see their funding removed as the funder exercises a contractual “off-ramp.”

Perhaps the most implicated ABA rule in TPLF structures is Rule 1.6, the duty of confidentiality. This rule, and the related Federal Rule of Evidence 502 governing attorney-client privilege, create friction with TPLF arrangements. Funders want as much information as possible before and during the funding of a case so they can gauge the strength of their current and future investments. While all disclosures must be clearly sanctioned by the client, when do these disclosures waive attorney-client privilege? The majority of lower courts hold that communications with a third-party funder about the case fall under work-product protection, a question the Supreme Court has not answered and one the Federal Circuit danced around. (See In re Nimitz Techs. LLC (Fed. Cir. 2022), denying a writ of mandamus that sought to prevent the district court from reviewing litigation funding materials in camera.)

While ABA Rule 1.5 prevents firms from charging unreasonable fees, including contingency rates, it does not prevent a third-party funder from doing so. A funder can bargain with a client for whatever portion of the damages award it likes, and it has much more bargaining power than the client does. That bargaining power may be used for more than simply leveraging a larger contingency percentage than a firm could charge.

While damages in patent cases are high enough to attract TPLF, it is more than money that may draw third-party funders to patent litigation. Often, the patent itself is just as valuable as the damages award. Rule 1.8(i) prevents attorneys from acquiring a proprietary interest in the cause of action or the subject matter of litigation. Again, this does not apply to third-party funders, who are free to include any and all patent rights in the contract to fund a client’s case.

Patent owners may face a catch-22 in this scenario. If they cannot afford to stake the cost of litigation and no firm sees value in taking the case on contingency, owners may turn to TPLF. Yet if funders see more value in the patents than in the potential damages, patent owners must make a hard choice. Obtaining funding gives a patent owner a one-time shot at a large damages award; but win or lose, the owner will no longer own the patent and will lose any potential revenue stream it would have produced. Then again, if owners cannot assert their patent rights in court, what value do their patents hold?

Patent owners have promoted the progress of science and useful arts in the United States, and in exchange they receive limited monopolies in the form of their patents. Those monopolies are not so easily maintained and defended, though. Patent owners must carefully consider their options when looking to assert their rights. Patent litigation is expensive and lengthy, and while having TPLF cover all litigation costs may seem like a sterling option, owners must dig deeper to fully understand the trade-offs. Could they lose funding support halfway through litigation? Would funding the case be worth giving up their patent rights? TPLF is still newly emerging in patent litigation, but for potential clients the message is already clear: in deciding to take on a third-party funder, let the buyer beware.

“Hey Chatbot, Who Owns Your Words?”: A Look into ChatGPT and Issues of Authorship

By: Zachary Finn

Unless you have been living under a rock, you have heard of ChatGPT, which has taken the world by storm since last December. ChatGPT (Chat Generative Pre-trained Transformer) is an AI-powered chatbot that uses adaptive, human-like responses to answer questions, converse, write stories, and engage with input transmitted by its user. Chatbots are becoming increasingly popular in many industries and can be found on the web, social media platforms, messaging apps, and other digital services. The world of artificial intelligence sits on the precipice of innovation and exponential technological discovery. Because of this, the law has lagged in catching up with and interpreting the critical issues that have emerged from chatbots like ChatGPT. One issue that has arisen at the intersection of AI chatbot technology and law is that of copyright and intellectual property in a chatbot’s generated work. The only thing that may be predictable about the copyright in an AI’s work is that (sadly) ChatGPT likely does not own its labor.

To understand how ChatGPT figures into the realm of copyright and intellectual property, it is important to first understand the foundations and algorithms that give chatbots life. A chatbot is an artificial intelligence program designed to simulate conversation with human users. OpenAI developed ChatGPT to converse with users, typically through text- or voice-based interactions. Chatbots are used in a variety of ways, such as customer service, conversation, information gathering, and language learning. ChatGPT is programmed to understand user inputs and respond with appropriate and relevant information. These inputs are sent by human users, and a chatbot’s response is often based on machine learning algorithms or on a predefined script. Machine learning algorithms are the methods by which an AI system functions, generally predicting output values from given input data. In lay terms, the system learns from previous human inputs to generate more accurate responses.

The ChatGPT process goes as follows (a minimal code sketch appears after the list):

1. A human individual inputs data, such as a question or statement: “What were George Washington’s teeth made of?”

2. The chatbot reads the input and uses its machine learning algorithms and processing power to generate a response.

3. ChatGPT’s response is relayed back to the user in a discussion-like manner: “Contrary to popular belief, Washington’s dentures were not made of wood, but rather a combination of materials that were common for dentures at the time, including human and animal teeth, ivory, and metal springs. Some of Washington’s dentures also reportedly included teeth from his own slaves” (This response was generated by my personal inquiry with ChatGPT).
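For the technically curious, the three steps above reduce to a single API call. The sketch below uses OpenAI’s published Python SDK; the model name is illustrative, and an OPENAI_API_KEY environment variable is assumed:

```python
# Minimal sketch of steps 1-3 using OpenAI's Python SDK (openai >= 1.0).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: the human user inputs data
question = "What were George Washington's teeth made of?"

# Step 2: the model generates a response from patterns learned in training
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)

# Step 3: the response is relayed back in a discussion-like manner
print(response.choices[0].message.content)
```

Notice that all of the generative work happens inside that one `create` call, on OpenAI’s servers, which is precisely why the ownership question below is so slippery.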

So, who ultimately owns content produced by ChatGPT and other AI platforms? Is it the human user? OpenAI or the system developers? Or does artificial intelligence have its own property rights?

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. This is codified in the Copyright Act of 1976, which provides the framework for copyright law. As to authorship, anyone who creates an original fixed work, like taking a photograph, writing a blog post, or even creating software, becomes the author and owner of that work. Corporations and people other than a work’s creator can also be owners, through co-ownership or when a work is made for hire (a doctrine under which works created by an employee within the scope of employment are owned by the employer). Ownership can also be transferred by contract.

In a recent decision, the Ninth Circuit held that for a work to be protected by copyright, it must be the product of creative authorship by a human author. In Naruto v. Slater, where a monkey ran off with an individual’s camera and took a plethora of selfies, the court concluded that the monkey had no rights in the selfies because copyright does not extend to animals or other nonhumans. Similarly, § 313.2 of the Compendium of U.S. Copyright Office Practices states that the U.S. Copyright Office will not register works produced by nature, animals, the divine, or the supernatural. In the case of AI, a court would likely apply this rule, along with any precedent dealing with similar fact patterns involving computer-generated outputs.

Absent human authorship, a work is not entitled to copyright protection. Therefore, AI-created works, like the output manufactured by ChatGPT, will plausibly be considered part of the public domain upon creation. If not, they will likely be seen as derivative works of the information on which the AI based its creation. A derivative work is “a work based on or derived from one or more already existing works.” This raises a new issue: whether the materials an AI uses are derived from algorithms created by companies like OpenAI or from the users who influence a bot’s generated response, like someone investigating George Washington’s teeth. OpenAI, for its part, addresses the ownership of content produced by ChatGPT contractually, through its terms and agreements.

However, absent a contract allocating authorship rights, the law has yet to address the intellectual property rights in works produced by chatbots. One wonders when an issue like this will present itself to a court for systemization into law, and whether, when that time comes, AI chatbots will have the conversational skill and intellect to argue for ownership of their words.