(A.I.) Drake, The Weeknd, and the Future of Music

By: Melissa Torres

A new song titled “Heart on My Sleeve” went viral this month before being taken down by streaming services. The song racked up 600,000 Spotify streams, 275,000 YouTube views, and 15 million TikTok views in the two weeks it was available. 

Created by an anonymous TikTok user, @ghostwriter977, the song used generative AI to mimic the voices of Drake and The Weeknd. It also featured a signature tagline from music producer Metro Boomin.

Generative AI has gained popularity for its ability to produce realistic images, audio, and text. However, concerns have been raised about its potential to harm artists, particularly in the music industry.

Universal Music Group (UMG) caught wind of the song and had the original version removed from streaming platforms, citing copyright infringement.

UMG, the label representing these artists, claims that the Metro Boomin producer tag at the beginning of the song is an unauthorized sample. YouTube spokesperson Jack Malon says, “We removed the video after receiving a valid copyright notification for a sample included in the video. Whether or not the video was generated using artificial intelligence does not impact our legal responsibility to provide a pathway for rights holders to remove content that allegedly infringes their copyrighted expression.”

While UMG was able to remove the song based on the unauthorized sample of the producer tag, the legal questions surrounding AI-generated voices remain unanswered.

In “Heart on My Sleeve,” it is unclear exactly which elements of the song were created by the TikTok user. While the lyrics, instrumental beat, and melody may have been created by the individual, the vocals were generated by AI. This creates a legal puzzle: the vocals sound like Drake and The Weeknd, but they are not a direct copy of any existing recording.

These issues may be addressed by the courts for the first time, as initial lawsuits involving these technologies have been filed. In January, Andersen et al. filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt raising copyright infringement claims. The complaint asserts that the defendants directly infringed the plaintiffs’ copyrights by using the plaintiffs’ works to train their models and by creating unauthorized derivative works and reproductions of the plaintiffs’ works in connection with the images generated using these tools.

While music labels argue that a license is required because the AI’s output is based on preexisting musical works, proponents of AI maintain that using such data falls under the fair use exception in copyright law. Applying the four fair use factors, AI advocates claim the resulting works are transformative, are not substantially similar to the originals, and have no impact on the market for the original musical works.

As of now, there are no regulations governing what training data AI can and cannot use. Last March, the US Copyright Office released new guidance on how to register literary, musical, and artistic works made with AI. The guidance states that copyright will be determined on a case-by-case basis, depending on how the AI tool operates and how it was used to create the final work.

In a further attempt to protect artists, UMG urged streaming services to block AI services from using the music on their platforms to train their algorithms. UMG claims that “the training of generative AI using our artists’ music…represents both a breach of our agreements and a violation of copyright law… as well as the availability of infringing content created with generative AI on DSPs…”

Moreover, the Entertainment Industry Coalition announced the Human Artistry Campaign, which hopes to ensure AI technologies are developed and used in ways that support, rather than replace, human culture and artistry. Along with the campaign, the group outlined principles for AI best practices, emphasizing respect for artists, their work, and their personas; transparency; and adherence to existing law, including copyright and intellectual property.

Regardless, numerous AI-generated covers have gone viral on social media, including Beyoncé’s “Cuff It” featuring Rihanna’s vocals and the Plain White T’s’ “Hey There Delilah” featuring Kanye West’s vocals. More recently, the musician Grimes voiced her support for AI-generated music, tweeting that she would split royalties 50/50 on any successful AI-generated song that uses her voice. “Feel free to use my voice without penalty,” she tweeted. “I think it’s cool to be fused [with] a machine and I like the idea of open sourcing all art and killing copyright.”

As UMG states, the controversy “begs the question as to which side of history all stakeholders in the music ecosystem want to be on: the side of artists, fans and human creative expression, or on the side of deep fakes, fraud and denying artists their due compensation.”

While the music industry and lawyers scramble to address the concerns presented by generative AI, it is clear that “this is just the beginning,” as @ghostwriter977 ominously noted under the original TikTok posting of the song.

Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an artificial-intelligence chatbot on Twitter that became smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay, forcing the chatbot to parrot them back. Within hours of going online, Tay was tweeting racial slurs, support for Gamergate, and other deeply offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing AI is designed as a highly capable assistant that summarizes relevant articles or suggests related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets and platforms are focused on whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant raises new questions about the liability its recommendations could create.
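To see why Tay failed so quickly, consider a deliberately oversimplified sketch of a bot that learns directly from raw user messages. This is purely illustrative; Tay’s actual architecture was far more sophisticated, and the class and names below are hypothetical. The failure mode, however, is the same: anything users feed the bot becomes candidate output for everyone else.

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by storing user messages and replaying them."""

    def __init__(self) -> None:
        self.learned: list[str] = []

    def observe(self, message: str) -> None:
        # No vetting step: a coordinated group can flood this list
        # with offensive content within hours.
        self.learned.append(message)

    def reply(self) -> str:
        # Anything ever observed is fair game as output for any user.
        return random.choice(self.learned) if self.learned else "Hello!"

bot = ParrotBot()
bot.observe("some coordinated offensive message")  # attacker input
print(bot.reply())  # the bot parrots attacker input back to everyone
```

Without a moderation layer between what the bot ingests and what it emits, a small, coordinated group controls the output distribution for the entire audience.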

Content moderation is not an easy task, technically. While search engines generate suggestions based on statistics, search engine engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not catch more implicit, nefarious searches. For example, a search engine would likely limit or ban explicit searches for child pornography, but a user might type, say, “children in swimsuits” to get around those parameters, simultaneously influencing the overall algorithm. While the influence may not be as direct or extensive as it was with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, tainting the search experience for the intended audience. In this example, results skewed by a few perverse users could surface photos with inappropriate poses for a parent simply looking to buy a swimsuit for their child. Around five years ago, Bing was criticized for suggesting racist and provocative images of children, likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users; it is a complex relationship between both that online platforms and their engineers must consider.
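The two failure modes described above can be made concrete with a minimal sketch: a keyword blocklist that implicit queries evade, and an engagement-driven ranker that folds user clicks back into what everyone else sees. This is a hypothetical illustration, not any real search engine’s implementation; the terms and function names are invented for the example.

```python
BLOCKED_TERMS = {"bannedterm1", "bannedterm2"}  # stand-in for a real blocklist

def query_allowed(query: str) -> bool:
    """Naive moderation: reject a query only if it contains a banned term."""
    return set(query.lower().split()).isdisjoint(BLOCKED_TERMS)

# An implicit query contains no banned term, so the filter never fires.
assert query_allowed("children in swimsuits")

# Engagement feedback: every click nudges future rankings for all users.
clicks: dict[tuple[str, str], int] = {}

def record_click(query: str, result_id: str) -> None:
    key = (query, result_id)
    clicks[key] = clicks.get(key, 0) + 1

def rank(query: str, candidates: list[str]) -> list[str]:
    """Order results by accumulated clicks: a small group of bad actors
    clicking inappropriate results can reorder what everyone else sees."""
    return sorted(candidates, key=lambda r: clicks.get((query, r), 0), reverse=True)
```

The blocklist and the ranker are each individually reasonable; the harm emerges from their interaction, which is why moderation cannot be assigned to the algorithm or the users alone.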

Furthermore, the humanization of a recommendation service that alters how third-party content is presented may expose online platforms to further liability. The University of Washington’s own Professor Eric Schnapper is involved in Gonzalez v. Google, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of third-party content. Section 230 currently immunizes most online platforms considered an “interactive computer service” from being treated as the “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs argue that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS, ultimately contributing to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of oral arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. Whatever the result of this lawsuit, the interactive nature of search engine assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is presented has also appeared in administrative and legislative contexts, such as the SEC’s 2021 inquiry into the gamification of stock trading and California’s restrictions on content design for websites directed at children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the company bear more responsibility for the output of a seemingly sentient service, or would more responsibility fall on the user’s responses?

From my perspective, it depends on the role that search engines give their AI-powered assistants. As long as these assistants merely answer questions and provide pertinent, related recommendations, without taking deliberate steps to guide the conversation, their suggestions may still be protected as harmless recommendations. However, engineers must remain vigilant about how user interaction in the broader environment can influence an AI and its underlying algorithm, as seen with Microsoft’s Twitter chatbot Tay and some of Bing’s controversial suggestions. Queries sent with covert nefarious intent should be monitored closely so that they do not degrade the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising questions of how to properly moderate content and how to display it to their users.