Zoom Voir Dire: A Technological Gap is Not Going to Solve the Lack of Diversity in Jury Pools

By: Talia Cabrera

Voir dire is the process of questioning prospective jurors to ensure the “jury of your peers” is a representation of your community. Juries are made up of ordinary citizens and play an important role in the criminal justice system. Jurors, who are given the responsibility to decide a case’s verdict, enter the complexities of the courtroom with their own experiences and biases. Notably, jurors often shoulder the responsibility of making a decision that will shape the course of an individual’s life. However, justice is often compromised by the discrimination that occurs in selecting a jury. Juries that are not representative of the community at large not only harm marginalized communities of color, but also lead to higher rates of wrongful convictions and, ultimately, a system far from just.

Unrepresentative juries disproportionately affect communities of color. This discrepancy is apparent first in the unrepresentative jury pools from which jurors are selected and is then reinforced through current and historical use of peremptory strikes to remove people of color from juries. Apart from the institutionalized racism ingrained in our history, a variety of factors predominantly affect communities of color and prevent them from serving on juries. According to a report from the Equal Justice Initiative, these factors include: the inability to request time off from work; the financial burden of participation, which includes courts not paying jurors enough to participate and the difficulty of obtaining family care; and the lack of transportation for people to report to the courts. Ultimately, these barriers to jury service, which deeply affect the makeup of juries across the nation, need to be reformed to ensure the court system is fair.

Discrimination in jury selection is a problem many courts have recognized and are striving to change. In 2018, the Washington Supreme Court adopted General Rule 37, which sought to eliminate the unfair exclusion of potential jurors based on race or ethnicity. According to the text of the rule, “the court shall then evaluate the reasons given to justify the peremptory challenge considering the totality of circumstances. If the court determines that an objective observer could view race or ethnicity as a factor in the use of the peremptory challenge, then the peremptory challenge shall be denied.” Given the adoption of this rule, it is clear that Washington has taken the initiative to reduce courtroom discrimination in order to strive for justice. However, even states like Washington, which have moved in the right direction, need to go further. In order to have more inclusive juries, courts must create opportunities for more diverse communities to participate in jury selection. While it is clear that a state like Washington is trying to reduce the unfair exclusion of potential jurors, peremptory challenges are just one part of a bigger issue that needs to be reformed. Courts need to take a step back and see how they can create a comprehensive solution that will bridge the gaps in accessibility that currently exist in jury selection.

For example, the COVID-19 pandemic forced many sectors to transition to a “work-from-home” format in order to preserve workers’ health and the health of those around them. Today, many companies continue to employ remote workers, having realized that employees can remain efficient from the comfort of their homes. While remote work was initially a temporary response to the pandemic, Washington state has seen more participation in remote jury selection and continues to use Zoom for that purpose. Whether over Zoom or in the courthouse, little has changed about how jury selection occurs in King County; for example, jurors are still notified by mail when they have jury duty. But now, jurors no longer need to attend in person. In King County, jurors are still required to set time aside to attend court, but so long as they have an electronic device, jury duty can be completed anywhere.

Through the incorporation of Zoom in court proceedings, jurors no longer need to spend time on tasks such as figuring out how to get to the King County Superior Courthouse downtown. Jury members are now able to eat lunch at home, avoid paying for expensive parking, and still appear for 90 minutes from the comfort of their living rooms. Increased participation should help create a better reflection of the community in the jury of our peers. However, many issues remain unresolved. Even though Zoom voir dire may help with accessibility, such benefits are only available to those who have the privilege of possessing technology. Technology may not be available to everyone, and a juror may lack the resources needed to sustain hours of jury selection. For example, some jurors may not even have Wi-Fi. In addition to the technological divide that Zoom voir dire creates, many of the same factors we have seen in the past, like the financial burden of taking a day off work, continue to be a prominent issue for people participating in jury selection. Although participation may have increased with Zoom voir dire, it may have done so only for those who have the privilege of accessible technology.

Maybe there is a way for technology to help eliminate the risk of unrepresentative juries in our court system. It is possible that new laws, policies, or even court rules like General Rule 37 will need to be created to help alleviate the factors that prevent jurors from participating. If possible, courts should provide individuals without access with loaned technology. For jury participants who do not have access to Wi-Fi, the courts should provide usable locations with Wi-Fi or temporary Wi-Fi vouchers. Currently, there is not a lot of faith in the criminal justice system, in part because of the disparity between the makeup of the broader community and that of jury pools. Efforts must be made to dismantle discrimination and create a fair and just court system. If not, we will continue to see the reinforcement of systemic racism throughout our criminal justice system.

Into the Dungeon–A Comparative Look at the Original and 2023 Open Gaming Licenses

By: Perry Maybrown

It all started with a leak, which led to a draft, before ending in a retraction.

Wizards of the Coast (WotC) rolled a critical failure when trying to modify their Open Gaming License (OGL)—a license that allows other creators to make use of some Dungeons & Dragons content as building blocks for their own games—after a draft of the updated license was leaked to news outlet Gizmodo. While WotC insisted that little would change, the new license seemed to say otherwise.

The community revolted, leading to promises of boycotts, mass cancelations of subscriptions to D&D Beyond, and a new license called Open RPG Creative License (ORC) from rival company Paizo. Faced with this onslaught, the gaming company chose to back down and keep the OGL intact.

The OG OGL

The original OGL (1.0a), published in 2000, offered prospective game creators a perpetual license to “copy, modify and distribute” the open game content making up the Systems Reference Document (SRD). While the SRD changes with each new edition of D&D (excluding the 4th edition, which is a completely separate can of worms), the OGL stays the same and is perpetual, meaning the license has no set expiration date. The mechanics and building blocks of a Table-Top Role Playing Game (or TTRPG) make up the bulk of the SRD, which creates a base from which creators can build their own games. You can’t use the OGL to publish works that use WotC’s trademarks, like the famous dragon ampersand.

1.0a includes several caveats that creators must follow so that no one is confused about what is and isn’t open game content. For one, a complete copy of the OGL must be included with every copy of open game content distributed. Creators must also label what is open game content. That content can come directly from the SRD, from open game content made by other game makers, or from original works that the creator wishes to add to the proverbial open game content pile.

The license is far from perfect, however. Most notable is the absence of the terms “revocable” or “irrevocable” from its text. This omission makes it difficult to know whether WotC can terminate the OGL. Section IX of the license only further muddies the waters: through this clause, WotC retains the authority to update the license, while creators may apply any authorized version of the OGL to any open game content distributed under any version of the license.

WotC may argue that they can update the OGL and include in the new version language declaring the old one unauthorized and thus void. However, because the OGL is a long-standing open license, there are legal arguments and evidence that may contradict WotC’s position and prevent them from deauthorizing 1.0a. Many online have weighed in on the issue, even some legal authorities, with varying conclusions. For now, it’s challenging to say which way a court may lean, but even amid that uncertainty, WotC pushed forward with its plan.

The Leaked Draft

On January 5th, 2023, a draft of the new OGL 1.1 was leaked, and it was a radical departure from 1.0a. The new license limited the OGL to the “creation of roleplaying games and supplements in printed media and static electronic file formats,” meaning creators could no longer use open gaming content in other media such as video games, videos, or plays. One section caused a misunderstanding by seemingly implying that WotC would own any creations made under the OGL; however, that reading was likely incorrect. While section III does state that WotC owns both the licensed and unlicensed content, as defined in section I(A) of the OGL, neither of those categories includes content made by the licensee: licensed content refers to content within the SRD, and unlicensed content is content outside the SRD. However, under section X(B), creators would grant WotC “a nonexclusive, perpetual, irrevocable, worldwide, sub-licensable, royalty-free license to use that content for any purpose.” So while creators would still own their content, WotC would still be allowed to use it.

Some provisions did remain the same between 1.0a and 1.1. For example, publishers would still be required to include the license with distributed works and identify anything considered “licensed content.” Some sections were expanded in the new draft, like the termination clause, which now allowed termination for various causes. Further requirements were also tacked on, such as a clause detailing the repercussions of terminating the license and an indemnity clause that would shift the financial burden to the licensee in several instances if WotC faced legal action over the license. While these modifications were likely made to shore up 1.1 legally, the words “revocable” and “irrevocable” still did not appear in the new license.

The most significant change in 1.1 was that it had been split into two parts, commercial and non-commercial. The commercial license carried additional monetary requirements regarding royalties and registration. If someone wished to create content to sell, they were required to register and provide WotC with extensive information about the product and creator, and to report any revenue of more than $50,000. Royalties to WotC were only required once a creator had made more than $750,000 in revenue per year across all products produced under the OGL: creators would have to send 25% of any qualifying revenue exceeding $750,000. Separate terms and royalty rates were detailed for Kickstarter-backed projects.
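Under those leaked terms, the royalty math is simple to state: nothing is owed until annual OGL revenue crosses the threshold, and only the excess is taxed. A minimal sketch (the function name and defaults are mine, used only to illustrate the leaked 1.1 figures):

```python
def ogl_royalty(annual_revenue: float,
                threshold: float = 750_000,
                rate: float = 0.25) -> float:
    """Royalty owed under the leaked OGL 1.1 commercial terms:
    25% of annual OGL revenue above the $750,000 threshold."""
    return max(0.0, annual_revenue - threshold) * rate

# A creator earning $1,000,000 across OGL products would owe
# 25% of the $250,000 above the threshold: $62,500.
print(ogl_royalty(1_000_000))
```

Note that, because only the revenue *exceeding* $750,000 qualifies, a creator at $750,001 would owe a quarter of one dollar, not a quarter of the whole sum.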

The Updated Draft

Incensed by this update, fans pushed back, leading WotC to respond with a new draft, 1.2. The license was no longer split in two and did not require creators to pay royalties to WotC. Core D&D mechanics were now licensed under the Creative Commons Attribution license (CC BY 4.0). Rather than requiring the full license, creators could now either include the license or display the newly designed OGL product badge on their work.

Creators were also no longer required to grant WotC a license to use works created under the OGL, and a new provision under section 3 even allowed creators to sue WotC for copying their works (though it does have quite a few restrictions). There was no longer an indemnity clause, though the license barred users from participating in class actions against WotC for activities regarding the OGL. To avoid further conflict, 1.2 finally incorporated the magic words: “This license is perpetual (meaning that it has no set end date), non-exclusive (meaning that we may offer others a license to Our Licensed Content or Our Unlicensed Content under any conditions we choose), and irrevocable (meaning that content licensed under this license can never be withdrawn from the license). It also cannot be modified except for the attribution provisions of Section 5 and Section 9(a) regarding notices.”

In The End

While 1.2 was created to appease the masses, the die had already been cast, and fans were not ready to accept what seemed to be just a modern rewording of 1.0a. WotC eventually backed down, deciding it was not worth the hassle to update the OGL. It is unclear whether any content from the next generation of D&D will be added to the OGL or whether the license will stay as it is, covering only the three SRDs and the other open gaming content created for it. The future of these licenses is uncertain, but at least 1.0a is safe from change for now.

Disclaimer: I worked at Wizards of the Coast from 2019-2020. None of the information discussed in the above article is confidential or was provided directly to me by Wizards of the Coast or any of its agents during or after my year of employment. All documents and sources referenced are in the public domain.

Are 3D printed human organs a possibility in the near future?

By: Aminat Sanusi

3D printed human organs have the potential to save many lives. The United Network for Organ Sharing administers the American transplant system and maintains the list of patients in need of an organ transplant. Procedures such as kidney and liver transplants are possible with living donors, but patients on the list for heart and lung transplants are not so lucky. Imagine being able to print a human organ to save a life, instead of waiting for someone to die to use theirs. With constant innovation in medicine and the legal field trying to keep up, medical trials of 3D printed organs may succeed in this decade or the next.

In 2020, the average kidney transplant cost $442,500, while 3D printers cost up to $100,000. The high cost of organ transplant surgery comes from transporting the organ and the surgery of implanting it. Affordability and insurance coverage issues may arise from time to time, but nothing markedly different from a normal organ transplant. Moreover, donor availability would be less of an issue because the organ would be created from the patient’s own cells rather than coming from a living or deceased donor.

What are the current regulations of 3D printed medical devices?

Medical 3D printing has already enhanced treatment for certain medical conditions, such as joint replacements and prosthetic limbs. The Food and Drug Administration (FDA) currently regulates products made for medical use by a 3D printer. The FDA regulates 3D printed medical devices by categorizing them into groups based on their level of risk: regulatory control increases from Class I to Class III, with Class I devices posing the lowest risk to patients. Some requirements apply before the devices are marketed (premarket requirements), and others apply after they are marketed (postmarket requirements).

The FDA also regulates the information and application process required of a 3D printed medical device seeking acceptance. In 2016, the FDA issued a draft guidance to assist manufacturers producing medical devices through 3D printing with design, manufacturing, and testing considerations. The guidance covers two major topic areas: design and manufacturing considerations, which address the quality system of the device, and device testing considerations, which address the type of information that should be included in premarket notification submissions. The FDA continues to evaluate submissions of new 3D printed medical devices to determine their safety and effectiveness.

How are 3D printed organs made?

Organ bioprinting may put 3D printed human organs within reach in the near future. According to a 2019 medical study, organ bioprinting is the use of 3D printing technologies to assemble multiple cell types, growth factors, and biomaterials in a layer-by-layer fashion to produce bioartificial organs that ideally imitate their natural counterparts. The ability to recreate organs with the patient’s own cells is key to avoiding the risk that the patient rejects the organ or dies before being matched with a healthy one.

Dr. Anthony Atala, the director of the Wake Forest Institute for Regenerative Medicine, and Dr. Jennifer Lewis, a professor at Harvard University’s Wyss Institute for Biologically Inspired Engineering, explain the process of bioprinting. To begin, the doctors need the patient’s cells, so they either perform a biopsy of an organ or surgically remove a piece of tissue from the patient’s body. The cells then need to grow outside the body, so they are placed in an incubator where they are constantly fed nutrients. Next, the cells are mixed with a glue-like gel, typically made of collagen or gelatin, to create a printable mixture of living cells.

For the printing process, the 3D printer is programmed with the patient’s imaging data from X-rays or scans, and the bioink, which is the gel mixed with the patient’s cells, is loaded into the printing chamber to print the organ. Much like a regular printer with cartridges of different colored ink, the 3D printer fills its cartridges with cells. The printing process can take hours to weeks depending on the type of organ being printed.

As the technology becomes more successful and precise, 3D printed organ transplants will likely become reality. However, 3D bioprinted organ transplants still face challenges. First, the functioning of 3D bioprinted organs is still undergoing testing and trials. Second, it is uncertain how FDA regulations will govern the manufacturing and testing of 3D bioprinted organs. Lastly, the accessibility and affordability of 3D printed organs is currently limited.

3D bioprinted organs are built to be as complex as human organs, and there are still many challenges in getting a printed organ to function properly alongside the other organs in the body. It is also unclear how FDA regulations will address the usage and safety of the product versus its manufacturing and engineering. While procedures are already in place for 3D printed medical devices like prosthetic limbs, which could potentially be applied to bioprinted organs, the regulation of device testing may change because of the use of human cells to print the organs.

So what comes next?

3D printed medical devices already exist. But why stop there? Why not 3D print human organs? In the award-winning American medical drama Grey’s Anatomy, a surgeon 3D prints part of a human heart and surgically implants it into a patient. Although the idea seems plausible on TV, the reality is that a 3D printed human organ has yet to be implanted into a human body. However, that does not mean it will stay science fiction for long, given how far 3D printing has already come in the medical field.

How FTC’s Proposed Rule Could Eliminate NFL’s Exclusive Franchise Tags

By: Annalyse Harris

FTC’s Proposed Rule 

In January 2023, the United States Federal Trade Commission (“FTC”) released a proposed rule that, if enacted into law, would ban companies from using non-compete clauses in employment agreements. Additionally, the rule would require companies to fully rescind all non-competes with current and former employees.

The rule defines “non-compete clause” as a “contractual term between an employer and a worker that prevents the worker from seeking or accepting employment with a person, or operating a business, after the conclusion of the worker’s employment with the employer.” 

Importantly, the rule clarifies that whether a contractual term is considered a “non-compete clause” does not depend on its express terms, but rather on how the term functions. Therefore, if a contractual term has the effect of preventing a worker from seeking or accepting employment subsequent to the worker’s employment with the employer, it will be prohibited. Jackson Lewis, one of the nation’s most prominent labor and employment law firms, aptly labels such a term a “de facto” non-compete.

NFL Franchise Tags 

The National Football League (“NFL”) gives each team the right to “franchise tag” one player a year. This means a team can restrict an otherwise unrestricted free agent (a player with at least four accrued seasons whose contract has expired and who is free to negotiate and sign with any team) for a year longer than his contract.

Teams can choose between a non-exclusive and exclusive franchise tag. The former allows a player to further negotiate with other teams, while the latter does not. Additionally, players generally have little to no control over the tagging, and as a result this is not usually a player-friendly practice, as it blocks players from becoming unrestricted free agents. 

NFL Exclusive Franchise Tags as Non-Competes

An exclusive tag gives the team exclusive negotiating rights. This tag comes with a salary of either 120% of the player’s current year’s salary or an average of the top five salaries of players at his position, whichever is greater. While on its face, this may not seem like a bad deal, it is only a one-year contract and eliminates players’ ability to obtain long-term contracts with any other team. Not only are the players barred from negotiating and signing with other teams, but they do not have the option to refuse the tag. If a player refuses the tag, he is then barred from signing with any other team for the entire season. In short, these tags act as an ultimatum with no security.
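The “whichever is greater” salary rule above can be stated compactly. A minimal sketch of the computation (the function and variable names are illustrative, not an official NFL formula):

```python
def exclusive_tag_salary(player_salary: float,
                         top_five_salaries: list[float]) -> float:
    """Exclusive franchise tag salary: the greater of 120% of the
    player's current year's salary or the average of the top five
    salaries of players at his position."""
    top_five_avg = sum(top_five_salaries) / len(top_five_salaries)
    return max(1.2 * player_salary, top_five_avg)

# A player earning $10M at a position whose top five earners
# average $20M would be tagged at $20M, not $12M.
print(exclusive_tag_salary(10_000_000, [20_000_000] * 5))
```

The one-year structure is the crux of the players’ complaint: the number may look generous, but it substitutes a single guaranteed season for the multi-year security an open market could offer.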

It should also be noted that the NFL, the most profitable sports league in the world, is the only league with such a restriction. Further, in industries outside of sports, such a restriction would never be enforceable. Indeed, the exclusive tag has been called the “prison tag” by many players in the league, as they maintain that other leagues do not have such tags and that the tags unfairly control their free agency, earning potential, and ability to be employed by a team of their own choosing.

Therefore, because the exclusive tag restricts a player’s actions by banning negotiations and employment with other teams, it has the effect of a non-compete clause and will likely be prohibited by the FTC’s proposed rule. 

While it can be argued that the tags are defined and governed by the NFL Players Association (“NFLPA”) Collective Bargaining Agreement (“CBA”) and are therefore the product of organized labor protected by the non-statutory labor exemption, the FTC’s rule as written does not include any such exemption. This being said, if the FTC’s proposed rule becomes law and does not exempt the NFL in that capacity, the NFL’s almost three decades of exclusive franchise tagging will come to an end.

When Could this Happen?

Because the FTC’s proposal is open for public comment until at least March 10, 2023, and the window in which teams can tag players for the upcoming season runs from February 21 through March 7, 2023, it is likely there will not be any changes for this year’s season. Members of the public may also ask the FTC for additional time, and the FTC may amend the proposal in response to comments and suggested changes. After the comment window closes, the FTC can change, terminate, or finalize the rule, which must be published in the Federal Register. Then, upon Congressional approval, the rule will become law at least 60 days after publication. Accordingly, it could be months before any movement is made on the FTC’s proposed ban on non-competes.

The Reality of Deepfakes: The Dark Side of Technology

By: Kayleigh McNiel

We’ve all seen the viral Tom Cruise Deepfake or played around with the face-swapping Snapchat filters. But the dark reality of deepfake technology is far more terrifying than an ever-youthful Top Gun star. 

Deepfakes are images and videos digitally altered using artificial intelligence (AI) and machine learning algorithms to superimpose one person’s face seamlessly onto another’s. They can be incredibly realistic and impossible to detect with the naked eye. Many websites and apps allow anyone with access to a computer to produce images and videos of someone saying or doing something that never actually happened. 

While lawmakers and the media have focused their concerns on the potential impact of political deepfakes, nearly all deepfakes online are actually non-consensual porn targeting women. Gaps in the law and easy access to deepfake technology have created a perfect storm, where anyone can make their most perverse fantasy come to life, at the expense of real people.

The Tech Behind The Fakes

Deepfakes are created using generative adversarial networks (GANs): two machine learning models, an image generator and an image discriminator, that work in tandem to create and refine the fakes. The process begins by feeding the models source data, i.e., images, video, or even audio. Then the generator iteratively creates new samples of the target until the discriminator can no longer tell whether a generated image is a real image of the target or a fake.
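To make that generator-versus-discriminator loop concrete, here is a toy numerical sketch. It is not an image model: the “source data” is a one-dimensional Gaussian standing in for real photos, the discriminator is a simple logistic classifier, and the learning rates and loss choice are illustrative assumptions, not the pipeline any deepfake app actually uses. The point is the tandem training: each update makes the discriminator better at spotting fakes, then nudges the generator to fool it.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator: logistic classifier D(x) = sigmoid(w*x + b),
# trained to output 1 for real samples and 0 for generated ones.
# Generator: maps noise z to samples G(z) = m + s*z, trained to
# shift its output toward whatever the discriminator calls "real".
w, b = 0.1, 0.0      # discriminator parameters
m, s = 0.0, 1.0      # generator parameters
lr = 0.02
real_mean = 4.0      # the "source data" distribution to imitate

for step in range(2000):
    real = rng.normal(real_mean, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = m + s * z

    # Discriminator update: gradient descent on
    # -log D(real) - log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update: gradient descent on the non-saturating
    # loss -log D(fake), differentiated through fake = m + s*z
    d_fake = sigmoid(w * fake + b)
    g_fake = -(1 - d_fake) * w
    m -= lr * np.mean(g_fake)
    s -= lr * np.mean(g_fake * z)

print(f"generated mean {m:.2f}, target {real_mean}")
```

After training, the generator’s output distribution has drifted toward the real one: the same dynamic, scaled up to millions of parameters and pixel data, is what lets a GAN converge on fakes the discriminator, and eventually the human eye, cannot distinguish from the source.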

Historically, creating a truly realistic, quality deepfake required dozens of images of a person with enough similarities to the original subject. That was true until July 2022, when Samsung developed MegaPortraits, a technique that creates high-resolution deepfakes from a single image. Now, highly realistic deepfakes can be made from a single innocuous selfie posted online.

With advancements in technology, detecting deepfakes has become increasingly difficult. In response, researchers have raced to develop more accurate detection tools. For example, in July 2022, computer scientists at the University of California, Riverside created a program that detects manipulated facial expressions in videos and images with up to 99% accuracy. While promising, there is still a long way to go before this or similar detection tools are widely available to law enforcement, consumer protection agencies, and the public.

The Dark Side of Deepfakes

Realistic deepfakes pose an enormous risk to politicians and fair elections. Many deepfakes have already surfaced of high-profile politicians engaging in acts designed to undermine their credibility. In March 2022, Russian hackers posted to Ukrainian news outlets and social media a deepfake video of Ukrainian President Volodymyr Zelenskyy telling his soldiers to surrender. While the video was quickly debunked, it demonstrates how this technology is likely to become a standard tactic used by adversaries to interfere in politics.

While political deepfakes do pose a very real danger to our democratic institutions, the technology is currently primarily used to victimize women. A 2019 report by Deeptrace confirmed that 96% of all deepfakes online are non-consensual porn targeting women, and the number of such deepfakes is rapidly growing. Cybersecurity firm Sensity reports that the volume of deepfakes online nearly doubles every six months, largely due to the increased availability of cheap and easy deepfake technology. Free face-swapping software found on apps like Deepnude, Deepswap, and FaceMagic is commonly used to create deepfake porn. Scammers have even begun using these in extortion cases, threatening to release the fake videos to victims’ family, friends, and employers unless they pay up.

Having your likeness stolen and used to perform degrading sex acts without your consent is becoming a disturbing reality for celebrities and women in the public eye. A quick Google search reveals nearly a dozen websites with hundreds of deepfake porn videos using the faces of celebrities like Emma Watson, Gal Gadot, and Maisie Williams, among many others. Earlier this month, Twitch streamer Atrioc was forced to apologize after he accidentally revealed he used a website dedicated to sharing deepfake porn of popular female streamers, many of whom he is friends with in real life.

While celebrities are most at risk, there are websites (which I will not name here) specifically designed for men to create non-consensual deepfake porn of the women in their lives. In one case, an anonymous user released an AI bot on the messaging app Telegram that rapidly generated thousands of deepfakes of women and underage girls from photos uploaded by men seeking revenge. An investigation by Sensity found that these deepfakes were shared over 100,000 times before the bot was reported to the platform; the bot is no longer publicly active.

To add insult to injury, women who speak out against revenge porn are often the targets of relentless online harassment. Kate Isaacs, a 30-year-old woman from the UK, became a victim of deepfake porn after she successfully campaigned to get Pornhub to remove nearly 10 million non-consensual and child porn videos. Afterwards, she was subjected to humiliating and terrifying harassment from men who “felt they were entitled to non-consensual porn.” They posted her work and home addresses online and threatened to follow her, rape her, and then post the video of it on Pornhub. Shortly thereafter, deepfake porn videos of her began to circulate online.

Many victims of deepfake and revenge porn are forced to shut down their social media accounts and minimize their online presence to avoid further harassment and embarrassment. It is somewhat ironic that the men who seek to silence women by creating and sharing these videos often do so under the guise of the First Amendment. The dangers of deepfakes are undeniable, but women have largely been left to fend for themselves.

Our Legal System Is Not Ready for This

The combination of a lack of awareness and the difficulty of detecting deepfakes creates a significant challenge for victims when reporting. Most law enforcement agencies lack the training and software to confirm that a video is a deepfake. Even if law enforcement can prove it is a forgery, by the time they do so, significant damage is already done. People have already seen what they believe to be the victim engaging in degrading sex acts. Those images can never be unseen and will continue to damage victims’ reputations, relationships, and mental health.

The legal system has been slow to react to the threat women face from deepfake porn. While 48 states and Washington D.C. finally have laws against the creation and distribution of non-consensual “revenge” pornography, only a handful have specifically addressed deepfake porn. In 2018, proposed federal deepfake legislation died in the Senate. The state laws prohibiting deepfakes will likely face huge hurdles from First Amendment and personal jurisdiction challenges:

  • In 2019, Texas was the first State to ban deepfakes, but only those intended to influence elections. 
  • Also in 2019, Virginia amended its “revenge porn” statute to include deepfakes. 
  • In 2020, California prohibited the creation of deepfakes within 60 days of an election and for unauthorized use in pornography.
  • Also in 2020, New York passed a law protecting a person’s likenesses from unauthorized commercial use as well as non-consensual deepfake pornography.

In states without laws against deepfakes, victims will be forced to find relief through a patchwork of consumer privacy protection, defamation, and revenge porn laws. Notably, many states’ revenge porn laws do not apply to deepfakes because the victim’s body is not actually being portrayed.

Biometric privacy laws could be used to combat deepfake porn in states like Illinois, Texas, Washington, New York, and Arkansas, where residents can file a civil claim against those who use their faceprints, facial mapping, or identifiable images without their consent. Similarly, defamation claims could potentially be brought against the creators of deepfake porn. 

Even if a clearly applicable law exists, bringing any civil claim requires the victim to be able to prove the identity of the video’s creator. This can be incredibly challenging when websites and apps allow users to upload videos with near total anonymity. The bottom line is that current laws do little to deter deepfake creators from continuing to victimize women for their own pleasure. 

What Are Tech Platforms Doing To Fix the Problem They Created?

The tech platforms on which deepfakes are widely shared are completely shielded from legal liability under Section 230 of the Communications Decency Act. Without any consequences, it has been difficult to get platforms to address the impact that content shared on their sites has on people’s lives. Still, some have taken action against deepfakes. In 2018, both Reddit and Pornhub banned deepfake porn, categorizing it as inherently non-consensual. The following year, Discord banned the sale of Deepnude, an app designed to remove clothing from women (yes—only women) in photos. Apple removed the Telegram deepfake bot from iOS for violating its guidelines. Pornhub and YouPorn both redirect users searching for deepfakes to a warning that they’re searching for potentially illegal and abusive sexual material; users are then provided with directions on how to request the removal of content and resources for victims. Telegram, on the other hand, has never publicly commented on the bot and has never identified its creator.

While these efforts are promising, more still needs to be done. Tech companies, lawmakers, and communities must work together to regulate the use of deepfake technology.

If you or someone you know has been the victim of online sex abuse, you are not alone. Support is available through the Cyber Civil Rights Initiative online or via their 24-hour hotline at 1-844-878-2274.