Move Fast and Break Things: Ethical Concerns in AI

By: Taylor Dumaine

In Jurassic Park, Dr. Ian Malcolm famously admonished the park’s creator by saying, “your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” Technological advancement for its own sake ignores the genuine harms that advancement can cause or contribute to, and those negative externalities have often been overlooked or ignored. There is also often a reliance on federal and state governments to regulate industry rather than on self-regulation or ethics standards. That reliance has become especially pronounced in the AI and generative AI spaces. Government regulation of AI is far outpaced by the technology’s rapid development, hindering the government’s ability to address ethical issues adequately.

Relying on government regulation is a copout for large tech companies. Congress’s record on technology regulation is poor at best, with most bills failing to become law and those that do pass proving insufficient to regulate effectively. The United States still does not have a national privacy law, and there is little political will to pass one. An increasingly octogenarian Congress does not have the best track record of understanding basic concepts in technology, let alone the increasingly complicated technologies, such as AI, that it is tasked with regulating. During Senate testimony regarding the Cambridge Analytica scandal, Meta CEO Mark Zuckerberg had to explain some fairly rudimentary internet concepts.

Earlier this year, OpenAI CEO Sam Altman called for government regulation of AI in testimony before Congress. Altman also reportedly carries a backpack that would allow him to remotely detonate ChatGPT datacenters should the generative AI go rogue. While by no means a perfect example of ethics in the AI space, Altman at least seems aware of the risks of his technology. Still, Altman relies on the federal government to regulate his technology rather than engaging in any meaningful self-regulation.

In contrast to Altman, David Holz, founder and CEO of Midjourney, an image-generation AI program, is wary of regulation, saying in an interview with Forbes: “You have to balance the freedom to do something with the freedom to be protected. The technology itself isn’t the problem. It’s like water. Water can be dangerous, you can drown in it. But it’s also essential. We don’t want to ban water just to avoid the dangerous parts.” Holz highlights that his goal is to promote imagination; he is less concerned with how pursuing that goal may harm some people so long as others benefit. This thinking is common in tech spaces.

Even the serious issues in generative AI, such as copyright infringement, seem almost mundane when compared with facial recognition tools such as Clearview AI. Dubbed “The Technology Facebook and Google Didn’t Dare Release,” these facial recognition tools have the disturbing ability to recognize faces across the internet. Clearview AI specifically has raised serious Fourth and Fifth Amendment concerns regarding police use of the software. Surprisingly, the large tech companies, Apple, Google, and Facebook, served as de facto gatekeepers of facial recognition for over a decade: they acquired the underlying technology and, recognizing its dangers, declined to release it. Facebook was subject to a $650 million lawsuit related to its use of facial recognition on the platform. Clearview AI’s CEO, Hoan Ton-That, has no ethical qualms about the technology he is creating and marketing specifically to law enforcement. Clearview AI is backed by Peter Thiel, who co-founded Palantir, a company with its own issues regarding police and government surveillance. The potential integration of the two companies could produce an Orwellian situation. Clearview AI thus represents a worst-case scenario for tech without ethical limits, the effects of which have already been disastrous.

Law students, medical students, and Ph.D. students are all required to take an ethics class at some point, yet many self-taught programmers do not incorporate any study of ethics into their learning. There are very real and important ethical concerns in technology development. In an age, culture, and society that values advancement without pausing to consider its negative ramifications, society’s concern over ethics in technology is unlikely to change much. In a perfect scenario, government regulation would be swift, well informed, and effective in protecting against the dangers of AI. Given the rate of technological innovation, it is hard to stay proactive on ethics, but that does not mean there should be no attempt to do so. A professional ethics standard for computer science and software engineering carries its own serious problems and would be nearly impossible to implement. However, by creating a culture where ethical concerns are not just valued but actually considered in the development of new technology, we can hopefully avoid a Jurassic Park scenario.

Liability, Authorship, & Symmetrical Causation in AI-Generated Outputs

By: Jacob Alhadeff

Copyright has insufficiently analyzed causation for both authorship and liability because, until now, causation was relatively obvious. If someone creates a painting, then they caused the work and receive authorial rights. If it turns out that the painting was of Mickey Mouse, then the painter may be liable for an infringing reproduction. Recent technological advances, however, have challenged the element of causation in both authorship and infringement. Recent law and scholarship have begun to address these issues, but because they have addressed causation in isolation, current analysis has provided logically or ethically insufficient answers. In other words, authorial causation has ignored potential implications for an entity’s infringement liability, and vice versa. Regardless of how the law responds, generative AI will require copyright to explore and enumerate the previously assumed causation analyses for both infringement and authorship. This blog explores how generative AI exposes the logical inconsistencies that result from analyzing authorial causation without analyzing causation for infringing reproductions.

Generative AI largely requires the following process: (1) an original artist creates works, (2) a developer trains an AI model on these works, and (3) an end-user prompts the AI to generate an output, such as “a mouse in the style of Walt Disney.” This generative AI process presents a novel challenge for copyright in determining who or what caused the output because generative AI challenges conventional notions of creation.
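To make the three steps concrete, here is a minimal Python sketch of step (3), assuming the open-source Hugging Face diffusers library; the checkpoint name is a public Stable Diffusion release used purely for illustration, and the trained model itself embodies steps (1) and (2).

```python
# Minimal sketch of step (3): an end-user prompting a trained text-to-art model.
# Assumes the Hugging Face `diffusers` library; the checkpoint already encodes
# steps (1) and (2): original artworks created, then copied into training data.
from diffusers import StableDiffusionPipeline

# Load a pretrained model (the output of step 2); checkpoint name illustrative.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Step (3): the end-user contributes only a short text prompt.
image = pipe("a mouse in the style of Walt Disney").images[0]
image.save("output.png")
```

The sketch highlights why causation is contested: the original artists, the developer, and the end-user each contribute to the output at a different step.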

Causing Infringement

Andersen et al. recently filed a complaint against Stability AI, maker of one of the most popular text-to-art foundation models. This class action alleges that Stability AI is directly liable for infringements that result from end-user-prompted generations. However, in a recent decision more closely analyzing causation and volition in infringement, the Ninth Circuit found that “direct liability must be premised on conduct that can reasonably be described as the direct cause of infringement.” Stability AI should not be found directly liable for infringing these artists’ copyrights, in part because Stability AI cannot reasonably be said to be the direct cause of infringement. Such a finding would be akin to holding Google liable for reproducing images of Mickey Mouse on people’s computer screens when they search for “Mickey Mouse.”

This lawsuit is particularly relevant since end-users have prompted thousands of generations that include the phrase “Mickey Mouse,” many of which appear substantially similar to Disney’s Mickey. If thousands of end-users have intentionally prompted the AI to generate Mickey Mouse, then what volitional conduct can most reasonably be described as the direct cause of infringement? It is clearly the end-user’s. But what if the end-user simply prompted “a cartoon mouse” and the AI generated an infringing image of Mickey? Here, the end-user may not have intended to generate Mickey, and reasonable notions of fairness may not find the end-user to be the most direct cause of infringement. However, copyright is a strict liability tort, meaning that liability attaches regardless of a reproducer’s intent. Therefore, unless copyright applies an intent or negligence theory for direct liability, which it should not, whoever or whatever is liable for infringing outputs will be liable for both infringing outputs, “Mickey Mouse” and “a cartoon mouse” alike. Such an outcome not only feels deeply unfair; it is unreasonable to say that the end-user is the direct cause of infringement when prompting “a cartoon mouse,” and vice versa.

Courts called to answer similar questions have recently grappled with these same issues of volition and causation. Generally, courts have been hesitant to find companies liable for actions that cannot reasonably be deemed volitional conduct causing infringement. The court in Cartoon Network, for example, found that “volition is an important element of direct liability.” In LoopNet, the court found that “the Copyright Act… requires conduct by a person who causes in some meaningful way an infringement.” In this way, the law has so far mirrored our prior intuitions of fairness. Legal scholarship has noted that when copyright law has grappled with novel technology, it has found that causation in infringement requires volition that “can never be satisfied by machines.” This reasoning, as applied to generative AI, may mean that an AI company should not normally be directly liable for outputs that infringe the reproduction right.

Causing Authorship

This causation analysis has also begun for authorship rights. One copyright scholar, Shyamkrishna Balganesh, compellingly argues that copyright law should explicitly enumerate a causal analysis for granting authorship rights. Such an analysis would follow tort law’s two-step causation analysis: (1) creation in fact and (2) legal creation. Aviv Gaon surveys authorial options in The Future of Copyright in the Age of AI, noting that some favor assigning authorship to the end-user prompter, some to the AI developer, some would treat outputs as joint works, and others would even attribute authorship to the AI itself. The simplest legal option would be to treat AI like a tool and grant authorship to the end-user. This is exactly how the law responded when photography challenged conventional notions of creativity and authorship. Opponents of recognizing photographers as authors argued that photography was “merely mechanical, with no place for… originality.” The Supreme Court in Burrow-Giles instead found that the photographer “gives effect to the idea” and is the work’s “mastermind” deserving of copyright.

However, treating AI like a conventional tool is an inconsistent oversimplification in the current context. Not only is it often less apt to call an end-user prompter the ‘mastermind’ of the output, but AI presents a more attenuated causation analysis that should not result in a copyright for every AI generation. As an extreme example, recent AI systems are employing other AIs as replicable agents. In these circumstances, a single prompt could catalyze one AI to automatically employ other AI agents to generate numerous potentially creative or infringing outputs. Here, the most closely linked human input would be a prompt that cannot be said to have masterminded or caused the many resulting expressive outputs. Under Balganesh’s framework, no human could reasonably be found to be the factual or legal cause of the output. Such use cases will further challenge the law’s notions of foreseeability as reasonable causation becomes increasingly attenuated.

Importantly, in the face of this ongoing debate and scholarship, the Copyright Office recently made its determination on authorship for AI-generated works. In February 2023, the US Copyright Office amended its decision regarding Kristina Kashtanova’s comic book, Zarya of the Dawn, stating that the exclusively AI-generated content is not copyrightable. Ms. Kashtanova created her comic book using Midjourney, a text-to-art AI, to generate much of the visual art involved. The Copyright Office stated that her “selection, coordination, and arrangement” of AI-generated images is copyrightable, but the images themselves are not. The Office’s decision means that exclusively AI-generated content, like natural phenomena, is not the type of content copyright protects and is freely accessible to all. The decision was based on the Office’s interpretation that “it was Midjourney—not Kashtanova—that originated the ‘traditional elements of authorship.’” The Office’s decision is appropriate policy, but when analyzed in conjunction with the current law on causation in infringement, it is inconsistent and may result in an asymmetrical allocation of the rights and duties that attend creation. Relevantly, how can a machine that is incapable of volition originate art? This is one of many ontological paradoxes that AI will present to the law.

Symmetrically Analyzing Causation

Two things are apparent. First, there is a beautiful symmetry in the current state of affairs: AI generations are uncopyrightable, and the machines originating them likewise lack sufficient volition to infringe. If such a system persists, then copyright law may not play a major role in generative AI, though this is doubtful. Second, such inconsistencies inevitably result from causation analyses that treat mechanically analogous actions but examine only one of infringement or authorship. Instead, I propose that copyright law symmetrically analyze mechanically analogous causation for both authorship and infringement of the reproduction right. Since copyright law has only recently begun analyzing causation, it is understandable, and potentially desirable, that the law does not yet require this symmetry. After all, the elements of authorship and infringement are usefully different. What has been consistent throughout copyright, however, is that when an author creates, they take on both the risk of an infringing reproduction and the prospective benefit of authorship rights. In other words, by painting, a painter may create a valuable copyrightable work, but they may also paint an infringing reproduction of Mickey Mouse. Asymmetrical causation for AI art would be like the painter receiving authorship rights while the company that made the paintbrush is held liable for the painter’s infringing reproductions. Such a result would not incentivize the painter to avoid infringement, and would thereby improperly balance the risks and benefits of creation. Ultimately, if the law decides that either the end-user or the AI company is the author, then the other entity should not be asymmetrically liable for infringing reproductions. Otherwise, the result will be ethically and logically inconsistent. After all, as Antony Honoré wrote in Responsibility and Fault, in our outcome-based society and legal system, we both stand to benefit from and are responsible for the harms reasonably connected to our actions.

Regulating Emerging Technology: How Can Regulators Get a Grasp on AI?

By: Chisup Kim

Uses of Artificial Intelligence (“AI”), such as ChatGPT, are fascinating experiments with the potential to morph their users’ parameters, requests, and questions into answers. However, as malleable as these AIs are to user requests, governments and regulators have not shown the same flexibility in governing this new technology. Countries have taken drastically different approaches to AI regulation. For example, on April 11, 2023, China announced that AI products developed in China must undergo a security assessment to ensure that content upholds “Chinese socialist values and do[es] not generate content that suggests regime subversion, violence or pornography, or disrupt[ions to] economic or social order.” Italy took an even more cautionary stance, outright banning ChatGPT. Yet domestically, in stark contrast to the decisive action taken by other countries, the Biden Administration has only begun vaguely examining whether there should be rules for AI tools.

In the United States, prospective AI regulators seem more focused on the application of AI tools to specific industries. For example, the Equal Employment Opportunity Commission (“EEOC”) has begun an initiative to examine whether the use of AI in employment decisions complies with federal civil rights laws. On autonomous vehicles, while the National Highway Traffic Safety Administration (“NHTSA”) has not yet given autonomous vehicles a green-light exemption from occupant safety standards, it does maintain a web page open to a future with automated vehicles. Meanwhile, as regulators are still trying to grasp the technology, AI is entering every industry and field in some capacity. TechCrunch chronicled the various AI applications from Y Combinator’s Winter Demo Day; its partial list included an AI document editor, SEC-compliance robo-advisors, a generative AI photographer for e-commerce, automated sales emails, an AI receptionist to answer missed calls for small companies, and many more. While the EEOC and NHTSA have taken proactive steps in their respective fields, we may need a more proactive and overarching approach for the widespread applications of AI.

Much as it did with its proactive GDPR privacy regulation, the EU has proposed a regulatory framework for AI. The framework defines a list of high-risk applications of AI and creates more strenuous obligations for those applications, with tempered regulations for limited-risk and no-risk applications. Applications identified as high-risk include the use of AI in critical infrastructure, education or vocational training, law enforcement, and the administration of justice. High-risk applications would require adequate risk assessment and mitigation, logging of data with traceability, and clear notice and information provided to the user. Chatbots are considered limited-risk but require that users have adequate notice that they are interacting with a machine. Lastly, the vast majority of AI applications are likely to fall under the “no risk” bucket for harmless applications, such as video games or spam filters.
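Because the framework is, at bottom, a tiered classification scheme, it is easy to see how a compliance team might operationalize it. The following Python sketch is purely illustrative: the tier assignments are drawn from the examples above, not from the regulation’s actual text, and the function name is invented.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"        # risk assessment, traceable logging, user notice
    LIMITED = "limited"  # transparency only: disclose the machine
    NO_RISK = "no risk"  # no new obligations

# Illustrative mapping based on the examples in the proposed framework.
APPLICATION_TIERS = {
    "critical infrastructure": RiskTier.HIGH,
    "education or vocational training": RiskTier.HIGH,
    "law enforcement": RiskTier.HIGH,
    "administration of justice": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "video game": RiskTier.NO_RISK,
    "spam filter": RiskTier.NO_RISK,
}

def risk_tier(application: str) -> RiskTier:
    # Defaulting unlisted uses to NO_RISK mirrors the framework's premise that
    # the vast majority of applications are harmless; a cautious regulator
    # might instead default to LIMITED or HIGH.
    return APPLICATION_TIERS.get(application, RiskTier.NO_RISK)
```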

If U.S. regulators fail to create a comprehensive regulatory framework for AI, they will likely fall behind on this issue, much as they have fallen behind on privacy. With privacy, the vacuum of guidance and self-regulating bodies forced many states and foreign countries to begin adopting GDPR-like regulations. The current initiatives by the EEOC and NHTSA are laudable, but these agencies seem to be waiting for actual harm to occur before taking proactive steps to regulate the industry. Last year, for example, NHTSA found that the Tesla Autopilot system, among other driver-assistance systems, was linked to nearly 400 crashes in the United States, including six fatal accidents. Waiting for the technology to come to us did not work for privacy regulation; we should not wait for AI technology to arrive either.

AI Art: Infringement is Not the Answer

By: Jacob Alhadeff

In the early 2000s, courts determined that the emerging technology of peer-to-peer “file-sharing” was massively infringing and categorically abolished its use. The Ninth Circuit, the Seventh Circuit, and the Supreme Court found that Napster, Aimster, and Grokster, respectively, were secondarily liable for the reproductions of their users. Each of these companies facilitated or instructed their users in sharing verbatim copies of media files with millions of other people online. In this nascent internet, users were able to download each other’s music and movies virtually for free. In response, the courts held these companies liable for the infringements of their users, and in so doing functionally destroyed that form of peer-to-peer “file-sharing.” File-sharing and AI are not analogous, but multiple recent lawsuits present a similarly existential question for AI art companies. Courts should not find AI art companies massively infringing and risk fundamentally undermining these text-to-art AIs.

Text-to-art AI, aka generative art or AI art, allows users to type in a simple phrase, such as “a happy lawyer,” and the AI will generate a nightmarish representation of this law student’s desired future. 

Currently, this AI art functions only because (1) billions of original human authors throughout history have created art that has been posted online, (2) companies such as Stability AI (“Stable Diffusion”) or OpenAI (“DALL-E”) download/copy these images to train their AI, and (3) end-users prompt the AI, which then generates an image that corresponds to the input text. Due to the large data requirements, all three of these steps are necessary for the technology, and finding either the second or third step generally infringing poses an existential threat to AI art.

In a recent class action filed against Stability AI, et al. (“Stable Diffusion”), plaintiffs allege that Stable Diffusion directly and vicariously infringed the artists’ copyrights through both the training of the AI and the generation of derivative images, i.e., steps 2 and 3 above. Answering each of these claims requires complex legal analysis. Functionally, however, a finding of infringement on any of these counts threatens to fundamentally undermine the viability of text-to-art AI technology. Therefore, regardless of the legal analysis (which likely points in the same direction anyway), courts should not find Stable Diffusion liable for infringement, because doing so would contravene the constitutionally enumerated purpose of copyright—to incentivize the progress of the arts.

In general, artists have potential copyright infringement claims against AI art companies (1) for downloading their art to train the AI and (2) for the AI’s substantially similar generations that end-users prompt. In the conventional text-to-art AI context, these companies should not be found liable for infringement in either instance, because doing so would undermine the progress of the arts. However, a finding of non-infringement leaves conventional artists with unaddressed cognizable harms. Neither of these two potential outcomes is ideal.

How courts answer these questions will shape how AI art and artists function in this brave new world of artistry. However, copyright infringement, the primary mode of redress that copyright protection offers, does not effectively balance the interests of the primary stakeholders. Instead of relying on the courts, Congress should create an AI Copyright Act that protects conventional artistry, ensures AI Art’s viability, and curbs its greatest harms. 

Finding AI Art Infringing Would Undermine the Underlying Technology

A finding of infringement for either the underlying training or the outputs would undermine AI art for many reasons: copyright’s large statutory damages, the low bar for obtaining a copyright, the retroactive registrability of works, the length of the copyright term, and the sheer volume of images the AI needs for training and generates as output.

First, copyright provides statutory damages of $750 to $30,000 per work, and up to $150,000 if the infringement is willful. Determining the statutory value of each infringement is likely moot because of the massive volume of potential infringements. Moreover, if infringement is found, AI art companies would likely be enjoined from functioning, as occurred in the “file-sharing” cases of the early 2000s.

Second, the threshold for a copyrightable work is incredibly low, so it is likely that many of the billions of images used in Stable Diffusion’s training data are copyrightable. In Feist, the Supreme Court wrote, “the requisite level of creativity is extremely low [to receive copyright]; even a slight amount will suffice. The vast majority of works make the grade quite easily.” This incredibly low bar means that each of us likely creates several copyrightable works every day. 

Third, works are retroactively registrable, meaning that the law does not require a plaintiff to have registered their work with the Copyright Office before the infringement to receive their exclusive monopoly. An author can therefore register their copyright after learning of an infringement and still have a valid claim. If these companies were found liable, then anyone with a marginally creative image in a training set would have a potentially valid claim against a generative art company.

Fourth, the copyright monopoly lasts for 70 years after the death of the author, so many of the copyrights in the training set have not lapsed. Retroactive registration combined with this extensive duration means that few of the training images are likely in the public domain. In other words, “virtually all datasets that will be created for ML [Machine Learning] will contain copyrighted materials.”

Finally, as discussed earlier, the two bases for infringement claims against the AI art companies are (1) copying to train the AI and (2) copying in the resultant end generation, which would likely produce billions and millions of potential claims, respectively. First, Stable Diffusion was trained on approximately 5.85 billion images downloaded from the internet. Given the four characteristics of copyright above, if infringement were found, many or all of the copyright owners of these images would then have a claim against AI art companies. Second, regarding infringement of end generations, OpenAI has suggested that DALL-E produces millions of generations every day. If AI art companies were liable for infringing outputs, then any generation found substantially similar to an artist’s copyrighted original would be the basis of another claim, opening them up to innumerable infringement claims every day.
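The scale of that exposure is easy to estimate from the figures above. The following back-of-the-envelope calculation assumes, purely for illustration, the worst case in which every training image is a separately infringed copyrighted work; no court would find anything close to this, but it shows why even a small fraction would be ruinous.

```python
# Back-of-the-envelope statutory-damages exposure, using the figures above.
TRAINING_IMAGES = 5.85e9   # approximate size of Stable Diffusion's training set
STATUTORY_MIN = 750        # per-work statutory minimum, 17 U.S.C. § 504(c)
STATUTORY_MAX = 30_000     # per-work maximum for non-willful infringement

# Hypothetical worst case: every training image is a separately infringed work.
print(f"Minimum exposure: ${TRAINING_IMAGES * STATUTORY_MIN:,.0f}")  # ~$4.4 trillion
print(f"Maximum exposure: ${TRAINING_IMAGES * STATUTORY_MAX:,.0f}")  # ~$175.5 trillion
```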

At the same time, generative art is highly non-deterministic, meaning that, on its face, it is hard to know what the AI will generate before it is generated. The AI’s emergent properties, combined with the subjective and fact-specific “substantial similarity” analysis of infringement, do not lend themselves to an AI art company ensuring that end generations are non-infringing. More simply, from a technical perspective, it would be nearly impossible for an AI art company to guarantee that its generations do not infringe on another’s work.
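That non-determinism is easy to demonstrate with common open-source tooling: the same prompt produces different images unless the random seed is pinned. The sketch below assumes the Hugging Face diffusers library and a public Stable Diffusion checkpoint; both are illustrative stand-ins for any text-to-art system.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompt = "a cartoon mouse"

# Two runs with different seeds yield visibly different images. Neither the
# end-user nor the company knows in advance whether any given output will be
# substantially similar to a copyrighted work.
for seed in (0, 1):
    generator = torch.Generator().manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"mouse_seed_{seed}.png")
```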

Finding AI art companies liable for infringement may open them up to trillions of dollars in potential copyright claims, or they may simply be enjoined from functioning.

An AI Copyright Act

Instead, Congress should create an AI Copyright Act. Technology forcing a reevaluation of copyright law is not new. In 1998, Congress passed the Digital Millennium Copyright Act (DMCA) to fulfill its World Intellectual Property Organization (WIPO) treaty obligations, reduce piracy, and facilitate e-commerce. While the DMCA’s overly broad application may have stifled research and free speech, it provides an example of Congress recognizing copyright’s limitations in addressing technological change and responding legislatively. What was true in 1998 is true today.

Finding infringement for a necessary aspect of text-to-art AI may fundamentally undermine the technology and run counter to the constitutionally enumerated purpose of copyright—“to promote the progress of science and useful arts.” On the other hand, finding no infringement leaves these cognizably harmed artists without remedy. Therefore, Congress should enact an AI Copyright Act that balances the interests of conventional artists, technological development, and the public. This legislation should aim to curb the greatest harms posed by text-to-art AI through a safe harbor system like that in the DMCA. 

The War on Forgery: An Exploration into Current Technologies Used to Catch Art Fraud

By: Zachary Finn

The field of art authentication has been revolutionized by several new technologies designed to spot fake art. Supposedly, up to fifty percent of all artworks on the market are fake, forged, or misattributed. Forgery, the act of making, selling, and peddling fake art, has become one of the most lucrative criminal businesses in the world. According to the US Department of Justice and UNESCO, art forgery and laundering has been the third highest-grossing criminal commerce in the world over the last 40 years, just behind drugs and weapons. As technology has developed over the years, so have the methods for detecting fake and forged art. Many of these new technologies have successfully entered the art-crime domain, but they also raise legal implications worth considering.

One of the most encouraging technologies is spectroscopy, which analyzes the chemical composition of an artwork and compares it to the known composition of genuine works from the same period. Spectroscopists test for the presence of specific elements and molecules in the pigments used to create works of art. For example, scientists use mass spectrometry to identify whether lead is present in certain artworks. Throughout early art history, lead was widely used in paint, and older paintings are identifiable through this technology because lead-based pigments have since fallen out of use: after the toxic qualities of lead were discovered, the art world was quick to remove lead from its palette. Using spectrometry, then, an examiner can spot a forged or fake painting by testing for the presence of lead or comparable elements and molecules. If a Da Vinci is without lead, it is almost certainly a fake. Mass spectrometry requires samples from an artwork, which may cause damage. This can create legal disputes over the damage and restoration of the artwork, especially since most works being tested have historical and cultural significance.
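The logic of the lead test reduces to a simple presence/absence rule. The sketch below is hypothetical: the threshold value and function are invented for illustration, and a real laboratory would interpret full spectra rather than a single number.

```python
# Hypothetical decision rule for the lead test described above; the threshold
# is illustrative, not a calibrated laboratory value.
LEAD_THRESHOLD = 0.01  # fraction of pigment mass (illustrative)

def flag_possible_forgery(claimed_pre_modern: bool, lead_fraction: float) -> bool:
    """Flag a work claimed to predate modern paints if lead pigment is absent."""
    # e.g., a purported Da Vinci without lead is almost certainly a fake
    return claimed_pre_modern and lead_fraction < LEAD_THRESHOLD

print(flag_possible_forgery(claimed_pre_modern=True, lead_fraction=0.0))  # True
```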

Similar to spectrometry, X-ray fluorescence is another technology that analyzes the elemental composition of art. With this technology, X-ray beams are shone on an artwork, causing atoms in the pigments to emit secondary X-rays. These rays identify the specific elements present, and experts can then determine whether those elements are consistent with materials used in works from the same period. Forgers develop methods of painting over less valuable but still old artworks to create more valuable fakes. The advantage of X-ray fluorescence is that it penetrates through layers of paint, offering scientists and art historians the ability to examine the underlying painting of an artwork. Like mass spectrometry, X-ray fluorescence raises legal considerations because it can potentially damage the artwork in question. On top of this, as with most of these technologies, a legal question of admissibility for evidentiary purposes emerges: courts and juries will have to weigh the credibility of experts and these technologies.

Continuing with scientific technology, multispectral imaging uses specialized cameras to capture images of an art piece in different wavelengths of light, allowing examiners to identify inconsistencies that can be indicative of forgery. The cameras use different imaging techniques, including ultraviolet and infrared light. UV imaging reveals varnishes, touch-ups, and overpainting; infrared exposes details such as underlying paint layers. A big advantage of this tool is that it is non-invasive and does not alter an artwork’s composition. Delicate and rare artworks may be susceptible to damage from other types of testing, so this technology can be especially useful in the war against art forgery. However, it also leads to legal questions involving expert opinions and declarations: imaging results are open to interpretation, and different experts may reach different conclusions about an artwork’s composition.
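Conceptually, comparing captures across wavelengths is an image-differencing problem. The sketch below, using numpy and Pillow, is a simplification with invented file names and an uncalibrated threshold; real multispectral analysis relies on calibrated captures and expert interpretation.

```python
import numpy as np
from PIL import Image

# Hypothetical aligned grayscale captures of the same painting.
visible = np.asarray(Image.open("painting_visible.png").convert("L"), dtype=float)
infrared = np.asarray(Image.open("painting_infrared.png").convert("L"), dtype=float)

# Regions where the infrared capture diverges sharply from the visible image
# can indicate underdrawings or overpainting beneath the surface layer.
difference = np.abs(visible - infrared)
suspect_mask = difference > 40  # illustrative threshold, not a calibrated value

Image.fromarray((suspect_mask * 255).astype(np.uint8)).save("suspect_regions.png")
```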

In the “most tech-savvy” way to detect forgery, artificial intelligence and machine learning algorithms analyze large databases of both genuine and fake art, extracting patterns and features that distinguish one from the other. In a research development at Case Western Reserve University, this technology “combines data from the precise, three-dimensional mapping of a painting’s surface with analysis through artificial intelligence — a computer system based on the human brain and nervous system that can learn to identify and compare patterns.” In one study, AI and machine learning were able to spot forged art with greater than 95% accuracy. A key advantage of using AI and machine learning against art forgery is that large amounts of data can be analyzed quickly and efficiently, expediting the identification of potential forgeries compared with other methods. However, legal issues involving privacy arise as AI systems sift through large datasets that may contain private or unconsented information. As the technology evolves, these algorithms can be updated and revised to improve accuracy and proficiency.
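In outline, this approach is supervised classification over features extracted from scanned works. The sketch below uses scikit-learn with synthetic stand-in data; the Case Western system described above uses three-dimensional surface maps and a neural network, so this simplified pipeline illustrates only the general technique, not their method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data: each row is a feature vector extracted from a scanned work
# (e.g., brushstroke statistics); labels here are synthetic, for illustration.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 16))
labels = rng.integers(0, 2, size=500)  # 0 = genuine, 1 = forged

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```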

The art world has been plagued by forgery and fake artworks for centuries, but new technologies such as spectroscopy, X-ray fluorescence, multispectral imaging, and machine learning have revolutionized the way experts fight this war. It will be exciting to see what other technologies emerge in the coming years, and which celebrated paintings are discovered to be just fake copies.