The War on Forgery: An Exploration into Current Technologies Used to Catch Art Fraud

By: Zachary Finn

The field of art authentication has been revolutionized by several new technologies designed to spot fake art. By some estimates, up to fifty percent of all artworks on the market are fake, forged, or misattributed. Forgery, the making and selling of fake art, has become one of the most lucrative criminal businesses in the world. According to the US Department of Justice and UNESCO, art forgery and laundering has been the third highest-grossing criminal trade in the world over the last forty years, behind only drugs and weapons. As technology has developed over the years, so has a wide range of methods to detect fake and forged art. Many of these new technologies have proven successful in the fight against art crime, but they also raise legal implications to consider.

One of the most encouraging technologies is spectroscopy, which analyzes the chemical composition of an artwork and compares it to the known composition of genuine works from the same period. Spectroscopists test whether specific elements and molecules are present in the pigments used to create a work. For example, scientists use mass spectrometry to determine whether lead is present in a painting. Throughout early art history, lead-based pigments were widely used, but after the toxic qualities of lead were discovered, the art world was quick to remove it from the palette, so lead is now rare in modern paints. Using spectrometry, an examiner can therefore spot a forged or fake painting by testing for the presence or absence of lead and comparable elements and molecules: a painting attributed to da Vinci that contains no lead is almost certainly a fake. Mass spectrometry requires physical samples from an artwork, however, which may cause damage. This can create legal disputes over the damage and restoration of the artwork, especially since most works being tested have historical and cultural significance.
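The logic of the pigment test can be sketched in a few lines. This is a toy illustration, not a real spectrometry pipeline: the element lists and the period name below are simplified assumptions invented for demonstration.

```python
# Elements commonly associated with pre-modern European oil paints (simplified assumption)
PERIOD_PALETTE = {
    "renaissance": {"lead", "mercury", "copper", "iron", "calcium"},
}

# Pigment elements not used in paints until the modern era (simplified assumption)
MODERN_ONLY = {"titanium", "cadmium"}

def screen_pigments(claimed_period: str, detected: set) -> list:
    """Return a list of red flags for a detected elemental profile."""
    flags = []
    expected = PERIOD_PALETTE.get(claimed_period, set())
    # Modern-only elements in an allegedly old work are anachronisms.
    for element in sorted(detected & MODERN_ONLY):
        flags.append(f"anachronistic element: {element}")
    # A period work missing every expected element is also suspicious
    # (e.g., a "Renaissance" painting with no lead at all).
    if expected and not (detected & expected):
        flags.append("no period-appropriate elements detected")
    return flags

print(screen_pigments("renaissance", {"titanium", "calcium"}))
# -> ['anachronistic element: titanium']
```

Real analysis is far subtler (pigments degrade, restorations add modern material), but the principle is the same: the elemental profile must be consistent with the claimed date.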

Similar to spectrometry, X-ray fluorescence is another technology that analyzes the elemental composition of art. With this technique, X-ray beams are shone on an artwork, causing atoms in the pigments to emit secondary X-rays. These rays identify the specific elements present, from which experts can determine whether they are consistent with materials used in works from the same period. Forgers have developed methods of painting over less valuable but still old artworks to create more valuable fakes. The advantage of X-ray fluorescence is that it penetrates layers of paint, giving scientists and art historians the ability to examine the underlying layers of an artwork. Like mass spectrometry, X-ray fluorescence raises legal considerations because it can potentially damage the artwork in question. On top of this, as with most of these technologies, questions of admissibility for evidentiary purposes emerge: courts and juries will have to weigh the credibility of the experts and of the technologies themselves.

Continuing with scientific technology, multispectral imaging uses specialized cameras to capture images of an art piece at different wavelengths of light, allowing examiners to identify inconsistencies that can be indicative of forgery. The cameras use different imaging techniques, including ultraviolet and infrared light: UV imaging reveals varnishes, touch-ups, and overpainting, while infrared exposes details such as underdrawings. A major advantage of this tool is that it is non-invasive and does not alter an artwork’s composition. Because delicate and rare artworks may be damaged by other types of testing, this technology can be among the most useful in the war against art forgery. However, it also raises legal questions involving expert opinions and declarations: imaging results are open to interpretation, and different experts may reach different conclusions about an artwork’s composition.
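The core comparison behind multispectral analysis can be illustrated with a small sketch. This is hypothetical data and a deliberately naive threshold, not a real imaging workflow: the idea is simply that pixels whose visible-light and infrared intensities diverge sharply may sit over hidden paint.

```python
import numpy as np

def overpaint_mask(visible: np.ndarray, infrared: np.ndarray,
                   threshold: float = 0.3) -> np.ndarray:
    """Boolean mask of pixels whose visible and IR intensities diverge."""
    diff = np.abs(visible.astype(float) - infrared.astype(float))
    return diff > threshold

# 2x2 toy image: the bottom-right pixel looks different under infrared,
# suggesting something beneath the visible surface at that spot.
vis = np.array([[0.8, 0.8], [0.8, 0.8]])
ir  = np.array([[0.8, 0.8], [0.8, 0.1]])
mask = overpaint_mask(vis, ir)
print(int(mask.sum()))  # -> 1 flagged pixel
```

Real systems capture many wavelength bands and rely on trained conservators to interpret the differences, which is exactly why the results remain open to expert disagreement.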

In the most tech-savvy approach to detecting forgery, artificial intelligence and machine-learning algorithms analyze large databases of both genuine and fake art, extracting patterns and features that distinguish one from the other. In a research development at Case Western Reserve University, this technology “combines data from the precise, three-dimensional mapping of a painting’s surface with analysis through artificial intelligence — a computer system based on the human brain and nervous system that can learn to identify and compare patterns.” In one study, AI and machine learning were able to spot forged art with greater than 95% accuracy. A key advantage of using AI in this field is that large amounts of data can be analyzed and evaluated quickly and efficiently, expediting the detection of potential forgeries compared to other methods. However, legal issues involving privacy arise as AI sifts through large datasets that may contain private or unconsented information. As the technology evolves, these algorithms can be updated and retrained to improve their accuracy.
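A heavily simplified sketch of the classification idea follows. This is not the Case Western system, which maps a painting's surface in three dimensions and uses neural networks; it is a toy nearest-centroid classifier over invented "surface texture" feature vectors, meant only to show how labeled examples of genuine and forged work can separate a new sample.

```python
import math

def centroid(vectors):
    """Average the feature vectors of one class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Invented feature vectors: [mean brushstroke depth, stroke-depth variance]
genuine = [[0.90, 0.10], [1.00, 0.12], [0.95, 0.11]]
forged  = [[0.40, 0.30], [0.50, 0.35], [0.45, 0.33]]
centroids = {"genuine": centroid(genuine), "forged": centroid(forged)}

def classify(features):
    """Assign a new sample to the nearer class centroid."""
    return min(centroids, key=lambda label: distance(features, centroids[label]))

print(classify([0.92, 0.11]))  # -> genuine
print(classify([0.48, 0.31]))  # -> forged
```

Production systems replace the hand-picked features with learned ones, but the legal questions are the same either way: where did the training data come from, and who consented to its use?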

The art world has been plagued by forgery and fakes for centuries, but new technologies such as spectroscopy, X-ray fluorescence, multispectral imaging, and AI-driven machine learning have revolutionized the way experts fight this war. It will be exciting to see what other technologies emerge in the upcoming years, as well as which celebrated paintings are discovered to be just fake copies.

Talking to Machines – The Legal Implications of ChatGPT

By: Stephanie Ngo

Chat Generative Pre-trained Transformer, known as ChatGPT, was launched on November 30, 2022. The program has since taken the world by storm with its articulate answers and detailed responses to a multitude of questions. A quick Google search of “chat gpt” returns approximately 171 million results, and in the first five days after launch, more than a million people signed up to test the chatbot, according to OpenAI’s president, Greg Brockman. But with new technology comes legal issues that require legal solutions. As ChatGPT continues to grow in popularity, it is now more important than ever to discuss how such a smart system could affect the legal field.

What is Artificial Intelligence? 

Artificial intelligence (AI), per John McCarthy, a world-renowned computer scientist at Stanford University, is “the science and engineering of making intelligent machines, especially intelligent computer programs, that can be used to understand human intelligence.” The first successful AI program was written in 1951 to play a game of checkers, but the idea of “robots” taking on human-like characteristics has been traced back even earlier. It has been predicted that AI, although prominent now, will permeate the daily lives of individuals by 2025 and seep into various business sectors. Today, the buzz around AI stems from the fast-growing influx of emerging technologies and the question of how AI can be integrated with current technology to innovate products like self-driving cars, electronic medical records, and personal assistants. Many are aware of what “Siri” is, and consumers’ expectations that Siri will soon become all-knowing are what continue to push the field of AI to develop at such fast speeds.

What is ChatGPT? 

ChatGPT is a chatbot that uses a large language model trained by OpenAI, an AI research and deployment company founded in 2015 and dedicated to ensuring that artificial intelligence benefits all of humanity. ChatGPT was trained on data from books and other written materials to generate natural, conversational responses, as if a human had written the reply. Chatbots are not a recent invention. In 2019, Salesforce reported that twenty-three percent of service organizations used AI chatbots; by 2021, the figure was closer to thirty-eight percent, a sixty-seven percent increase since its 2018 report. Their effectiveness, however, left many consumers wishing for a faster, smarter way of getting accurate answers.

In comes ChatGPT, which has been hailed as the “best artificial intelligence chatbot ever released to the general public” by technology columnist Kevin Roose of the New York Times. ChatGPT’s ability to answer extremely convoluted questions, explain scientific concepts, or even debug large amounts of code is indicative of just how far chatbots have advanced since their creation. Prior to ChatGPT, answers from chatbots were taken with a grain of salt because of the inaccurate, roundabout responses that were likely programmed from a template. ChatGPT, while still imperfect and slightly outdated (its knowledge is restricted to information from before 2021), is being used in ways that some argue could impact many different occupations and render certain inventions obsolete.

The Legal Issues with ChatGPT

ChatGPT has widespread applicability, being touted as rivaling Google in its usage. Since the beta launch in November, there have been countless stories from people in various occupations about ChatGPT’s different use cases. Teachers can use ChatGPT to draft quiz questions. Job seekers can use it to draft and revise cover letters and resumes. Doctors have used the chatbot to diagnose a patient, write letters to insurance companies, and even perform certain medical examinations.

On the other hand, ChatGPT has its downsides. One of the main arguments against ChatGPT is that the chatbot’s responses are so natural that students may use it to shirk their homework or plagiarize. To combat academic dishonesty and misinformation, OpenAI has begun work on accompanying software, training a classifier to distinguish between AI-written and human-written text. OpenAI has noted that, while not wholly reliable, the classifier will become more reliable the longer it is trained.
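To make the detection problem concrete, here is a purely illustrative heuristic; it is not OpenAI's classifier, which is a trained model. One signal sometimes cited for machine-generated text is low "burstiness," meaning unusually uniform sentence lengths, and this toy function simply measures that variance. The sample sentences and any threshold a reader might apply are invented for demonstration.

```python
import re
import statistics

def sentence_length_variance(text: str) -> float:
    """Population variance of sentence lengths (in words); 0.0 if <2 sentences."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The meeting that was supposed to start an hour ago "
          "has been postponed again. Why?")
print(sentence_length_variance(uniform) < sentence_length_variance(varied))  # -> True
```

A single surface statistic like this is easy to fool in both directions, which is precisely why real classifiers are trained on large corpora, and why even those remain unreliable enough that OpenAI hedged its own announcement.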

Another argument that has arisen involves intellectual property issues. Is the material that ChatGPT produces legal to use? In a similar situation, a different artificial intelligence program, Stable Diffusion, was trained to replicate an artist’s style of illustration and create new artwork based upon the user’s prompt. The artist was concerned that the program’s creations would be associated with her name because the training used her artwork.

Because of how new the technology is, the case law addressing this specific issue is limited. In January 2023, Getty Images, a popular stock photo company, commenced legal proceedings against Stability AI, the creators of Stable Diffusion, in the High Court of Justice in London. Getty claims that Stability AI infringed intellectual property rights in content owned or represented by Getty Images, absent a license and to the detriment of the content creators. A group of artists has also filed a class-action lawsuit against companies with AI art tools, including Stability AI, alleging the violation of the rights of millions of artists. As for ChatGPT itself, when asked about potential legal issues, the chatbot stated that “there should not be any legal issues” as long as it is used according to the terms and conditions set by the company and with the appropriate permissions and licenses, if any.
Last, but certainly not least, ChatGPT is unable to assess whether it is compliant with state privacy laws or with the European Union’s General Data Protection Regulation (GDPR), known by many as the gold standard of privacy regulations. ChatGPT’s lack of compliance with the GDPR or any privacy law could have serious consequences if a user feeds it sensitive information. OpenAI’s privacy policy does state that the company may collect any information a user communicates to the service, so anyone using ChatGPT should pause and consider the impact of sharing information with the chatbot before proceeding. As ChatGPT improves and advances, the legal implications are likely only to grow in turn.
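One practical way to act on that "pause before sharing" advice is to strip obvious personal identifiers from a prompt before it ever reaches a chatbot. The sketch below is a minimal, hedged example: the regex patterns are simplistic illustrations, not a complete PII filter, and certainly not a GDPR-compliance mechanism.

```python
import re

# Simplistic example patterns; real PII detection needs far broader coverage.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace recognizable identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309 about SSN 123-45-6789."))
# -> Contact [email redacted] or [phone redacted] about SSN [us_ssn redacted].
```

Even a crude filter like this enforces the habit the privacy policy makes necessary: assume anything typed into the chatbot may be collected, and scrub it accordingly.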

Deepfakes – A Disastrous Merger of AI and Porn


By David O’Hair

First appearing on Reddit, a new trend called “deepfakes” has captured the public’s attention with one of the internet’s oldest promises: nude celebrity photos. Intimate celebrity images appearing online is nothing new in and of itself. A 2014 hack exposed hundreds of nude celebrity images, while Gawker notoriously posted Hulk Hogan’s sex tape.

However, deepfakes present a novel issue in that the images, and often videos, of the celebrities are fake, but the underlying porn is real. Deepfakes use artificial intelligence mixed with facial-mapping software to essentially copy and paste someone’s face into preexisting porn content. The AI software’s sophistication is such that the content it creates can be virtually indistinguishable from an authentic video featuring a specific celebrity. Celebrities are often the victims of deepfakes because deepfakes require massive amounts of “raw footage” to import into the pornographic video, and chances are a celebrity has more footage of themselves available than the average person. Non-public figures can be the victims of deepfakes too.

The Key to the YouTube Advertisement Crisis: an Improved AI

By Derk Westermeyer

A little over four years ago, comedian Ethan Klein uploaded the first video to his YouTube channel, h3h3productions. The video’s premise was how people use toilet paper. While this type of comedy may not be for everyone, Ethan’s channel has largely been a success. Since that first video, Ethan has uploaded hundreds more videos to his channel, a large portion of which generate millions of views each.

Man or Machine? EU Considering “Rights for Robots”

By Grady Hepworth

Isaac Asimov’s 1942 short story “Runaround” is credited with creating the famous “Three Laws of Robotics.” Asimov’s Laws, although fictional (and most recently featured in the 2004 motion picture I, Robot), require robots to i) not hurt humans, ii) obey humans, and iii) protect themselves only when doing so would not conflict with the first two rules. However, the European Union (“EU”) made headlines this month when it took steps toward making Asimov’s Laws a reality.