Massive Tech Layoffs Negatively Impact H-1B Visa Workers and Immigration

By: Talia Cabrera

At the beginning of 2023, no one would have expected that the U.S. tech sector would be in the headlines for laying off thousands of workers. Tech giants like Google, Meta, Disney, and Microsoft were forced to deal with the consequences of inflation and a potential recession after the pandemic. Even Amazon was not spared from the wave of layoffs, despite its profits increasing 220 percent during the first year of the pandemic. Collectively, the U.S. tech sector has laid off more than 150,000 workers. So why are we seeing tech companies lay off their workers? Rapid hiring driven by fast growth, with little regard for the implications for workers’ lives, especially those of H-1B visa holders.

Though layoffs are meant to alleviate a company’s financial burden, they can disrupt a worker’s life with a single email. Workers who no longer have a job are left to start over and search for a new one at a time when companies are freezing hiring. Though these layoffs have had a negative impact on thousands of people, one group of workers is left in a uniquely precarious position: U.S. immigrants holding an H-1B visa.

The H-1B visa is a work visa that allows U.S. employers to sponsor a foreign worker to work in the U.S. for a specific period of time. These “specialized skill” visas are heavily used by large tech companies and have contributed to their success. For example, in 2021, Amazon was approved for over 4,800 H-1B visas, Microsoft was approved for 1,200, and Apple for over 1,000. Yet the recent wave of tech layoffs has exposed the lack of support H-1B visa holders have when the unexpected happens. Once an H-1B visa holder is told that they no longer have a job, they face a strict deadline to find a new one. If an H-1B visa holder is unable to find a new employer within a 60-day window, they may be forced to leave the United States and return to their home country.

But the reality is that many of these visa holders have built lives in the United States over many years and now face the possibility of deportation. They have invested time and resources in their U.S. careers, and many have built families and communities here. The post-pandemic economy is highlighting how badly this system needs to be updated. In the current economic climate, hiring freezes leave visa holders worried about their future in the United States, especially as they compete in an already crowded job market.

So what needs to change? More resources need to be in place to help H-1B visa workers during layoffs. Tech companies have invested millions of dollars in lobbying for visa workers in order to drive innovation, so they should also support those workers through the transition period. Tech companies should facilitate a smooth transition or risk losing future generations of skilled workers. Perhaps they should lobby to extend the 60-day window, or remain sponsors until a new employer can take over sponsorship so that workers can keep working toward citizenship. If tech companies want to rely on H-1B visa holders, they cannot take advantage of them and then leave them with nothing.

The wave of tech layoffs in 2023 has had a significant impact on many workers and has highlighted the lack of support for them. Employers and policymakers need to stop treating greed as the motivating factor for innovation and instead make sure their workers are taken care of. Until then, we will continue to see big tech companies chasing profits without a care in the world.

Alexa: Are You Going to Testify Against Me?

By: Melissa Torres

Life seems pretty great in a world where we can turn off lights, play music, and close the blinds simply by speaking it into existence. But what happens when your conversations or home noises are used against you in a criminal investigation?

Smart speakers, such as Google Home and Amazon Alexa, are marketed as great tech gifts and the perfect addition to any home. A smart speaker is a speaker that can be controlled with your voice using a “virtual assistant”. It can answer questions for you, perform various automated tasks and control other compatible smart devices by simply activating its “wake word.”

According to Amazon.com, in order for a device to start recording, the user has to awaken the device by saying the default word, “Alexa.” The website states, “You’ll always know when Alexa is recording and sending your request to Amazon’s secure cloud because a blue light indicator will appear or an audio tone will sound on your Echo device.” Unless the wake word is used, the device does not listen to any other part of your conversations as a result of built-in technology called “keyword spotting”, according to Amazon.
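The "keyword spotting" pattern Amazon describes can be illustrated with a minimal sketch. This is not Amazon's actual implementation; the wake word, the word-per-frame representation of audio, and the function name are all simplifications for illustration. The key idea is that nothing is transmitted until an on-device matcher hears the wake word:

```python
# Illustrative sketch of on-device keyword spotting (NOT Amazon's code).
# Audio "frames" are represented here as lowercase words for simplicity.

WAKE_WORD = "alexa"  # hypothetical default wake word

def keyword_spotting(frames):
    """Return only the frames that would be streamed to the cloud.

    Frames heard before the wake word are discarded locally; frames
    heard after it are buffered and sent (this is the moment the blue
    light or audio tone would fire on a real device).
    """
    sent = []
    awake = False
    for frame in frames:
        if not awake:
            if frame == WAKE_WORD:
                awake = True  # device "wakes up" and starts recording
        else:
            sent.append(frame)  # post-wake audio goes to the cloud
    return sent
```

Under this design, a conversation that never contains the wake word produces no transmitted audio at all, which is the privacy guarantee Amazon's marketing rests on.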

Similarly, Google states, “Google Assistant is designed to wait in standby mode until it detects an activation, like when it hears ‘Hey Google.’ The status indicator on your device will let you know when Google Assistant is activated. When in standby mode, it won’t send what you’re saying to Google servers or anyone else.” 

Consumers consent to being recorded when they willingly enter a contract with these smart devices by clicking “I agree to the terms and conditions.” However, most people assume this consent applies only when the “wake word” is used. Despite assurances from tech giants that these devices do not record without being prompted, there have been many reports suggesting otherwise. And in recent years, these smart devices have garnered attention after being called as star witnesses in murder investigations.

In October 2022, someone fatally shot two researchers before setting fire to the apartment they were found in. According to the report, Kansas police believe the killer was inside the apartment with the duo for several hours, including before and after their deaths. Investigators found an Amazon Alexa device inside the apartment and filed a search warrant for access to the device’s cloud storage, hoping it may have recorded clues as to who is responsible for the murders. If the police obtain relevant information, they may be able to use it in court, depending on how this evidence is classified.

Under the Federal Rules of Evidence, all relevant evidence is admissible unless another rule specifies otherwise. Specifically, statements that are considered hearsay are not admissible unless an exception applies. Hearsay is a statement made out of court that is offered to prove the truth of the matter asserted. Although these devices technically do produce statements, courts have held that a statement is something uttered by a person, not a machine. However, there is an important distinction between computer-stored and computer-generated data. Computer-stored data that was entered by a human has the potential to be hearsay, while data a computer generates without human input is not considered hearsay. How these statements will be classified, and whether they will be permitted in court, is up to the judge.

This isn’t the first time police have requested data from a smart speaker during a murder investigation. In 2019, Florida police obtained search warrants for an Amazon Echo device, believing it may have captured crucial information about an alleged argument at a man’s home that ended in his girlfriend’s death. In 2017, a New Hampshire judge ordered Amazon to turn over two days of Amazon Echo recordings in a case where two women were murdered in their home. In those cases, the parties consented to handing over the data held on the devices without resistance. In a 2015 Arkansas case involving a dead man found floating in a hot tub, however, Amazon pushed back when authorities requested data. Amazon explained that while it did not intend to obstruct the investigation, it also sought to protect its consumers’ First Amendment rights.

According to the complaint, Amazon’s legal team wrote, “At the heart of that First Amendment protection is the right to browse and purchase expressive materials anonymously, without fear of government discovery,” later explaining that the protections for Amazon Alexa were twofold: “The responses may contain expressive material, such as a podcast, an audiobook, or music requested by the user. Second, the response itself constitutes Amazon’s First Amendment-protected speech.” Ultimately, the Arkansas court never decided on the issue as the implicated individual offered up the information himself.      

Thus, a question remains unanswered: exactly how much privacy can we reasonably expect when installing a smart speaker? As previously mentioned, these smart speakers have been known to activate without the “wake word,” potentially capturing damning conversations. Without a specified legal standard, there is not much consumers can do to keep their private information from being shared, fueling the worry that these devices can be used against them. Tech companies like Amazon and Google suggest going into the settings and turning off the microphone when you aren’t using the device, but that requires trusting the company to honor those settings. Users also have the option to review and delete recordings, but again, they have to trust the company to follow through. The only sure way to protect yourself from these devices is simply not to purchase them. If you can’t bring yourself to do that, be sure to unplug the devices when you’re not using them. Otherwise, these smart speakers may one day be used as evidence against you in court.

Can college sports afford pay-to-play?

By: Kyle Kennedy

Earlier this month, the NCAA asked the 3rd Circuit to dismiss a federal lawsuit brought against it by multiple former student-athletes spearheading a legal effort to have student-athletes treated as paid employees by their schools. This effort would essentially require schools to compensate their student-athletes as employees and subject the schools to labor regulations. Judge Theodore McKee, one member of the 3rd Circuit panel hearing the motion, indicated that student-athletes could be considered employees under the Fair Labor Standards Act (FLSA). The FLSA covers individual employees whose work regularly involves them in interstate commerce, including travel to other states to do their jobs. The NCAA limits teams to 20 hours of practice per week, but student-athletes report spending between 35 and 40 hours per week on their sport. Student-athletes travel interstate for competitions and essentially work on their sport full-time in addition to their classes and other responsibilities, which tends to point toward employee status. In September 2021, the National Labor Relations Board released a memo through its general counsel, Jennifer Abruzzo, stating that college athletes should be treated as employees of their schools.

In 2021, the NCAA generated $1.15 billion in revenue, with $850 million coming from the rights to televise March Madness. Despite their overloaded schedules, a 2019 study by the National College Players Association reported that 85% of college athletes living on campus and 86% living off campus fall below the federal poverty line. Most college athletes do not receive full scholarships; the average award for a Division I athlete was $18,013 for men and $18,722 for women. For Division II athletes those averages drop to $6,588 and $8,054 respectively, and Division III schools are prohibited from offering athletic scholarships altogether. While the NCAA has recently approved a policy allowing athletes to be compensated for their name, image, and likeness, these profits have mostly flowed to high-profile athletes in profit-bearing sports who likely already receive large or full-ride scholarships.

The lawyers for the athletes are not seeking a large award or a chunk of the NCAA’s profits. Instead, they are simply asking that athletes be paid a reasonable hourly wage, like students who work in libraries or dorms as part of work-study programs. The NCAA, in arguing for dismissal, contended that paying college athletes is a slippery slope, that it may lead schools to cut less profitable sports, and that qualifying student-athletes as paid employees could expose their scholarships to taxation. There is certainly some truth to these concerns, as Judge McKee of the 3rd Circuit offered that the court may take the stance that some athletes, such as “the quarterback at the SEC school,” would be considered employees while other athletes are not.

This could create huge complications for college athletic departments because football and men’s basketball often fund smaller schools’ entire athletics budgets. These schools could essentially be forced to eliminate many or all of their smaller sports to afford paying the athletes who fall under the FLSA. It’s also unclear how this dichotomy of employee and non-employee athletes would interact with regulations such as Title IX, the federal law that promotes gender equality in sports by requiring equalized investment. If sports like football and basketball were excluded from the calculation because their athletes are employees, schools could pour money into men’s basketball, football, and other profit-bearing programs without spreading funding among women’s teams and less profitable sports, leading to a huge loss in gender equity.

For many student-athletes, especially those in smaller sports, college athletics is not just about a scholarship or advancing an athletic career. Most athletes in these sports compete for far less than a full scholarship and pursue their sport out of passion. While it’s important to acknowledge that athletes in profit-bearing sports have traditionally been taken advantage of by the NCAA under the guise of amateurism, the recent changes to NCAA policy allowing name, image, and likeness deals let high-profile athletes reap their market worth. If a pay-for-play structure truly threatens the existence of these smaller sports at the college level, then perhaps the newly minted name, image, and likeness policy will have to serve as a placeholder for compensating athletes, or at least those with market value. Additionally, formally treating student-athletes as employees of their schools under the FLSA raises a host of unanswered questions requiring a massive overhaul of individual schools’ current policies and practices. Regardless of one’s opinion on how the case should turn out, college athletics departments and legal scholars alike will be carefully tracking it and its possible future implications.

Copyright Law (Taylor’s Version)

By: Melissa Torres

Are you ready for it? Taylor Swift is reportedly set to kick off 2023 with the release of a new album, Speak Now (Taylor’s Version). Despite just releasing the fastest-selling album of 2022, Midnights, fans have been speculating for quite a while about which of her early albums she’ll rerecord next. Reports state, “Taylor has quietly been in the studio working on remaking both Speak Now and 1989. All details are still being ironed out but Speak Now (Taylor’s Version) should be out within the next couple of months, before she kicks off her Eras world tour.”

But why is Taylor Swift rerecording old albums?  

While it may seem obvious to the general public that the writer, composer, and performer of a song would then own the recording of the song, the music industry functions on a different set of rules formed by contracts and copyrights. When a new artist signs with a record label, they form a contract which specifies the intellectual property rights of the works created. 

Copyright is a type of intellectual property that protects original works of authorship as soon as an author fixes the work in a tangible form of expression. Common types of work include photographs, illustrations, books, and music. These works are fixed when they are captured in a “sufficiently permanent medium such that the work can be perceived, reproduced, or communicated for more than a short time.” U.S. copyright law provides copyright owners with a list of exclusive rights and also provides owners of copyright the right to authorize others to exercise these exclusive rights, subject to certain statutory limitations. 

Typically, in the music industry, copyrights are divided between the musical composition of a song and its sound recording. The musical composition refers to the lyrics of a song, the music itself, or both. The sound recording, also known as the master, is the recorded performance of the song. As a result, more often than not, an artist’s record label owns the master of a song.  

In Swift’s case, she signed with the record label Big Machine Records in 2005 under a contract stipulating that Big Machine would own the rights to the sound recordings in perpetuity. After the deal ended in 2018, Swift signed with a different label. The recordings she made over those 13 years stayed with Big Machine, and the label sold the rights to them to Scooter Braun for $300 million in 2019. Swift alleges she was never given the opportunity to purchase these rights. Despite writing and performing over 82 songs, she has no rights to those recordings and receives no payment when they are played. The singer therefore embarked on a mission to rerecord her first six records in order to own both the musical composition and the master of the new recordings.

Because Swift wrote every song released on those six albums and therefore owns the musical composition copyright, she retains the “sync rights” to her music. A synchronization license is needed for a song to be reproduced in a television program, film, video, commercial, radio spot, or even a phone message. Permission from the owner of the master use license, typically the record company, must also be obtained if a specific recorded version of a composition is used for such a purpose. As a result, every time these songs are used for commercial purposes, the owner of the masters earns a profit.

By rerecording versions of her old hits, Swift will now hold the master and composition rights of these songs. To be clear, the original masters of these songs still exist, but by encouraging fans to stream the newer recorded version, Swift is able to reclaim any income that may have gone toward songs previously owned by her former label. 

What can we learn from Swift?

Swift’s case offers creators several important lessons about intellectual property rights. Situations like these, while not usually on the same scale, are relatively common in the entertainment industry. Prince, Kesha, and The Beatles are just some of the many artists who have fought for ownership of their music. Artists need to be careful when entering contracts in order to protect their intellectual property rights. Intellectual property is valuable, and it is crucial that artists recognize the significance of protecting their rights. Without intellectual property protection, artists would not be fully compensated for their creations; their desire to produce new work would decline, and cultural innovation would suffer. Moreover, creators should never rush to sign a contract before consulting a legal professional and fully understanding the future implications of each clause, as those clauses can have enormous ramifications. The document Swift signed in 2005 is still affecting not only her life but the music industry today. Despite the legal hurdles Swift has faced, she has ultimately been able to profit from recreating her old music. Her strong fan base has rallied behind her rerecorded music and helped her continue her career as one of the most successful female artists of the decade.

Is AI Good in Moderation?

By: Chisup Kim

In 2016, Microsoft released Tay, an AI-based chatbot on Twitter that became smarter as users interacted with it. Unfortunately, the experiment did not last long: some Twitter users coordinated a barrage of inappropriate tweets at Tay to force the chatbot to parrot racist and sexist content. Within hours of going online, Tay was tweeting racial slurs, support for Gamergate, and other incredibly offensive positions. Last week, Microsoft returned to the AI space by launching a new AI-powered Bing search engine in partnership with OpenAI, the developers of ChatGPT. Unlike Tay, the Bing Search AI is designed as a highly-powered assistant that summarizes relevant articles or suggests related products (e.g., recommending an umbrella for sale alongside a rain forecast). While many news outlets are focused on whether the Bing AI chatbot is sentient, the humanization of an AI-powered assistant raises new questions about the liability that could arise from the AI’s recommendations.
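The mechanism behind Tay's failure can be shown with a toy model. This sketch is not Microsoft's implementation; the `EchoBot` class and its "reply with the most-frequent input" rule are stand-ins for any model that learns directly from unfiltered user messages. It shows how a small coordinated group can outvote many ordinary users:

```python
# Toy illustration of data poisoning in a model that learns from users.
# (Hypothetical: Tay's real architecture was far more complex.)
from collections import Counter

class EchoBot:
    """Learns from every message it hears and replies with the phrase
    it has seen most often -- a crude stand-in for 'becomes smarter as
    users interact with it'."""

    def __init__(self):
        self.seen = Counter()  # tally of every message ever heard

    def listen(self, message):
        # No moderation step: all user input feeds the model directly.
        self.seen[message] += 1

    def reply(self):
        if not self.seen:
            return ""
        # The most frequent input wins, whoever sent it.
        return self.seen.most_common(1)[0][0]
```

Because every message carries equal weight and nothing is filtered before training, a handful of attackers repeating one phrase can dominate the bot's output even when benign users are the majority of accounts, which is essentially what the coordinated barrage against Tay exploited.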

Content moderation is no easy task technically. While search engines provide suggestions based on statistics, search engine engineers also run parallel algorithms to “detect adult or offensive content.” However, these rules may not cover more nefariously implicit searches. For example, a search engine would likely limit or ban explicit searches for child pornography, but a user might type, say, “children in swimsuits” to get around those parameters, simultaneously influencing the overall algorithm. While the influence may not be as direct or extensive as with Tay on Twitter, machine learning algorithms incorporate user behavior into their future outputs, which can taint the search experience for the originally intended audience. In this example, search results tainted by a perverted minority could affect the results for a parent looking to buy an actual swimsuit for their child, surfacing photos depicting inappropriate poses. Around five years ago, Bing was criticized for suggesting racist and provocative images of children that were likely influenced by the searches of a few nefarious users. Content moderation is not an issue that lives just with the algorithm or just with its users, but rather a complex relationship between both that online platforms and their engineers must consider.
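The gap between explicit and implicit queries can be made concrete with a minimal sketch. This is not any real search engine's filter; the blocklist contents and function name are placeholders. It shows why a term-matching filter catches explicitly banned queries but waves through innocuous-sounding phrasings of the same intent:

```python
# Minimal sketch of a blocklist-style query filter (hypothetical --
# production moderation pipelines use ML classifiers, not just lists).

BLOCKED_TERMS = {"bannedterm", "forbiddenphrase"}  # placeholder blocklist

def is_allowed(query):
    """Allow the query only if none of its words appear on the blocklist.

    This is exact word matching: a query whose individual words are all
    innocent passes, no matter what intent the combination encodes.
    """
    words = set(query.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)
```

An explicit query containing a blocked term is rejected, while an implicit query built entirely from everyday words sails through; closing that gap requires modeling intent rather than vocabulary, which is exactly where user behavior starts to leak back into the algorithm.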

Furthermore, the humanization of a recommendation service that alters how third-party content is provided may lead to further liability for the online platform. The University of Washington’s own Professor Eric Schnapper is involved in Gonzalez v. Google, which examines whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make algorithmically targeted recommendations of third-party content. Section 230 currently immunizes most online platforms that qualify as an “interactive computer service” from being treated as a “publisher or speaker” of third-party information or content. The Gonzalez plaintiffs are challenging Google on the grounds that YouTube’s algorithmic recommendation system led some users to be recruited into ISIS, ultimately contributing to the death of Nohemi Gonzalez in the 2015 terrorist attacks in Paris. After the first day of arguments, the Supreme Court Justices seemed concerned about “creating a world of lawsuits” by attaching liability to recommendation-based services. Whatever the result of the lawsuit, the interactive nature of search engine assistants creates more of a relationship between the user and the search engine. Scrutiny of how content is provided has also appeared in administrative and legislative contexts, such as the SEC’s 2021 research into the gamification of stock trading and California’s restrictions on the types of content designs permitted on websites intended for children. If Google’s AI LaMDA could pass the famous Turing Test and appear sentient (even if it technically is not), would the tech company bear more responsibility for the results of a seemingly sentient service, or would more responsibility fall on the user’s responses?

From my perspective, it depends on the role search engines give their AI-powered assistants. As long as these assistants are just answering questions and providing pertinent, related recommendations without taking demonstrative steps to guide the conversation, search engines’ suggestions may still be protected as harmless recommendations. However, engineers need to remain vigilant about how user interaction in the macroenvironment may influence AI and its underlying algorithms, as seen with Microsoft’s Twitter chatbot Tay and some of Bing’s controversial suggestions. Queries sent with covert nefariousness should be closely monitored so as not to influence the experience of the general user. AI can be an incredible tool, but online search platforms should be cognizant of the rising questions of how to properly moderate content and how to display it to users.