Software’s Fourth Pricing Revolution Emerging for AI Agents


By: Joyce Jia

The Fourth Pricing Revolution: Outcome-Based Pricing

In August 2024, customer experience (CX) software company Zendesk made a stunning announcement: customers would pay only for issues resolved “from start to finish” by its AI agent. No resolution, or an escalation to a human? No charge. Meanwhile, Zendesk’s competitor Sierra, a conversational AI startup, introduced its own outcome-based pricing tied to metrics like resolved support conversations or successful upsells.

Zendesk claims to be the first in the CX industry to adopt outcome-based pricing powered by AI, but it seems to have already fallen behind: Intercom launched a similar model in 2023 for its “Fin” AI chatbot, charging enterprise customers $0.99 only when the bot successfully resolves an end-user query.
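
To make the mechanics concrete, here is a minimal sketch in Python of an outcome-based billing meter. The $0.99 rate mirrors the Intercom example above, but the event fields and function are hypothetical illustrations, not any vendor’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    id: str
    resolved_by_ai: bool       # resolved start-to-finish by the AI agent
    escalated_to_human: bool   # any human handoff disqualifies the charge

PRICE_PER_RESOLUTION = 0.99  # per-resolution rate, as in the Intercom example

def monthly_invoice(conversations: list[Conversation]) -> float:
    """Bill only for outcomes: AI-resolved conversations with no escalation."""
    billable = [
        c for c in conversations
        if c.resolved_by_ai and not c.escalated_to_human
    ]
    return round(len(billable) * PRICE_PER_RESOLUTION, 2)

convos = [
    Conversation("c1", resolved_by_ai=True, escalated_to_human=False),   # charged
    Conversation("c2", resolved_by_ai=False, escalated_to_human=True),   # free
    Conversation("c3", resolved_by_ai=True, escalated_to_human=True),    # free
]
print(monthly_invoice(convos))  # 0.99
```

Contrast this with a seat-based meter (price × users) or a usage-based meter (price × API calls): here the invoice depends entirely on how “resolved” is defined, which is precisely where the legal difficulty discussed below begins.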

This wave of outcome-based pricing represents the fourth major pricing revolution in software:

  • The first revolution began in the 1980s–1990s with seat-based licenses for shrink-wrapped software, where customers paid a one-time flat fee for ownership without automatic version upgrades.
  • The second emerged in the 2000s, when the industry transitioned to SaaS subscriptions, converting software into a recurring operational expense with continuous updates.
  • The third came in the 2010s with consumption-based cloud pricing, tying costs directly to actual resource usage.
  • The fourth and current revolution is outcome-based pricing, where customers are charged only when measurable value is delivered, rather than for licenses purchased or resources consumed.

In fact, the shift to outcome-based pricing extends far beyond AI customer support, spanning AI-driven sectors from CRM platforms like Salesforce to AI legal tech (EvenUp), fintech (Chargeflow), fraud prevention (Riskified and iDenfy), and healthcare AI agents. These companies are experimenting with pure outcome-based pricing or with hybrid models that combine traditional flat fees or usage-based charges with outcome-based components. Recent tech industry analysis shows that seat-based pricing for AI products dropped from 21% to 15% of companies in just one year, while hybrid pricing jumped from 27% to 41%.

Historical Precedent from Legal Practice: Contingency Fees

Outcome-based contracting isn’t a novel concept; it has been growing for over a decade in other industries. In the legal field, professionals have long worked with its equivalent in the form of contingency fees. Since the 19th century, lawyers in the United States have been compensated based on results, earning a percentage of the recovery only if they successfully settle or win a case. However, this model has been accompanied by strict guardrails. Under ABA Model Rule 1.5(c), contingency fee agreements must be in writing and clearly explain both the qualifying outcome and the calculation method. Additionally, contingency arrangements are prohibited in certain matters, such as criminal defense and domestic relations cases.

Beyond professional ethical concerns, the key principle is straightforward: when compensation hinges on outcomes, the law demands heightened transparency and well-defined terms. AI vendors adopting outcome-based pricing should expect similar guardrails to develop, ensuring both contract enforceability and customer trust. This requirement stems from traditional contract law, not AI-specific regulation. 

The Critical Legal Question: Defining “Outcome”

One of the biggest challenges in outcome-based pricing is contract clarity. Contract law requires essential terms to be clearly defined; if those terms are vague or cannot be determined with reasonable certainty, the agreement may be unenforceable. Applied to AI agents, one critical question arises: how do you precisely and fairly define a “successful” outcome?

The answer can be perplexing. Depending on the nature of the AI product, multiple layers can contribute to delivering an “outcome,” such as internal infrastructure or workflows, external market conditions, marketing efforts, or third-party dependencies. These complicating factors make it hard to assign clear ownership of results or to establish precise payment triggers, especially when the “outcome” is delivered over an extended period.

The venture capital firm Andreessen Horowitz recently conducted a survey highlighting the issue: 47% of enterprise buyers struggle to define measurable outcomes, 25% find it difficult to agree on how value should be attributed to an AI tool or model, and another 24% note that outcomes often depend on factors outside the AI vendor’s control.

These are not just operational challenges. They raise a real legal question about whether the contract terms are enforceable under the law. 

Consider these scenarios that illustrate the difficulty (a sketch of how a contract might encode such triggers follows the list):

  • What happens if the outcome is only partially achieved?
  • What if the AI agent resolves the issue but too slowly, leaving the user frustrated despite a technically successful outcome?
  • What if an AI chatbot closes a conversation successfully, but the customer returns later with a complaint?
  • What if a user ends the chat session without explicitly confirming whether the issue was resolved?
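
To show how much contractual drafting each bullet implies, here is a minimal, hypothetical sketch of a payment-trigger function. Every threshold in it (the speed cutoff, the reopen window, the confirmation requirement) is an invented term that the parties would have to negotiate and define explicitly:

```python
from datetime import datetime, timedelta
from typing import Optional

MAX_RESOLUTION_TIME = timedelta(hours=1)  # hypothetical speed threshold
REOPEN_WINDOW = timedelta(days=7)         # hypothetical "claw-back" window

def is_billable(
    started_at: datetime,
    resolved_at: Optional[datetime],
    user_confirmed: bool,
    reopened_at: Optional[datetime],
) -> bool:
    """One possible contractual definition of a billable 'outcome'.

    Each branch corresponds to a scenario in the list above.
    """
    if resolved_at is None:
        return False  # no (or only partial) resolution
    if resolved_at - started_at > MAX_RESOLUTION_TIME:
        return False  # technically resolved, but too slowly
    if reopened_at is not None and reopened_at - resolved_at <= REOPEN_WINDOW:
        return False  # customer returned later with a complaint
    if not user_confirmed:
        return False  # session ended without explicit confirmation
    return True

start = datetime(2025, 1, 1, 10, 0)
print(is_billable(start, start + timedelta(minutes=20), True, None))  # True
```

Each `False` branch is a point where vendor and buyer may disagree, and each threshold must be definite enough to survive an enforceability challenge.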

As Kyle Poyar, a SaaS pricing expert and author of an influential newsletter on pricing strategy and product-led growth, observed: 

“Most products are just not built in a way that they own the outcome from beginning to end and can prove the incrementality to customers. I think true success-based pricing will remain rare. I do think people will tap into the concept of success-based pricing to market their products. They’ll be calling themselves ‘success based’ but really charge based on the amount of work that’s completed by a combination of AI and software.”

Legal Implications for the Future

Like AI agents themselves, outcome-based AI pricing is evolving at breakneck speed. The blossoming of this new pricing model presents a challenge for contract implementation and requires existing contract terms to adapt once again to accommodate new forms of value creation and innovative business models.

The scenarios above are just a few examples, but they underscore the importance of attorneys working closely with engineering and business teams to meticulously identify potential conflicts and articulate key contract terms grounded in clear metrics and KPIs that objectively define successful outcomes. 

“Outcome” could mean different things to different parties, and its definitional ambiguity could create misaligned incentives. Buyers may underreport value, while vendors might game metrics to overstate performance. These dynamics will inevitably lead to disputes. AI vendors that have adopted or plan to adopt outcome-based pricing must develop robust frameworks addressing contract definiteness and attribution standards before disputes arise. Without these safeguards, we will likely see a wave of conflicts over vague terms, unenforceable agreements, and unmet expectations on both sides as AI agents surge.

From Bad to Worse: How a Smartphone App Went from Cargo Inspections to Asylum Seekers to Self-Deportation

By: Claire Kenneally

CBP One: From Cattle to Children 

CBP One, launched in October 2022, was a mobile app designed to expedite cargo inspections and entry into the United States for commercial truck drivers, aircraft operators, bus operators, and seaplane pilots transporting perishable goods. It allowed those entities to schedule an inspection with Customs and Border Protection (CBP).

In 2023, the Biden administration expanded the app’s use to asylum seekers. CBP One was designated as the only acceptable way to apply for asylum at the border. To schedule an appointment, asylum seekers had to be in northern or central Mexico. After creating an account, they could request an appointment. Appointments were distributed at random the following day, a practice criticized for “creating a lottery system based on chance.”

Asylum is legal protection for individuals fleeing persecution based on their religion, race, political opinion, nationality, or membership in a particular social group. Asylum is recognized under international law and U.S. law. If granted, the asylee may live and work in the United States indefinitely. Asylum seekers apply either at a port of entry along the border or after entering the U.S.

Texas National Guard on the Rio Grande, Preventing Asylum Seekers from Entering, by Lauren Villagran

The Human Cost of Faulty Technology

In researching this article, I found too many stories to count of CBP One’s dysfunction leaving migrants—already months into perilous and exhausting journeys—unable to claim their legal right to apply for asylum.

CBP One was plagued with regular glitches, errors, and crashes that were never adequately resolved. The app regularly refused to accept required photo uploads, miscalibrated geolocations, and froze, causing users to miss that day’s available appointments.

Many migrants described a lack of appointments once they had created accounts. Migrants in Matamoros reportedly waited up to six months in shelters, rented rooms, or on the street. Asylum seekers often travel by foot to the Mexican-American border and arrive with little more than the clothes on their backs. Most could not afford to pay for months in a hotel, and many experienced racism at the hands of Mexican nationals who refused to employ them. Waiting weeks, let alone months, was unsustainable. Read more about the wait times and the human impact here and here.

Other stories pointed out the inaccessibility of relying on a smartphone app to secure an appointment with CBP. To use it, “people need[ed] a compatible mobile device. . . a strong internet connection, resources to pay for data, electricity to charge their devices, tech literacy, and other conditions that place the most vulnerable migrants at a disadvantage.”

Those with older phones often could not complete the required photo upload and were told to purchase a new phone if they wanted a shot at an appointment. This unexpected financial burden often proved insurmountable.

For those with limited technological literacy (the ability to use and understand technology), the 21-step registration system operated as a barrier to entry. Similarly, people with intellectual or physical disabilities were often unable to use the app. 

A Place to Charge a Cell Phone [Was] a High Priority for Migrants Relying on Their Phone to Access the App by Alicia Fernández

The app was available only in English, Spanish, and Haitian Creole, and it was plagued by racist facial-recognition software that failed to register the faces of dark-skinned or Black applicants.

A Pressure Cooker at the Border 

The CBP One bottleneck pushed Mexican border towns to their breaking points, resulting in exceedingly dangerous and unhealthy conditions for migrants waiting for appointments. Doctors Without Borders reported a 70% increase in cases of sexual violence against migrants in one border town last year.

Centro, a Mexican asylum seeker, waited at a shelter for nine months to receive an appointment. One of her three children has asthma and had to sleep on the floor. “He has been hospitalized three times because of the cold,” she says.

Shelters quickly filled to capacity. Migrants were then forced to sleep on the streets. There, they were subject to extortion, harassment, and assault at the hands of cartel members and military police. They routinely had no access to running water, electricity, hot food, or protection from the elements. 

Mexican Migrants at the Juventud 2000 Shelter in Tijuana, Waiting for a CBP One Appointment  by Aimee Melo

One Harmful App Replaced by Another

On his inauguration day in January 2025, Donald Trump cancelled that afternoon’s CBP One appointments and ended asylum seekers’ use of the app. He also revoked the legal status (“parole”) of the 985,000 migrants who had successfully entered the U.S. after obtaining CBP One appointments.

Later that spring, the Trump administration’s rollout of a new app, CBP Home, offered no relief for the migrants at the border. Instead, the app greets asylum seekers currently in the U.S. by explaining the benefits of self-deportation. 

Self-deportation through CBP Home purports to provide immigrants with a free flight, a $1,000 stipend, and the ability to reenter the United States “legally” in the future. To fund these promises, the Trump administration diverted funds earmarked for refugee resettlement to pay for the flights and stipends.

But the law does not recognize this self-deportation scheme. The Marshall Project notes that “migrants who leave through [Trump’s self-deportation] process may unwittingly trigger reentry barriers— a possibility that self-deportation posters and marketing materials do not mention.”

The $1,000 stipend also has no legal backing, as there is no law authorizing payments to undocumented immigrants— meaning there are no legal repercussions if the administration refuses to send the stipend to someone once they have left the country.

And where in this new app does an asylum seeker in Mexico book an appointment with CBP to request asylum? Nowhere. That process has simply been erased.

Colombian migrant Margelis Tinoco, 48, cries after her CBP One appointment was canceled at the Paso del Norte international bridge in Ciudad Juarez, Mexico, on the border with the U.S., Monday, Jan. 20, 2025, the inauguration day of U.S. President Donald Trump by Christian Chavez

CBP One Data: Now Used to Fuel Deportations 

When CBP One was active, asylum seekers were required to input a variety of information about themselves to create an account. There was no way to “opt out” of sharing the information the government demanded. 

Now, using data obtained from those original profiles, DHS has begun stalking and harassing asylum seekers currently in the United States, urging them to “self-deport” with the new CBP Home app. This raises frightening ethical questions about data privacy, government surveillance, and how the information we provide can be weaponized against us. 

Screenshot of the Department of Homeland Security’s webpage on CBP Home, taken by the author on November 12, 2025.

A Bleak New Reality 

As of November 2025, the Trump administration has shut down asylum-seeking at the U.S.-Mexico border, trapped migrants in Mexico, and left those in the U.S. fearing their own data will be used to deport them. What began as a flawed app has escalated into something more sinister and predatory, and it requires all of us to pay attention and to advocate for change.

#AsylumCrisis #DigitalBorders #WJLTA

The AI-Powered Smart Home: From Convenience to Privacy Trap

By: Francis Yoon

The “smart home” revolution promised a seamless life: a programmable coffee maker, lights that dim automatically, and a thermostat that learns your comfort level. Devices like these, which connect and exchange data with other devices and the cloud, are collectively known as the Internet of Things (IoT). That was the old “smart.” The modern AI-powered smart home is an AI-enabled, data-centric habitat: a pervasive ecosystem of sensors, microphones, and cameras whose primary function is not just automation, but data extraction.

Consider this: A voice assistant records you ordering medication late at night; a smart thermostat notes a sudden, prolonged drop in energy use; a smart watch tracks erratic sleep patterns. Separately, these are minor details, but when AI algorithms combine them, they can infer sensitive data (a new chronic illness, a major life event, or a precise work schedule). The potential for this detailed inference is highlighted by privacy advocates who note that even smart meter energy data reveals intimate details about home habits, like showering and sleeping.
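
As a deliberately simplified, hypothetical illustration of how individually innocuous signals combine into a sensitive inference (real systems use far more complex machine-learning pipelines), consider this sketch; the signal names and threshold rules are invented:

```python
from dataclasses import dataclass

@dataclass
class HomeSignals:
    late_night_med_orders: int    # voice-assistant medication orders after midnight
    energy_kwh_delta: float       # change in daily energy use vs. 30-day baseline
    restless_sleep_nights: int    # nights flagged by a smart watch this week

def infer_profile(s: HomeSignals) -> list[str]:
    """Combine minor details into inferences no single device could make."""
    inferences = []
    if s.late_night_med_orders >= 2 and s.restless_sleep_nights >= 4:
        inferences.append("possible new chronic illness")
    if s.energy_kwh_delta < -3.0:
        inferences.append("home empty on a predictable work schedule")
    return inferences

print(infer_profile(HomeSignals(3, -4.2, 5)))
# ['possible new chronic illness', 'home empty on a predictable work schedule']
```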

This inferred data is the real trap. It is highly personal and potentially discriminatory if used by insurers or targeted advertisers, all while being entirely invisible to the homeowner.

The core danger of modern smart homes is not the collection of a voice command, but the AI-powered inference that follows.

The Danger of Data Inference and the Black Box

This process of data collection is housed within a legal “Black Box”: AI systems that make highly sensitive decisions about individuals without revealing the underlying logic.

Manufacturers claim the AI models and algorithms that create these inferences are protected as proprietary trade secrets. This directly conflicts with a user’s right to access information about how and why an AI made a certain decision or inference about them, a core tenet of modern data protection law. This legal conflict between transparency and corporate intellectual property is the subject of intense debate.

Furthermore, your home data is shared across a fragmented ecosystem that includes: the device maker, the voice assistant platform (e.g., Amazon, Google), and third-party app developers. When a data breach occurs, or a harmful inference is made, the liability for any resulting damage is so fractured that no single entity takes responsibility, leaving the consumer without recourse. This lack of clear accountability is a major flaw in current AI and IoT legal frameworks.

The stakes are real. The Federal Trade Commission (FTC) took action against Amazon for violating the Children’s Online Privacy Protection Act (COPPA) by illegally retaining children’s voice recordings to train its AI algorithm, even after parents requested deletion. This resulted in a $25 million settlement and a prohibition on using the unlawfully retained data to train its algorithms, further showing how data maximalism (collecting and keeping everything) can be prioritized over legal and ethical privacy obligations.

Privacy-by-Design: Aligning Ethics with IP Strategy

The legal landscape is struggling to keep pace, relying on outdated concepts like “consent,” which is meaningless when buried in a 5,000-word Terms of Service for a $50 smart plug. Consumer reports confirm that pervasive data collection is a widespread concern that requires proactive consumer steps.

The solution should be to shift the burden from the consumer to the manufacturer by mandating Privacy-by-Design (PbD). This concept, already explicitly required by Article 25 of the EU’s General Data Protection Regulation (GDPR), demands that privacy be the default setting, built into the technology, ensuring that “by default, only personal data which are necessary for each specific purpose… are processed” with regard to the amount of data collected and the extent of its processing.

To make this framework actionable and commercially viable, it should be interwoven with Intellectual Property (IP) strategy.

The technical mandate for data minimization is to use Edge AI, or local processing: raw, sensitive data must be processed on the device itself, not in the cloud, and only necessary, protected data should be transmitted. This technical shift should be incentivized by an IP strategy that rewards patents protecting Privacy-Enhancing Technologies (PETs), such as techniques that allow AI models to be trained across many devices without ever moving the user’s raw data (federated learning), or methods that obscure individual data points with statistical noise (differential privacy).
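
As a toy illustration of one such PET, here is a minimal differential-privacy sketch in Python. It assumes a simple counting query (e.g., how many nights a sensor detected occupancy) and adds Laplace noise before anything leaves the device; real deployments calibrate epsilon and sensitivity far more carefully:

```python
import random

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: float = 1.0) -> float:
    """Return a differentially private count via the Laplace mechanism.

    Noise scale b = sensitivity / epsilon; smaller epsilon means stronger
    privacy and more noise. The difference of two exponential draws with
    mean b samples a Laplace(0, b) variable.
    """
    b = sensitivity / epsilon
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_count + noise

# A device could report this noisy figure instead of the raw occupancy log.
print(dp_count(21))
```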

For transparency and auditability, manufacturers should be required to provide Granular Control & Logs (simple, mandatory interfaces showing what data is being collected and why, with logs that can be easily audited by regulators). The corresponding IP Strategy should require mandatory disclosure by conditioning the granting of IP protection for AI models on a partial, audited disclosure of their function, thereby eliminating the “Black Box” defense against regulatory inquiry. New laws are making these transparency measures, including machine-readable labeling and comprehensive logging, mandatory for certain high-risk AI systems.

Furthermore, the security mandate should require End-to-End Encryption (E2EE), a security method that ensures only the communicating parties can read a message, for all data, along with a guaranteed lifecycle of security updates and patches for every device sold. This should be backed by a product-liability shift in law that treats a product that fails to receive security updates as a “defective product,” creating a powerful legal incentive for manufacturers to maintain their devices. The need for this is supported by official guidance encouraging manufacturers to adopt a security-by-design-and-default mindset.
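
For illustration, here is a minimal E2EE sketch using the open-source PyNaCl library (`pip install pynacl`); the device names are hypothetical, and a production system would add key verification, rotation, and forward secrecy:

```python
from nacl.public import PrivateKey, Box

# Each endpoint generates its own keypair; private keys never leave the device.
thermostat_key = PrivateKey.generate()
owner_phone_key = PrivateKey.generate()

# The thermostat encrypts a reading so that only the owner's phone can read it.
sender_box = Box(thermostat_key, owner_phone_key.public_key)
ciphertext = sender_box.encrypt(b"occupancy: away; target temp: 18C")

# The vendor's cloud may relay `ciphertext` but cannot decrypt it.
receiver_box = Box(owner_phone_key, thermostat_key.public_key)
print(receiver_box.decrypt(ciphertext))  # b'occupancy: away; target temp: 18C'
```

The point of the sketch is structural: because decryption requires a private key that stays on the owner’s device, the manufacturer cannot mine relayed traffic for inferences.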

A Call for Fiduciary Duty and Mandatory Standards

For AI-powered smart homes to be a benefit, not a threat, the law should evolve beyond the current model of consumer consent, which has proven meaningless when privacy obligations are buried in massive Terms of Service agreements. The EU AI Act, for instance, is already moving toward a risk-based legal framework by listing prohibited practices like cognitive behavioral manipulation and social scoring, which are highly relevant to pervasive smart home AI. To this same end, we should implement two major safeguards.

Legislation should introduce minimum technical security and privacy standards for all smart devices before they can be sold (a digital equivalent of safety standards for electrical wiring). The default setting on a new smart device should be the most private one, not the one that maximizes data collection.

Additionally, smart home companies should be held to a fiduciary duty of care toward the users of their products. This legal concept, typically applied to doctors or financial advisors, would impose duties of care and loyalty requiring companies to place the user’s interests above their own financial interests in matters concerning data and security. This would force companies to legally act in the best interest of the user, regardless of what a user “consents” to in a convoluted contract. This single shift, supported by seminal legal scholarship, would fundamentally alter the incentives, forcing companies to design for privacy, as their primary legal duty would be to protect the user’s data, not to maximize its commercial value.

Overall, the battle for privacy is increasingly fought on the digital ground of our own homes. The AI-powered smart home doesn’t just automate our lives; it digitizes our intimacy. It is time to enforce a technical and legal framework that ensures innovation serves our well-being, not just corporate profit. The architecture of a truly smart home must start with privacy at its foundation.

#smart-home #privacy-trap #AI-governance

E-Lending Challenges and Libraries’ Mission to Ensure Information Access for All

By: Anusha Seyed Nasrulai

Library services have transformed from being administered primarily in the physical library space to providing cardholders with access to a broad range of digital materials, including ebooks, audiobooks, research, music, film, and more. When digital materials first entered the market, they offered great opportunities to increase the availability and accessibility of library collections. Libraries have adjusted their acquisition and curation efforts to accommodate increased demand for digital materials. At the same time, publishers and vendors have repackaged their products to drive profits, raising ebook costs to exorbitant rates. Libraries are “typically required to pay 3–4 times the consumer price for an ebook or audiobook license of a popular title.” Many publishers have also replaced perpetual licenses with time-limited licenses, and they further control the market by restricting “how many copies libraries can have, who they can lend to, and how long they (and their patrons) can keep the books.” As a result, library budgets are being consumed by licensing costs.

The e-lending marketplace presents multiple challenges to libraries’ longstanding commitment to ensure access to information for all. Digital materials are many patrons’ primary method of accessing information. For example, digital formats are essential resources to patrons with “vision impairment, dyslexia, and other physical or learning needs.”

Libraries are subject to the power wielded by vendors controlling access to vital digital materials. About five companies control publishing and dominate the market for licensing digital materials to libraries. Some of these companies have business enterprises beyond academic information, including the use and sale of personal and financial information. Thomson Reuters and RELX Group (the parent companies of Westlaw and LexisNexis, respectively) not only dominate the legal research market; they also own some of the largest news and academic databases and act as data brokers that sell to private entities and law enforcement agencies. Sarah Lamdan, former CUNY law librarian and professor, now an ALA director, has described the digital information landscape as monopolized information markets, which raises significant ethical and privacy concerns.

Libraries’ Response to Market Shifts

The rest of this article examines the implications of the market shift to digital materials for libraries and their patrons, focusing on ownership rights, open source projects, and patron privacy. In response to vendors’ overwhelming control of the digital information marketplace, libraries and researchers are developing solutions to ensure information access for all.

Ownership Rights

Libraries hold ownership rights over physical books and control lending access to them under the right of first sale. The “first sale” doctrine (17 U.S.C. § 109(a)) “gives the owners of copyrighted works the rights to sell, lend, or share their copies without having to obtain permission or pay fees.” However, this ownership doctrine does not control digital transmissions, including ebook acquisitions. Publishers create license agreements in partnership with vendors, who then license the materials to libraries. Margaret Chon, a law professor at Seattle University, argues that high prices and restrictive lending practices undermine the special position libraries have historically held in the copyright system as institutions protecting and facilitating public access to copyrighted works.

Without copyright reform, libraries are often at the mercy of vendors’ licensing models. In response, libraries have developed comprehensive strategies to negotiate with vendors and to select vendors that align with their mission. Still, “the contract-law focused world of copyright for digital content is much more heavily weighted to the benefit of publishers and to the greatest extent possible.” Therefore, libraries have also sought legal reform as one solution to the problems of the modern digital information marketplace.

ReadersFirst is an organization of almost 300 libraries dedicated to maintaining open and free access to ebooks as collections are increasingly digitized. ReadersFirst advocates for ebook legislation to prevent content restrictions, prohibitively high license prices, and the use of licenses to excise important copyright protections, such as fair use. This past summer, Connecticut passed an ebook bill, and other states have introduced similar legislation. The bill will be carefully watched, as similar legislation in Maryland and New York was undone by copyright challenges.

Open Source Projects

During the COVID-19 pandemic, the Internet Archive launched the National Emergency Library (NEL). NEL was a continuation of a previous online project in which scans of physical library books were “checked out” to people as though they were physical books. In Hachette v. Internet Archive, publishers successfully challenged NEL’s temporary lifting of the one-person limit on lending. Though this case did not involve a traditional library, it calls into question whether controlled digital lending practices by libraries are vulnerable.

To protect library projects that expand access to digital materials, new industry standards are being proposed. Controlled digital lending (CDL) protections allow libraries to lend, preserve, and archive digital materials. Currently, a new NISO consensus framework is being developed to support CDL in libraries, with the goal of expanding “understanding of CDL as a natural extension of existing rights held and practices undertaken by libraries for content they legally hold.”
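
The core of CDL is an owned-to-loaned ratio: a library circulates no more digital copies than the physical copies it legally holds. Here is a minimal sketch of that rule, with hypothetical class and method names:

```python
class CdlTitle:
    """Enforce CDL's core rule: digital loans never exceed copies legally held."""

    def __init__(self, owned_copies: int):
        self.owned_copies = owned_copies
        self.active_loans = 0

    def checkout(self) -> bool:
        # Lend only while an owned copy is not already out (owned-to-loaned ratio).
        if self.active_loans < self.owned_copies:
            self.active_loans += 1
            return True
        return False  # otherwise the patron joins a hold queue

    def checkin(self) -> None:
        self.active_loans = max(0, self.active_loans - 1)

title = CdlTitle(owned_copies=2)
print(title.checkout(), title.checkout(), title.checkout())  # True True False
```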

The ability to curate and share open source resources furthers libraries’ goal of ensuring information access for all. An important example of library open source projects is the research guide: a curated collection of high-quality, relevant resources on a given topic. Resources include articles, books, media, databases, special collections, exhibits, and programs. Kara Phillips, director of the Seattle University Law Library, stated that research guides “respond to important issues so that patrons can find reliable, authoritative information… [to] support democracy, rule of law, and the legal system.”

Patron Privacy

As vendors adapt to the competitive digital information marketplace, the change in business models has increased their appetite for patron data. As Roxanne Shirazi, a research librarian at CUNY, puts it, “[a]s lenders, library vendors do not end their relationships with libraries when they complete a sale. Instead, as streaming content providers, vendors become embedded in libraries. They are able to follow library patrons’ research activities, storing data about how people are using their services.”

Only a handful of states protect readers’ data outside of libraries. For example, California’s Reader Privacy Act safeguards readers’ data when they access physical books or ebooks. Therefore, ensuring patron privacy and holding vendors accountable to ALA privacy standards are central to libraries’ mission.

The Path Forward for Libraries

Librarians and other stakeholders are organizing to address the profound problems that have arisen from changes in the e-lending market. In providing guidance regarding digital access, the American Library Association states, “[i]n order to have a functional democracy, we must have informed citizens. Libraries are an essential part of the national information infrastructure, providing people with access and opportunities for participation in the digital environment, especially those who might otherwise be excluded.”

Has the law of products liability kept up with current military contractor practice?


By: Nicholas Skubisz-Gonzalez

From air transport to nuclear energy, military contractors in the United States have famously benefitted from the expansive government funding necessary to develop new technology and implement it at scale. However, when issues come up with these projects, who pays the human costs associated with failure by contractors?

The answer, since 1988, has largely been that liability does not rest with the contractors that created the equipment at issue, so long as they satisfy all three elements of the Government Contractor Defense set out in Boyle v. United Technologies Corp. That case involved the death of a Marine Corps pilot due to a defective escape hatch, which resulted in a products liability suit brought by his family against the company that built the hatch. The District Court initially ruled in Boyle’s favor under state tort law; the Fourth Circuit reversed on appeal; and the Supreme Court ultimately held that the manufacturer was immune from liability because it had simply built the helicopter (including the escape hatch) according to government specifications. Under the test created in Boyle, a contractor need only show that (1) the United States agreed to reasonably precise specifications, (2) the equipment satisfied those specifications, and (3) the supplier warned the United States about any dangers in using the equipment that it knew of and the United States did not. This defense gives manufacturers the benefit of resolving cases before they are forced through costly litigation or the risks of discovery.

Under the defense set out in Boyle, whose applicability has largely expanded in the decades since and only rarely been limited, an increasingly large portion of government contractors have gained immunity from products liability claims by third parties. Most importantly, this includes blocking suits by the military servicemembers the equipment is meant to benefit.

How does the Government Contractor Defense apply in modern defense contracts?

One major military contract that may someday require this defense is the Army’s Integrated Visual Augmentation System (IVAS), a helmet incorporating elements of virtual and mixed reality to enhance soldier perception on the battlefield. The project was initially awarded to Microsoft in 2018 with the intent of creating prototypes for testing before full production began. Issues have arisen over the years, however, due to a failure to ensure user acceptance among military personnel, i.e., how many soldiers actually approve of their future equipment. This issue puts the $21.88 billion contract at risk, according to a 2022 Department of Defense audit by the Inspector General. The report states that the product description lacks any measurement for user acceptance, despite the fact that the Army’s sole measurement for system acceptability is user acceptance.

Measured against the Boyle elements, this project could pose a significant risk to manufacturers. Here, the manufacturer was provided with limited specifications and had no realistic ability to satisfy them, given the lack of adequate user satisfaction metrics. User satisfaction metrics are testing requirements for system effectiveness when used by the intended users, in this case soldier satisfaction with IVAS systems. Setting aside portions of the contract that may not be publicly available, the defense might not apply here, which could explain the recent history of the contract. Despite its value, the Army handed over control of the IVAS contract to Anduril Industries via a contract novation signed on April 10, 2025. A contract novation is the legal process of replacing one party in a contract with another, shifting both the rights and responsibilities specified in the contract onto the new party with the consent of all involved.

A unique point about this contract changeover is how the scope of responsibilities has changed for each contractor given the modern landscape. The U.S. military has in recent years reported a troublesome “substantial consolidation” of military contractors, prompting a goal of diversifying reliable sources of supply across more businesses. When the contract was initially awarded to Microsoft, the goal was to create effective helmets with Mixed Reality capabilities that expanded the soldier’s awareness on the battlefield. While the hardware has not caused any serious public concerns for the Army, the initial project did produce several software problems that warranted recompeting the contract, which Microsoft ultimately lost to Anduril. Since the novation, Microsoft only provides the hardware it already developed, and Anduril is responsible only for the software and integration component of the contract: getting its EagleEye software to operate on the Microsoft hardware. Anduril has conducted several tests to ensure compatibility with the existing Microsoft-created IVAS 1.2 design, but if this software-hardware connection should fail in the field, a valid question remains as to who would be responsible.

Who’s holding the liability hot potato?

Microsoft created the nausea-inducing headset under the original contract; Anduril focused on enhancing the software for user comfort and capability and was then handed responsibility for the full contract; and in both instances the technology sits on the cutting edge of Mixed Reality capabilities. Was it even possible to articulate reasonably precise specifications for either company?

Common practice might suggest that any tort case involving this equipment would assign liability to the party responsible for the portion of the hardware or software at issue, yet the Government Contractor Defense presents potentially significant limits for litigants. One of the main limits is the ability to narrow the scope of discovery, due to confidentiality concerns and the need for testimony by government personnel. This creates a trend of cases where plaintiffs lack meaningful information about the equipment that caused their injuries, preventing them from identifying the root cause, or the defendant genuinely responsible. While the types of cases covered by the Boyle defense have grown over the decades since, the current doctrine creates a legal limbo for contracts on developing technologies that have, by their nature, extremely imprecise specifications and multiple contractors taking on full responsibility for different phases of development of the same equipment. Should the defense be expanded to accommodate current practices, as it historically has been post-Boyle? Should it be restricted, assuming that liability exists unless the military specifically approved the conduct at issue, as Justice Kagan suggested in a case currently before the Supreme Court? The ideal solution going forward largely depends on one’s own balancing of innovation against accountability.

#WJLTA #Military-Contractor #Products-Liability