
By: Alexander Okun
The financial technology (“Fintech”) industry has created a tidal wave of novel opportunities to borrow, save, and invest; one of these is “Peer-to-Peer” (“P2P”) lending. P2P platforms act as intermediaries between borrowers (usually individuals or small businesses) and lenders. Unlike traditional personal loans, the lenders are individuals who do nothing more than supply the funds; the intermediary platform handles the application process, repayment, and debt collection. This new model created hope that credit would become more accessible to underserved communities and that investors could reap greater rewards while still serving socially conscious goals. The dream of greater inclusivity was rooted in the expectation that credit risk assessment would be fairer, leading to more reasonable loan terms for applicants. Unfortunately, this dream never materialized.
NEW MODELS EMERGE
The first P2P platforms in the US, Prosper and Lending Club, emerged in 2006. Both companies promoted new models for evaluating loan applicants that were more comprehensive than those used by banks. This became a key competitive advantage after the 2008 financial crisis, as traditional banks imposed more stringent lending standards on both individuals and small businesses. The platforms’ risk assessment models rely on a variety of alternative data sources; many incorporate so-called “soft” information like social media activity, the age of the email address used to open the account, and even the amount of time spent on the platform’s website. Most platforms use this information alongside credit scores when screening applicants, offering access to borrowers whose traditional credit indicators would lead banks to reject them. Expanding access, however, did not prevent discrimination in the lending process. In fact, an early study found that lenders on Prosper’s platform offered substantially higher interest rates to Black borrowers than to their White counterparts. It also found that borrower profiles with Black photographs were approximately 25-35% less likely to receive funding than White profiles with similar objective credit indicators. Due to these findings and general data privacy concerns, many P2P platforms have chosen to anonymize borrower profiles, even omitting their cities of residence. However, the increasing use of artificial intelligence in screening appears to replicate human biases, contributing to a general fear of “algorithmic discrimination” in the marketplace. Because these algorithms are one of P2P platforms’ key selling points, it is unclear how the platforms will assuage public concerns about discrimination going forward.
BANKS GAIN INTEREST
Another key driver of the hope for financial inclusion was the expectation that P2P platforms would circumvent the dominance of institutional lenders and investors. The data show, however, that this was hardly the case. In the United States, P2P platforms are unable to “originate” loans – that is, they cannot make loans directly to borrowers. Instead, a partner bank “originates” the loan and then sells it to the P2P platform for the remainder of its term. Although this arrangement has proven lucrative for banks and expedient for the lending platforms, its legality is unclear. Some of the largest platforms have therefore chosen to formally charter as banks (or to purchase a preexisting one). As a result, the largest platforms have begun offering their own loans and other common banking services, such as savings accounts, credit cards, and even financial consulting. Even on platforms that have remained true to the original P2P model, the share of lenders who are banks or institutional investors has grown substantially. What was originally termed “P2P” lending is now but one part of the larger “Marketplace Lending” (“MPL”) sector. Data from the last decade show that this trend has made lending more selective, much like that of traditional banks, even as institutional lenders continue to use the platforms’ supposedly “fairer” algorithms to assess applicants. Some researchers now see the few remaining individual lenders on P2P platforms as passive investors, while institutional lenders act as gatekeepers who evaluate borrowers and set terms. Users are left to choose between platforms that hide conventional practices behind the guise of innovation and companies that have morphed into actual banks.
GOING FORWARD
Although P2P lending has thus far failed to deliver inclusive, equitable lending, some public policy changes could mitigate the discriminatory aspects of the lending process. One key issue is that the most advanced AI models are nearly impossible to deconstruct and examine, making it difficult to establish that a model’s developers intended to discriminate against applicants. The Consumer Financial Protection Bureau (CFPB) has attempted to fill these gaps in our civil rights laws by issuing guidance to lenders regarding their duties under the Fair Credit Reporting Act. A more reliable approach, however, would be comprehensive legislation regulating the use of AI to ensure that its discriminatory effects do not spread to other sectors of the economy. The unchecked development of AI models has generated bipartisan concern and created an opportunity to adopt legislation that better reflects the current state of technology. Until that occurs, we will continue to rely on a patchwork of laws ill-suited to the era of AI.