“Age Ain’t Nothing but a Number”: the Difficulties of Age Verification Online

By: Janae Camacho

Over 20 years ago, Xanga was founded as a site for sharing music and book reviews. Looking back, it was one of the earliest social media platforms. In the early days of the consumer internet, the closest thing to a social media experience was logging on to Xanga and creating a blog. Today, many young people cannot recall a time when social media and the internet were not around. The internet and social media platforms have become permanent fixtures in our everyday lives; it is almost impossible to find someone not plugged into one of the many social media apps, such as TikTok, Facebook, and Instagram. While we have all become accustomed to using these apps and granting them access to our information, the effects of social media on children, and social media companies’ access to children’s personal data, have become an increasing concern for parents and legislators alike.

Various groups have issued calls to action for greater protection of children and their data. These calls have grown louder as large platforms seek to expand their user base further, focusing on their most significant demographic, teens, and even looking to include younger children. Legislators like Rep. Kathy Castor (D-FL) have proposed further protections for children above the age of 13. On July 29, 2021, Rep. Castor introduced the Protecting the Information of our Vulnerable Children and Youth Act (Kids PRIVCY Act). The Act would, among other things, extend protections to teens ages 13 through 17, require express consent to collect their personal information, and ban advertisements directed at children.

While the measures proposed in the Kids PRIVCY Act may help limit social media’s adverse effects on children, increasing protections for those over 13 years old is unlikely, by itself, to prevent those harms from occurring. The complexity of enforcing such age verification while maintaining minors’ privacy presents a significant barrier to implementation.

What is the Children’s Online Privacy Protection Act (COPPA)?

In the 1990s, the ease with which companies could gather private data from children led the public to pressure Congress to protect children’s data through legislation. In response, Congress passed the Children’s Online Privacy Protection Act. COPPA was written in 1998, on the cusp of the internet’s surge into widespread use, to prevent online platforms from collecting and using the personal data of children under the age of 13 for tracking and ad-targeting purposes. The rule applies to operators of commercial websites and online services directed at children under 13, as well as to operators of general-audience websites or online services with “actual knowledge” that they are collecting, using, or disclosing personal information from children under 13.

What is the Protecting the Information of our Vulnerable Children and Youth Act (Kids PRIVCY Act)?

The Protecting the Information of our Vulnerable Children and Youth (PRIVCY) Act is a bill that Representative Kathy Castor (D-FL) proposed to update COPPA for an internet far more invasive than that of the 1990s. The proposed legislation seeks to, among other things, (1) extend coverage to teens ages 13 to 17; (2) prohibit the collection of personal information from 13-to-15-year-olds without their express consent; (3) change COPPA’s “actual knowledge” standard to a “constructive knowledge” standard; (4) provide parents a right of action; and (5) ban targeted advertising directed at children. The bill’s drafters incorporated many of the critical elements of the UK’s Age Appropriate Design Code, with the aim that companies will place the best interests of our youth over profits.

The Difficulties of Age Verification

Many companies have found it challenging to comply with COPPA’s age-restriction standard because there is no adequate, verifiable way to identify the age of the person creating an account. Companies have used age-verification methods such as requiring users to input their birth date, which are ultimately ineffective because the answer is taken at face value and never verified. Without actual verification, children under 13 can bypass the age restriction simply by submitting a birth date that makes them at least 16 years old. According to a 2021 study conducted by the Irish research center Lero, children under 13 could bypass many websites that use birthday-based age screening. The study also found that while stricter rules might help protect minors’ privacy, children were able to bypass even the more stringent age-verification techniques.
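
To make the weakness concrete, here is a minimal sketch, in Python, of the kind of self-reported birthday gate the study describes. The function names and structure are illustrative assumptions, not any platform’s actual code; the point is that nothing checks whether the claimed birth date is true.

```python
from datetime import date

MINIMUM_AGE = 13  # COPPA's line: under-13 users get extra protections

def age_from_birth_date(birth_date: date, today: date | None = None) -> int:
    """Compute age in whole years from a user-supplied birth date."""
    today = today or date.today()
    had_birthday = (today.month, today.day) >= (birth_date.month, birth_date.day)
    return today.year - birth_date.year - (0 if had_birthday else 1)

def may_register(claimed_birth_date: date) -> bool:
    # The weakness is here: the claimed date is taken at face value,
    # with no verification of any kind.
    return age_from_birth_date(claimed_birth_date) >= MINIMUM_AGE

# A child born in 2012 is blocked -- until they simply claim 2005 instead.
print(may_register(date(2012, 6, 1)))  # False
print(may_register(date(2005, 6, 1)))  # True: same child, false birth date
```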

Although there are other ways companies can determine whether users are under 13 years old, such as using “classifiers” to help predict age and requiring suspected underage users to submit proof of age, concern remains over whether expanding the age range subject to children’s privacy protections would actually prevent the misuse of minors’ data. Given the difficulty of effectively verifying children’s ages, some tech companies, such as Meta and its subsidiary Instagram, have suggested possible remedies. In a December 8, 2021 committee hearing, the head of Instagram, Adam Mosseri, suggested that parents could provide their child’s age by speaking into the app or by voluntarily entering the child’s age into the phone itself, where it could be accessed directly by every app on the device. Additionally, Meta has confirmed that it is exploring working with other tech companies to share information in “privacy-preserving ways” to help determine a user’s age. These solutions may raise privacy concerns of their own, especially if voiceprints are used to verify the identity of the person vouching for the child’s age. And even if these proposed remedies work, it remains unclear how companies would verify that the person providing the child’s age is actually a parent.
While further regulation of these companies is needed to protect minors from the influences of social media and behavioral advertising, there is no clear way to implement such protections effectively. As the current difficulty in enforcing COPPA’s age limits shows, children can continue to circumvent these measures with or without an age-limit increase. Furthermore, any attempt to verify minors’ ages effectively would require delving further into their personal information. The privacy concerns that arise from those measures, such as third-party sharing of information, personal data security, and the use of biometric data without consent, could outweigh the perceived protection of an increased age requirement. Given the challenges of age verification, legislators now face the task of finding a way to protect children and teens from the harms of social media and the internet. In doing so, they must balance the need for accurate, verifiable age-gating against privacy concerns, minimizing the data that these companies collect from children.

Lawmakers Set Their Sights on Restricting Targeted Advertising

By: Laura Ames

Anyone who spends time online has encountered “surveillance advertising.” You enter something into your search engine, and immediately encounter ads for related products on other sites. Targeted advertising shows individual consumers certain ads based on inferences drawn from their interests, demographics, or other characteristics. The notion itself might not seem particularly harmful, but these data are accrued by tracking users’ activities online. Ad tech companies identify the internet-connected devices that consumers use to search, make purchases, use social media, watch videos, and otherwise interact with the digital world. Such companies then compile these data into user profiles, match the profiles with ads, and place the ads where consumers will view them. In addition to basic privacy concerns, the Consumer Federation of America (CFA) points to the potential for companies to hide personalized pricing from consumers, promote unhealthy products, and perpetuate fraud. Perhaps the largest concern is that the large stores of personal data these companies maintain put consumers at risk of privacy invasion, identity theft, and malicious tracking.
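
As a rough illustration of the track-profile-match pipeline described above, consider this toy sketch in Python. Every field name and matching rule here is a simplifying assumption; real ad tech stacks involve bidding exchanges and far richer profiles, but the basic flow is the same.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Profile:
    device_id: str
    interests: Counter = field(default_factory=Counter)

def record_event(profile: Profile, keyword: str) -> None:
    """Each search, purchase, or video view adds a signal to the profile."""
    profile.interests[keyword] += 1

def match_ads(profile: Profile, ads: dict[str, str]) -> list[str]:
    """Return every ad whose targeting keyword appears in the profile."""
    return [creative for keyword, creative in ads.items()
            if profile.interests[keyword] > 0]

profile = Profile("device-123")
record_event(profile, "running shoes")      # one search on one site...
ads = {"running shoes": "Ad: 20% off trail runners",
       "guitars": "Ad: beginner guitar sale"}
print(match_ads(profile, ads))              # ...and the ad follows the user
```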

In response to these concerns, Democratic lawmakers unveiled the Banning Surveillance Advertising Act (BSSA) in an attempt to restrict the practice, reflecting a general consensus among its sponsors that surveillance advertising is a threat to individual users as well as society at large. The move prompted opponents to argue that the BSSA is overly broad and will harm users, small businesses, and large tech companies alike.

What Does the BSSA Do? 

The BSSA is sponsored by Senator Cory Booker and Representatives Jan Schakowsky and Anna Eshoo. The bill bars digital advertisers from targeting ads to users, and in particular prohibits targeting based on protected class information, such as race, gender, or religion, or on personal data purchased from data brokers. According to Senator Booker, surveillance advertising is “a predatory and invasive practice,” and the resulting hoarding of data not only “abuses privacy, but also drives the spread of misinformation, domestic extremism, racial division, and violence.”

The BSSA is broad, but it does provide several exceptions. Notably, it allows location-based targeting and contextual advertising, which occurs when companies match ads to the content of a particular site. The bill delegates enforcement power to the FTC and state attorneys general. It also allows private citizens to bring civil actions against companies that violate the ban, with monetary penalties of up to $1,000 for negligent violations and up to $5,000 for “reckless, knowing, willful, or intentional” violations. The BSSA has support from many public organizations and a number of professors and academics. Among the tech companies supporting the BSSA is the privacy-focused search engine DuckDuckGo. Its CEO, Gabriel Weinberg, opined that targeted ads are “dangerous to society” and pointed to DuckDuckGo as evidence that “you can run a successful and profitable ad-based business without building profiles on people.”

The BSSA as Part of a Larger Legislative Agenda 

The BSSA is just one of a number of pieces of legislation aiming to restrict the power of large tech companies. Lawmakers have grown increasingly focused on bills regulating social media companies since Facebook whistleblower Frances Haugen testified before Congress in 2021. These bills target a wide variety of topics, including antitrust, privacy, child protection, misinformation, and cryptocurrency regulation. Most of these bills appear to be long shots, however: although the Biden administration supports tech industry reform, many other issues rank higher on its agenda. Despite this hurdle, lawmakers are making a concerted push on these tech bills now because the legislature’s attention will soon turn to the 2022 midterms. Additionally, Democrats, who offer broader support for tech regulation, worry they could lose control of Congress. Senator Amy Klobuchar argued that once fall comes, “it will be very difficult to get things done because everything is about the election.”

Tech and Marketing Companies Push Back

In general, tech companies argue that targeted advertising benefits consumers and businesses alike. First, companies argue that the method allows users to see ads directly relevant to their needs or interests. Experts counter that providing these relevant ads requires tech companies to collect and store a great deal of user data, which puts that data at risk of interference by third parties. Companies also argue that the legislation would drastically change their business models. Marketing and global media platform The Drum predicted that the BSSA “could have a massive impact on the ad industry as well as harm small businesses.” The Interactive Advertising Bureau (IAB), which includes over 700 brands, agencies, media firms, and tech companies, issued a statement strongly condemning the BSSA. IAB CEO David Cohen argued that the BSSA would “effectively eliminate internet advertising… jeopardizing an estimated 17 million jobs primarily at small- and medium-sized businesses.” The IAB and others argue that targeted advertising is a cost-effective way to advertise precisely to particular users. However, the CFA points to evidence that contextual advertising, which is allowed under the BSSA, is more cost-effective for advertisers and provides greater revenue for publishers.

Likelihood of the BSSA’s Success

In the past several years, there has been growing bipartisan support for bills addressing the increasing power of tech companies. This support would seem to suggest that these pieces of tech legislation have a better chance of advancing than other more controversial legislation. However, even with this broader support, dozens of bills addressing tech industry power have failed recently, leaving America behind a number of other countries in this area. One of the major problems impeding bipartisan progress is that while both parties tend to agree that Congress needs to address the tremendous power that tech companies have, they do not align on the methods the government should use to address the problem. For example, Democrats have called for measures that would compel companies to remove misinformation and other harmful content while Republicans are largely concerned with laws barring companies from censoring or removing content. According to Rebecca Allensworth, a professor at Vanderbilt Law School, the larger issue is that ultimately, “regulation is regulation, so you will have a hard time bringing a lot of Republicans on board for a bill viewed as a heavy-handed aggressive takedown of Big Tech.” Given Congress’ recent track record in moving major pieces of legislation, and powerful opposition from the ad tech industry, the BSSA might be abandoned along with other recent technology legislation.  

What Doesn’t Kill Section 230 Makes it Stronger

By: Marissa Train

Section 230 of the Communications Decency Act, the federal law providing social media platforms with immunity from liability for user-generated content, has recently faced objections from politicians on both sides of the aisle. Both parties’ issues with the law largely stem from the protection it offers under Section 230(c), which gives platforms leeway to maintain their own content moderation policies. Democrats largely view those policies as too permissive, allowing misinformation to run wild, while Republicans often view the same policies as too restrictive, ‘censoring’ conservative speakers and content.

While many federal proposals to change Section 230 have been introduced, only FOSTA-SESTA, an attempt to stop online sex trafficking, became law. Instead, most of the legislative action has been at the state level, particularly in conservative states.

Florida Goes First

In May 2021, Florida Governor Ron DeSantis signed a bill prohibiting social media platforms from suspending political candidates before elections and allowing any user to sue a company over inconsistently applied content moderation. However, when the bill was signed, Eric Goldman, a professor at Santa Clara University School of Law, stated that he “see[s] this bill as purely performative, [that] was never designed to be law but simply to send a message to voters.”

Goldman’s belief that the law would be found unconstitutional was realized in the form of an injunction issued one day before the law came into effect. The Computer and Communications Industry Association (CCIA) and NetChoice, representing Facebook, Youtube, Twitter, and others, had filed suit in Florida … and won. 

In the order issuing a preliminary injunction, Judge Robert Hinkle stated “[t]he legislation now at issue was an effort to rein in social-media providers deemed too large and too liberal. Balancing the exchange of ideas among private speakers is not a legitimate governmental interest.” In short, Judge Hinkle held that the Florida law violates the First Amendment and that it was preempted in large part by Section 230.

Florida appealed the court’s ruling late last year, so now we must wait to see how the Eleventh Circuit rules in the appeal. 

Texas Follows Suit

In March 2021, H.B. 20, also known as the Freedom from Censorship Act, was first introduced in the Texas Legislature. It is widely understood that the bill was a reaction to President Trump’s suspension from every major social media platform after the attack on the U.S. Capitol. Some Texas lawmakers, including Governor Abbott, viewed the suspensions as a direct assault on the sharing of conservative views online. Governor Abbott tweeted as much many times: “Silencing conservative views is un-American, it’s un-Texan, and it’s about to be illegal in Texas.”

The Texas law essentially prohibits social media platforms with more than 50 million active users from banning users based on political views alone. The law also requires these platforms to create complaint systems for users to appeal removal of their content, and allows Texas residents to file suit against the company if they believe they were wrongfully banned. The bill was enacted in September, and was set to go into effect on December 2. 

The CCIA and NetChoice, the same parties that opposed the Florida law, co-filed a suit against the Freedom from Censorship Act just two weeks after it was enacted. In their complaint, the CCIA and NetChoice allege the law would hamper a social media platform’s ability to stop the spread of hate speech and misinformation, which the groups claim is a violation of the companies’ First Amendment rights.

The complaint states that “[a]t a minimum, H.B. 20 would unconstitutionally require platforms like YouTube and Facebook to disseminate, for example, pro-Nazi speech, terrorist propaganda, foreign government disinformation, and medical misinformation.” It further details that “legislators rejected amendments that would explicitly allow platforms to exclude vaccine misinformation, terrorist content, and Holocaust denial.”

A federal district court heard arguments on the law’s constitutionality on November 29 and, on December 1, issued a preliminary injunction preventing enforcement of every provision the plaintiffs challenged. Unlike in the Florida case, the district court did not reach the question of whether Section 230 preempts H.B. 20, choosing instead to enjoin the law entirely on First Amendment grounds.

What’s Next For Section 230?

Section 230 is already facing new challenges from state lawmakers, this time from the left side of the aisle. Democrats in New York state introduced New York S. 7568, a bill that attempts to incentivize platforms like Facebook and YouTube to not amplify certain third-party content. It would make platforms liable for content that leads to ‘imminent lawless action’ or ‘self-harm,’ and for information that is ‘false’ and ‘likely to endanger’ public health when any of that content is promoted by an algorithm. 

While this law, if passed, would likely be found unconstitutional on First Amendment grounds and preempted by Section 230 like the Florida and Texas laws, it signifies Democrats’ entry into the largely performative arena of state content moderation legislation.

It seems that what’s next for Section 230 might just be more of the same: performative gestures by state lawmakers in the absence of any further guidance from Congress about a path forward.

NFTs: Coming Soon to a Patent Portfolio Near You?

By: Hannah Avery

At this point the craze surrounding NFTs is far from breaking news. NFTs (“non-fungible tokens”) have been created for everything from the “Disaster Girl” meme to the world’s first tweet. They have been the subject of numerous articles, publications, and blogs, including this post by the Washington Journal of Law, Technology & Arts’ Associate Editor-in-Chief Joanna Mirsch, discussing video game-related NFTs. Despite NFTs’ widespread popularity, early “NFT craze” trends seemed at odds with established American intellectual property rights, with many works being minted as NFTs without the consent of the original creator. At the very least, ownership of NFTs was widely regarded as independent of ownership of the underlying intellectual property rights. But… what if they weren’t?

While the sale of an NFT by itself does not automatically confer the underlying IP rights, the use of self-executing contracts in conjunction with the sale of an NFT can. This is exactly the type of transaction IBM was betting on when it teamed up with IPwe to create a platform for blockchain-enabled IP transactions. The IBM/IPwe platform transfers patent rights by building a smart contract with standardized terms into the token. The patent owner is able to set the terms of that contract, including what information is public and what is not. With this big bet on patent NFTs by IBM, the launch of IPwe’s secure licensing and selling capabilities, and the first sale of a patent and its related rights as an NFT in April 2021, many forward-looking patent holders may be wondering whether they should convert their patent portfolios to NFTs.
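
To make the idea concrete, here is a minimal conceptual sketch, written in Python rather than an actual smart-contract language, of a token carrying owner-set terms with owner-controlled visibility. The field names and structure are assumptions for illustration only, not IPwe’s actual contract design.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Terms:
    royalty_pct: float   # fee owed on each sale or license
    territory: str

@dataclass
class PatentToken:
    patent_number: str
    owner: str
    terms: Terms
    public_fields: set[str]  # the owner decides what is visible to others

    def public_view(self) -> dict:
        """Expose only the fields the owner has designated as public."""
        data = {"patent_number": self.patent_number,
                "owner": self.owner,
                "royalty_pct": self.terms.royalty_pct,
                "territory": self.terms.territory}
        return {k: v for k, v in data.items() if k in self.public_fields}

# Hypothetical patent number and parties, purely for illustration.
token = PatentToken("US1234567", "Acme Corp", Terms(2.5, "US"),
                    public_fields={"patent_number", "owner"})
print(token.public_view())  # royalty and territory terms stay private
```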

Unsurprisingly, the press release for the IBM-IPwe partnership touts a number of benefits of blockchain-based IP transactions, including increased transparency, reduced transaction costs, and a greatly improved capacity for patent holders to manage, value, and transfer their IP assets. Additionally, blockchain-based patent transactions could help prevent future standard essential patent (SEP) licensing disputes or simplify their resolution. However, there are also a number of potential risks that could dissuade would-be early adopters.

Increased transparency of ownership

Since the introduction of blockchain technology, one of its most praised features has been its unique ability to create an indisputable record of a series of transactions. As applied to patent rights, this feature would allow users to track the ownership of patent NFTs and the transactions associated with patent license NFTs, providing clarity of ownership of the underlying rights. According to IPwe’s Chief IP Officer Cheryl Milone Cowles, “distributed network verification . . . provides the confidence of transacting with a clear current title and history.” While such assurance is undoubtedly appealing to potential investors, it may be too good to be true… at least for now. Given the nature of this technology and the current case law governing patent ownership disputes, situations could arise where the legal approach or remedy would be unclear. Some experts have raised concerns, including: whether the owner of a patent whose NFT was stolen through a ransomware attack could reestablish ownership through the legal system; whether a court would recognize the transfer of a patent via NFT absent a more standard, written assignment; and, if courts do prove willing to recognize such an assignment via an NFT sale, what evidence will be considered sufficient to demonstrate ownership. Such uncertainties could easily lead to unfavorable, or simply unsatisfying, outcomes for purchasers.
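
The chain-of-title property Cowles describes can be illustrated with a minimal hash-chained ledger. The sketch below, in plain Python, is an assumption-laden stand-in for a real distributed ledger (no consensus mechanism, no network), but it shows why tampering with an earlier transfer is detectable in every later record.

```python
import hashlib
import json

def _digest(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_transfer(chain: list[dict], asset: str, seller: str, buyer: str) -> None:
    """Append a transfer whose hash commits to the entire prior history."""
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"asset": asset, "from": seller, "to": buyer, "prev": prev}
    record["hash"] = _digest(record)
    chain.append(record)

def verify_title(chain: list[dict]) -> bool:
    """Any tampering with an earlier transfer breaks every later hash."""
    prev = "genesis"
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev or rec["hash"] != _digest(body):
            return False
        prev = rec["hash"]
    return True

# Hypothetical patent number and parties, for illustration only.
chain: list[dict] = []
append_transfer(chain, "US1234567", "Acme Corp", "Beta LLC")
append_transfer(chain, "US1234567", "Beta LLC", "Gamma Inc")
print(verify_title(chain), "current owner:", chain[-1]["to"])
```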

Cost-reduction

Historically, the costs of obtaining, maintaining, and licensing IP have been high. The promise of a platform that could lower those costs, as IPwe claims its platform does, would therefore be welcome news to many players in the IP space. While the technology behind the platform may be complicated, the theory of the potential cost savings is simple: the smart contracts included in token sales, as discussed above, would replace the current process of IP sales and licensing, which is laden with attorneys’ fees, paperwork, and onerous contract negotiations. Any organization accustomed to shouldering that burden would embrace such savings, but they could be game-changing for small and medium-sized companies that may have previously been reluctant to engage in IP transactions because of prohibitive costs.

Portfolio management

While tokenizing IP assets may interfere with a company’s existing portfolio management strategies, that disruption could also reap rewards for the first movers in this space. For example, the potential for easy resale of an IP asset could increase the value of that asset at the time of the initial sale. SEP holders could likewise use smart contracts to avoid expensive litigation while capturing licensing fees with each sale or resale, as sketched below. The possibilities are endless.
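
As a hedged illustration of that last idea, the sketch below models a token that automatically routes a fee to the SEP holder on every transfer. The 2.5% rate, names, and fields are invented for the example; a real platform would implement this logic on-chain.

```python
from dataclasses import dataclass

@dataclass
class SepToken:
    patent_number: str
    owner: str
    sep_holder: str
    royalty_pct: float  # fee captured on every sale or resale

def sell(token: SepToken, buyer: str, price: float,
         balances: dict[str, float]) -> None:
    """Split the sale price between seller and SEP holder, then transfer."""
    fee = price * token.royalty_pct / 100
    balances[token.sep_holder] = balances.get(token.sep_holder, 0.0) + fee
    balances[token.owner] = balances.get(token.owner, 0.0) + (price - fee)
    token.owner = buyer

# Hypothetical patent, parties, and prices, for illustration only.
balances: dict[str, float] = {}
token = SepToken("US7654321", "Acme Corp", "Acme Corp", royalty_pct=2.5)
sell(token, "Beta LLC", 100_000, balances)   # initial sale
sell(token, "Gamma Inc", 120_000, balances)  # resale still pays Acme its fee
print(balances["Acme Corp"])                 # 103000.0
```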

* * *

All in all, tokenizing patent assets may reduce the costs associated with patent licensing and streamline portfolio reporting. However, early adopters of this approach will have to navigate uncertain legal areas regarding the ownership of their tokenized IP rights. In our information-based economy, adopters could be risking some of their most valuable assets in their effort to become an industry leader. Is it worth the risk?

Balancing Labor Law and Client Confidentiality in the Social Media Age

By: Kimberly Shely

It is common knowledge that lawyers have a professional duty to make reasonable efforts to ensure their employees abide by the Rules of Professional Conduct (RPCs). This ethical duty includes training employees on how to maintain client confidences. Lawyers need to address proper social media etiquette with their nonlawyer employees to ensure they understand that “confidential client information” cannot be discussed or shared on their personal social media platforms. However, lawyers must balance these ethical obligations with employees’ legal rights under labor law. The National Labor Relations Act (NLRA) protects employees who engage in concerted activity to better their working environment, and the National Labor Relations Board (NLRB) has extended this protection to social media posts. Lawyers can balance protecting client confidences with their employees’ rights under the NLRA.

Lawyer’s Duties Under the ABA Model Rules of Professional Conduct

The applicable RPCs are Rule 5.3 (Responsibilities Regarding Nonlawyer Assistance) and Rule 1.6 (Confidentiality of Information). Rule 5.3(b) explains that “a lawyer having direct supervisory authority over the nonlawyer shall make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligations of the lawyer.” In particular, Comment 2 clarifies that it is the lawyer’s responsibility to instruct and supervise nonlawyer employees, “particularly regarding the obligation not to disclose information relating to representation of the client . . . .” This is consistent with a lawyer’s duties under Rule 1.6(c) to make reasonable efforts to prevent the “unauthorized disclosure of . . . information relating to the representation of a client.” Comment 18 to Rule 1.6 again emphasizes that it is the lawyer’s duty to competently safeguard client information, including information held by those under the lawyer’s direct supervisory authority. The NLRA does not create an exception to this Rule.

The NLRA Applies to Your Employees, Even Without a Union

The NLRA was created to protect employees in private-sector workplaces and allow them to seek better working conditions without risking retaliation. A key feature of the NLRA is Section 7, which provides that “[e]mployees shall have the right . . . to engage in other concerted activities for the purpose of collective bargaining or other mutual aid or protection . . . .” A common misunderstanding of the NLRA is the assumption that it applies only to employees who are already in a union or attempting to join one. In fact, the “concerted activity” protections in Section 7 of the NLRA extend to any employee who meets the statutory definition, regardless of union status.

The statutory definitions of “employee” and “employer” make it clear that the NLRA extends to nonlawyer employees in a private law firm. NLRA Section 2(3) defines “employee” to include essentially any employee of a particular employer, except for agricultural workers, domestic workers, those employed by a parent or spouse, independent contractors, and NLRA-defined “supervisors.” This broad definition of “employee” encompasses the nonlawyer employees described in Comment 2 to Rule 5.3 of the RPCs.

Private law firms, and by extension the lawyers within, would be considered “employers” under the NLRA. The NLRA Section 2(2) definition of “employer” is similarly broad and “includes any person acting as an agent of an employer, directly or indirectly, but shall not include the United States or any wholly owned Government corporation . . . or any labor organization . . . .” Nonlawyer employees and lawyers in private law firms meet the NLRA definitions, and therefore Section 7 of the NLRA applies to private law firms. 

Your Employees’ Social Media Rights Under the NLRA

Lawyers must be aware that Section 8(a)(1) of the NLRA makes it “an unfair labor practice for an employer to interfere with, restrain, or coerce employees in the exercise of rights guaranteed in section 7.” Again, these Section 7 rights extend to employees who are not represented by a union. An employer can potentially commit an Unfair Labor Practice even without a unionized workplace. The “concerted activity” protections in Section 7 give employees “the right to act with coworkers to address work-related issues in many ways.” Some of the most common examples of “concerted activity” are talking with coworkers about wages, hours, and working conditions in an effort to improve them. 

The NLRB established in Hispanics United of Buffalo that Facebook and other social media platforms are valid forums for Section 7 concerted-activity discussions. In that matter, an employee posted on Facebook about her frustrations with a coworker and asked her other coworkers how they felt about the situation. The employees involved in the Facebook discussion were discharged because of the posts. The Board found that firing these employees violated Section 8(a)(1): because the Facebook posts were “concerted activity,” they were protected under Section 7.

How to Balance Nonlawyer Employees’ NLRA Social Media Rights with the RPCs

While safeguarding client confidentiality is a core and essential component of lawyers’ duties to their clients, an outright ban on any social media discussion of the workplace will run afoul of the NLRA. In its Boeing Company decision, the NLRB established a two-pronged test to determine whether a facially neutral workplace rule interferes with Section 7 rights and thus violates Section 8(a)(1). The Board weighs two factors: “(i) the nature and extent of the potential impact on NLRA rights, and (ii) legitimate justifications associated with the rule.”

The NLRB issued an Advice Memorandum addressing an online disparagement rule maintained by Stange Law Firm. Although the disparagement rule violated Section 8(a)(1), the memo provides guidance on an employer’s use of a “savings clause.”

“For a clause to cure a workplace rule that otherwise has an unlawful impact on Section 7 rights, the Board has said that the clause must do more than generally refer to the Act or Section 7 rights. An effective savings clause should address ‘the broad panoply of rights protected by Section 7’ as well as be prominent and proximate to the rule that it purports to inform.”

A properly executed “savings clause” should effectively inform nonlawyer employees of their rights under Section 7 and clearly differentiate those rights from the rule prohibiting discussion of confidential client information on social media. Lawyers, after all, have legitimate justifications for such a rule: protecting client confidences.

Lawyers should already be providing robust confidentiality training to their nonlawyer employees. Nonlawyer employees should understand that confidential client information under RPC Rule 1.6(a) includes any information relating to the representation of a client. This confidentiality training should provide guidance on appropriate social media use. While employees may have a right under the NLRA to discuss wages, hours, and working conditions, that right does not permit them to disclose information that would identify a specific client or client matter.

By properly training nonlawyer employees on maintaining client confidences, including on their personal social media accounts, lawyers can fulfill their professional obligations under the Rules of Professional Conduct. Lawyers should provide employees with specific examples of what they can and cannot post, and clearly explain employees’ rights under Section 7 of the NLRA. Such training will then, hopefully, put lawyers in compliance with both the Rules of Professional Conduct and the National Labor Relations Act.