Jess Miers on Gonzalez v. Google and the Supreme Court's plans for Section 230
Whether we like it or not, debates about online speech continue to focus on the largest, US-based platforms and their often headline-grabbing attempts to moderate content.
For all the important research and advocacy being done, this doesn't look like it will change any time soon, thanks to two cases currently in front of the US Supreme Court.
I've touched on Gonzalez v. Google and Twitter v. Taamneh before (EiM #193) but the complexity of the cases and my rudimentary knowledge of the US legal system meant I've wanted to speak to an expert for a while. Which is where today's interviewee comes in.
Jess Miers is Legal Advocacy Counsel at the Chamber of Progress, a US tech industry coalition working towards "public policies that will build a fairer, more inclusive country in which all people benefit from technological leaps". She is a former technologist, an expert on US intermediary liability law and an advisor to the Trust & Safety Professional Association, among other things. When the chance to have her on EiM came up, I jumped at it.
For this Q&A, Jess spoke to Iyal Basen, a recent MA graduate in International Relations at NYU and former Threat Analyst working at Google with Vaco, who I'm excited to have writing for EiM over the coming months. A big thanks to Iyal for making the connection and taking the time to talk to Jess.
It's a long interview —Jess and Iyal chatted for over an hour— but a brilliant summary of the Gonzalez v. Google case and its wider implications.
This interview has been lightly edited for clarity.
The first question that I have for you is this: what have been your key takeaways from the Gonzalez v. Google case so far?
It's interesting. I actually got to see the Gonzalez oral arguments in person. It was my first time visiting the Supreme Court, and it was really cool. I walked into the courtroom thinking this is going to be it. Section 230 is going to go up in a ball of fire. And, you know, there's a reason why they took this case.
Some background on that: there is no circuit split on the issue of whether Section 230 protects algorithmic recommendations. And so when the Supreme Court actually agreed to take this case, it was a huge surprise to all of us in the Section 230 community. So we figured, okay, this is the opportunity for the Supreme Court to reinterpret Section 230. That's the attitude I went in with. I left the courtroom really cautiously optimistic.
I don't know if you had a chance to listen to the oral arguments, but the Justices appeared really sympathetic to the legal and technological complexities involved. And it seemed like they were recognizing that their decision could irreparably impact the modern web, which is fantastic. Because, you know, what's at stake is the ability of these websites to effectively moderate and curate online content. To me, the Justices really homed in on: how is this going to impact the creator economy? How is this going to impact the ability of the modern web to operate? Because the modern web uses algorithms. So, as I said, I came out of the oral arguments feeling a little bit better than when I walked in.
Where is it at now? I've been asked by several folks to read the tea leaves and say where it is going. I sort of wonder if the Court is going to dismiss the case as improvidently granted, because there really was no reason for this case to be in front of them in the first place. That, and the petitioners themselves have changed the question so many times. The question that the Court was considering on the day of oral arguments is very different from the question they were originally asked when they granted cert. So we'll see what happens. But I'm hoping that they dismiss it. If they come out with a very short, pithy opinion saying that, of course, Section 230 protects algorithmic recommendations of third-party content, that would also be great. Worst case scenario, they read in some really ambiguous test regarding when Section 230 does and doesn't apply to algorithms.
You mentioned some legal terms in there that I just want to clarify for myself and for EiM readers as well. What does “changing the question” mean?
Just to give some context here on how Supreme Court procedure works: when you file a cert petition with the Supreme Court, you start with a question presented. That is the question that you're asking the Supreme Court to consider. And usually, you're asking a question that arises from a circuit split, which means that different circuits have reached different conclusions, depending on the case and issues at hand. But essentially, you're asking the Court to please decide this question.
The question that the Gonzalez petitioners originally put forward to the Court when they were asking it to grant cert was whether Section 230 only applies to "traditional editorial functions." And that question itself is very broad. There is a lot that the Supreme Court can do with that question. They can take it from a First Amendment angle or they can get into the weeds. For example, Justice Thomas's 2020 statement on the denial of cert in the Malwarebytes case was along these lines: he was looking for a Section 230 case that would let him get into this discussion of traditional editorial functions. So it's a broader question, and it gives the Court a lot of room to have a discussion. And that's the question that the Court granted cert on.
Now, you're not supposed to change that question. That is the question that you're then going to brief as the petitioner, and you're going to lay out your arguments for the Court as to how you think that question should be answered. The appellees, or respondents, will respond to that question as well. So you can understand why this question is not supposed to change.
In this case, the question has changed from this very broad "Does 230 only apply to traditional editorial functions?" to "Does Section 230 apply to recommendations?" And you can see that question has narrowed. We're now in recommendation land. The narrower the question, the less room the Court has to make a decision. And then the question changed again, and the question that was in front of the Court during the actual oral arguments was "Alright, we recognize that Section 230 may apply to some recommendations. Does Section 230 apply when internet services make recommendations via algorithms?" And that question is insanely narrow.
You could kind of tell that the Supreme Court became confused by the question. I mean, the Justices kept saying over and over again, "I'm confused," because the question had become so complicated. The petitioners are essentially asking the Supreme Court to draw this really arbitrary line between when Section 230 applies to recommendations made with algorithms and when it applies to recommendations made without them. So I think that's how that question got so convoluted and so difficult to answer. That was why we saw what we saw in the courtroom from the Justices that day.
Viewpoints are about sharing the wisdom of the smartest people working in online safety and content moderation so that you can stay ahead of the curve.
They will always be free to read thanks to the generous support of EiM members, who pay less than $2 a week to ensure insightful Q&As like this one and the weekly newsletter are accessible for everyone.
Join today as a monthly or yearly member and you'll also get regular analysis from me about how content moderation is changing the world — BW
Definitely. And another point of clarification: you mentioned the term “improvidently granted”. Can you explain what that means?
Essentially, what the Justices can come back and do is say: "We didn't mean to grant this, this should not have been in our courtroom." And they can do that based on the fact that the petitioner is not supposed to change the question. So they [the Justices] could say, "Okay, well, the question has become something very different from what we granted cert on". So they could just dismiss it on those grounds and say, "We're not going to opine on this. We're not going to make a decision at all, in this case."
Thank you for helping me and EiM readers understand that. We covered this a bit in the last question, but what can we expect from the proceedings of the Court going forward?
Yeah, like I said, there are kind of three possible outcomes.
The first one is that they dismiss it as improvidently granted, and again, that's the "We shouldn't have granted this in the first place. The question has changed tremendously. There's nothing for us to do here" option. During the oral arguments, I think it was Kavanaugh who said "This doesn't belong in the Supreme Court. If you think that Congress was supposed to exclude algorithms from Section 230, then Congress is the one that needs to amend the law." So that's why I could see them just dismissing this because there's nothing for them to do here.
Another potential outcome could be that they decide in favour of Google/YouTube here, and they could do it in a couple of ways. They could say Section 230 protects algorithmic recommendations of third-party content and Google wins. That would be great: it would be a clean win, we would have no ambiguities, and it'd be an excellent message to send to the lower courts. I think that is the best-case scenario, but it's also very childlike thinking on my part. Hopeful, but unlikely.
But the other thing that they could do is say Google and YouTube win, but only because Section 230 applies in this specific case. That would be entirely problematic if they went down the line of writing a very lengthy, convoluted opinion which tries to say that Section 230 only applies when a service acts "neutrally," or uses neutral tools, or that only some types of algorithmic targeting are protected. That is going to create a massive ambiguity for plaintiffs to seize on in the lower courts and will set the lower court judges up to have to get into those very complex questions. I think that would probably be a bad scenario. It would not be good for Section 230 case law, even if Google and YouTube win in that situation.
The last option, of course, is they could just say Google and YouTube lose and Section 230 doesn't apply to algorithmic targeting or recommendations. That would be the worst outcome of all. It would signal to all of the lower courts that anytime a service uses an algorithm, whether it be to recommend content, curate content, display content, etc., they can't use Section 230. And I think, honestly speaking, that would break the functionality of the modern web today.
So then a follow-up question that I have is this: is this really a question for Congress to decide? You and I have talked in the past about how Congress has really just shirked responsibility on this question for my entire life, essentially, because Section 230 and I are practically the same age. So, if we were to look at either amending Section 230 or creating a completely different law targeting recommendation algorithms, what's the best-case scenario for that kind of legal development? If you could cookie-cutter your perfect law, what would it look like?
That is a fantastic question. So honestly, and this should come as no surprise to you coming from me, I think Section 230 is working as intended for algorithms. The question has always been, is the defendant acting on third-party content? And so in the case of Gonzalez, and this is why I said earlier that it really shouldn't have been granted, this is such an obvious win, in my opinion, and an obvious application of Section 230. Is the harm derived from the underlying alleged ISIS propaganda on YouTube? That's where the harm is derived. YouTube didn't create that content; YouTube displayed that content.
The displaying of third-party content, whether you use an algorithm or not, is what's at issue, and there's nothing in Section 230 that says you can't use an algorithm and still be protected by Section 230. We have plenty of case law and plenty of precedent to support that conclusion. So I think the question itself has become way more complicated than it needs to be. I'd say 230 is working as intended. I wouldn't amend it for algorithmic recommendations.
However, if I can pivot a little bit, we have been seeing a question coming out of Gonzalez with these iterations of generative AI products like ChatGPT: should Section 230 protect generative AI? The reason that came up is that Justice Gorsuch brought it up during the oral arguments. He asked, "Well, what about generative AI? Generative AI creates the content in whole or in part, so that's not third-party content, right?"
I have done podcasts and I have written an article on this. But in that case, what I have been hearing from cybersecurity experts is that maybe Congress needs to amend Section 230 and add in generative AI products to ensure they are protected as well. Or we need separate Section 230-like legislation specifically for generative AI. And the reason for that is to think about what the outcome would be for generative AI products if they don't have any protections here: the outputs are entirely driven by whatever a user asks. OpenAI, which owns ChatGPT, doesn't have any control over that. They don't have any real control over what the AI spits out.
You have all these sort of indie developers creating their own versions of ChatGPT and tools for ChatGPT. If they're made liable for something that a third-party user got the product to say or to output, then we might see these developers become very discouraged from innovating on these products and from offering these types of products. So that's where I would say maybe we need Congress to think about or act when it comes to changing Section 230 or adapting 230 for the next iteration of technology. However, I don't see Congress doing that. They think 230 is a mistake. I doubt Congress would say, alright, let's do another 230, but just for generative AI.
Do you think that the Gonzalez v. Google and Twitter v. Taamneh questions, and the interest and concern surrounding TikTok, will inspire Congress to modernize Section 230? Or is this all kind of political theatre?
Yeah, great question. I started out on the "this is political theatre" side of this, and I've been watching some of the recent hearings, including the recent TikTok hearing and the 230 one. What came up in both of those hearings was Congress noting that, depending on what the Supreme Court does in Gonzalez, that might be the moment to act. Not to mention, again, the Justices even pointed to Congress and said, "Look, if you want to act, this isn't for us. You can act here if you wish."
I think what I'm really struggling with is that, when it comes to the TikTok discussion outside of Gonzalez, there are a lot of components there. I don't think it's political theatre [but] I do think that the discussion is misinformed. The TikTok issues are separate from Section 230 issues and separate from the issues presented in Gonzalez and Twitter v. Taamneh. I think the Twitter v. Taamneh question will be kind of interesting depending on where the Court goes, because that one deals more with the national security angle. But with TikTok, there are a lot of issues, such as what can the Chinese Communist Party (CCP) use TikTok for. Can they use the platform to invade US privacy and to advance their competitive interests ahead of the US? I actually think those are very valid questions.
What worries me is that we need to have comprehensive evidence of a national security threat from TikTok, and I believe the Committee on Foreign Investment in the United States is still doing that analysis with TikTok now. And the reason I say that is because I am worried the direction this is going to go is that the US decides there's a national security threat and they ban a communications platform. We really don't want to set that precedent. Today, it's TikTok, but tomorrow…
We saw this with Trump; he issued an Executive Order to ban not just TikTok but also WeChat. He did it under this sort of thinly veiled label of national security. So when it comes to TikTok separately, it's going to depend on what comes out of the Department of Defense and those security reports.
I still think that even if the conclusion is that TikTok is a legitimate national security threat, we then have to ask what is the least restrictive means of solving that problem, so that we are not just banning an entire communications platform. There are lots of folks that use TikTok and rely on it for legitimate uses. That'll be in tandem with the Gonzalez decision. But, again, more issues will come up with TikTok than will come up in Gonzalez and Taamneh.
In regards to Taamneh, we're talking about content moderation. With TikTok, we're talking about national security and privacy issues, and then how to deal with those issues.
I'm completely with you. I was more thinking from the angle of interest in tech regulation and content moderation, because there hasn't been any appetite for that in a long, long time.
In that regard, Gonzalez, Taamneh, and TikTok could potentially push the needle here. And I only say potentially because, remember, you've still got this bipartisan issue: I don't know that Democrats and Republicans agree on how to solve the TikTok issue, in the same way they don't agree on how to solve the Section 230 debate or the Gonzalez v. Google situation. There's a difference between whether Congress wants to act and whether they're actually going to be able to act, especially with the current makeup.
So, you talked about the dangerous precedent surrounding the RESTRICT Act, the potential legislation to ban TikTok, and how it creates a precedent that we can ban anything based on national security. What would be your biggest worry surrounding that kind of norm from a legal perspective? I ask because we do have a strong precedent for national security vastly changing laws. Look at the Patriot Act, for example, which implemented vastly unconstitutional practices regarding the surveillance of American citizens during peacetime. So, do you think this is continuing down that rabbit hole, or what other worries do you have?
I think you're exactly right about the Patriot Act analogy. That's how I read it as well. This seems like the Patriot Act for the internet. And it's interesting. We go from this premise of TikTok is dangerous, TikTok is dangerous, we need to ban TikTok. Then we get to the RESTRICT Act. And when you actually read the RESTRICT Act, it's like, yes, TikTok is considered, but it's a much broader bill that would, in my opinion, impact VPN usage, for example. It's way broader than the use of TikTok.
And again, I guess my concern around this is the government. Well, national security is important, to the point that you just made. But under the First Amendment, the government can't just declare national security threats and then use that as their answer to curbing the First Amendment. They can't, for example, declare a national security threat if, let's say, CNN was going to put out an interview with Osama bin Laden or an interview with Putin. The government couldn't claim national security and get that interview pulled. We have very strong First Amendment interests in this country. So my concern with the RESTRICT Act is it would create a new precedent: if the government can demonstrate any sort of "national security threat," the government can ban communications platforms.
And I also worry about that precedent because it is not only about banning the communications platform; it is also about not sticking to the scrutiny the First Amendment requires once you agree that there is a national security threat. The next step in the First Amendment analysis is for the government to act only with the least restrictive means. So that could look like, you know, making ByteDance do a divestiture of TikTok, for example. ByteDance has to get rid of TikTok and sell it off to some US company, which would still allow TikTok to exist in the US. It wouldn't be a full-scale ban.
There are a lot of things that the government can do and should do and must do before we get to the all-out ban situation. Here, they're just jumping to the all-out ban. And again, we don't want to set that precedent. So that's what I worry about when it comes to the RESTRICT Act.
There is also a precedent for divestiture in the case with Grindr. Grindr was owned by a Chinese company. And then for it to continue operations here in the United States, it had to be divested and sold to an American company.
Exactly that.
Right now we're under a Democratic president, but say we get, God forbid, Donald Trump in 2024. What it says is that anybody who's in power gets to ban an app when they don't like it. Look at what happened with President Trump when Gen Z used TikTok to troll his rallies by buying all the tickets, and then he had no one in the audience. Trump could get up there and say, "Okay, well, TikTok's banned." He actually made an Executive Order to ban TikTok. That's insane. And he could do it under this pretext of national security. That is what the RESTRICT Act is essentially doing.
What I will say, though, is that unlike President Trump, I think this administration has legitimate national security concerns. It's just that they're using a massive move to mitigate those issues when they could be doing other things, like divestiture, to get the same result.
Do you think there is kind of a double standard, shall we say, between TikTok, which is currently Chinese-owned, and the MAANG companies, which are American-owned, and their practices? Should Congress be looking at regulating those practices, like harvesting and selling data on American citizens, in the same kind of way, regardless of country of origin?
It's an interesting question. It's a really good question. I'm gonna give the lawyer answer; I think it depends. I think the problem here is the intent behind the data and how the data is being used. And so again, if we have evidence that the CCP is using data to spy on Americans in a way that allows China to advance its political interests —let's say, for example, ByteDance is using TikTok to promote Chinese propaganda, which includes misinformation and disinformation, and they're collecting data to get that agenda across, or they're using it to spy on the United States and government employees to get access to national secrets— that is a major problem compared to how tech in the US uses and collects data.
When we talk about these MAANG companies collecting user data, they're typically using it in the interest of, one, making their services more relevant, and two, promoting higher-quality information. That in turn makes the service better, because if a user is seeing information that is relevant to them, they will probably keep using the platform. I think that's why people really like using TikTok. But we have also seen situations like Cambridge Analytica and Facebook, where that data was misused and the government rightfully acted in that regard.
So I'm not sure if I would call it a double standard, because the interest, the intent, the motives behind the way that data is collected are different when it comes to China and the US. But at the same time, I think it's important for the government to continue to keep an eye on the way that these major tech companies are using our data and to act when it makes sense to act.
The point I'm trying to make, in my opinion I should say, is that both the American companies, the MAANGs, and TikTok are "spying" on American citizens, right?
I think that's a loaded word. When I use the word spying, particularly in the Chinese context, I'm using it more from a militaristic point of view. But I mean, hey, collecting information on users to figure out how to better promote and keep them online, that's a form of spying. So, I see that point.
They're both forms of espionage, but for different kinds of purposes, like corporate espionage versus militaristic. So, your point of view, just to make sure I understand correctly, is that the data collection should be scrutinized differently because the intents are different?
I think so. I'm not sure if the way that the MAANG companies collect data poses a national security threat in the same way that potentially the Chinese collection of data and the use of that data poses a national security threat. So I think with anything, it comes down to, instead of just doing a blanket ban, instead of just doing blanket regulation, or blanket enforcement, we need to consider how the data is being used, why the data is being used, and then act from there. And I think Cambridge Analytica is your perfect example. That was an unacceptable use of data by Facebook, and our regulatory agencies acted correctly.
When I look at data collection, and the selling of it to third-party advertisers, I look at that as a vast degradation of user privacy rights and data ownership and data privacy. And that's a practice that worries me.
I think it's worth analyzing those practices, but probably under a different lens than the way we analyze practices in China. We should be analyzing it against existing state laws on the collection of data and data privacy. We should be looking at that from the perspective of whether these companies are violating those state laws. And if they are, we should act. Also, I would say I'm very much in favour of federal privacy legislation as well, one that even touches on the sale of third-party data to advertisers and the collection and use of that data. So that is absolutely an opportunity where Congress could act.
I want to give you an opportunity now to talk about what Chamber of Progress is doing in Utah. And so first, if you could frame what's going on for me and for EiM readers, so we understand what's at stake and what you're planning to do about it.
Thank you so much for giving me the opportunity to rant about this issue. It makes me very angry. You will likely appreciate this, given the discussion we just had about the collection of user data. So in Utah, Governor Cox just signed two social media bills. The first one is SB 152. The second is HB 311. Both of those bills essentially require internet services to perform age verification on their users. And that age verification process, which is not defined yet, has been outsourced to the Utah Department of Commerce, which will come up with rulemaking for how it will work. But essentially, it would require this age verification component, parental consent, and parental access to the social media accounts of anyone below the age of 18. If the user is under the age threshold, they have to get formal consent from a parent to actually sign up for social media and to have an account. And then that parent is also granted access to their child's DMs, for example, or any data or information that has to do with that underage user's account.
As you can imagine, this is a massive privacy intrusion for minor users. We believe in this country [the US] that minors have privacy rights just as much as adults, even though parents do have a right to parent their kids. We very much advocate for minors' rights here. And one thing that worries me about the parental consent requirements, and the access to this sensitive information, is what happens when you have a kid that is growing up in an abusive household, or a kid who identifies with the LGBTQ+ community in Utah, of all places, and they're trying to get access to resources. First, will they even be able to access those resources as somebody who is under the age threshold? And if they do access those resources with parental consent, do they want their parents to be so highly invested in what they're doing, which could put them at grave risk?
So the big issue here with regards to age verification —and we're seeing this age verification requirement come up in a lot of other states as well— is that it would require the tech companies to get this right every time. You're using a strict liability standard here. So if you get it wrong, and you allow a user on your service that's under the age threshold, you're liable. This means that they have to check everyone, not just underage users, but you and I as well. And what that can mean is giving up your government IDs and passports. I've seen some discussions about facial recognition and collecting biometrics. So it's ironic, because, you know, these bills are framed as protecting kids online, but they require these internet companies, these big tech companies, to collect more information to be able to do this age verification in the first place. That's essentially what Utah is trying to do, and other states are trying to do as well.
Our plans right now for Chamber of Progress —and I would frame this carefully, as there are a lot of things in motion right now and we don't want to give any guarantees that we're going to be suing— but we are interested in litigation. And I know of a couple of other organizations that are interested in pursuing litigation as well. A First Amendment challenge would be the obvious one, being that you are not only restricting the rights of minors to access legally acceptable information but also those of adult users who don't want to give over this personal identifying information. So that's the issue in Utah. The current plans are that we are interested in and thinking about litigating, with more details to come.
I'm very excited to see what you do. And as I was listening to you, I was thinking: you could also sue under the Fourth Amendment right to privacy, no?
There are some Fourth Amendment claims there. Another big one would be that both bills are extremely vague. Even if the companies were to comply, we don't know what age verification means because there hasn't been any rulemaking on it yet. And there are other terms throughout the bills that are extremely vague. So compliance would be difficult too.
So, your objection is mainly around the vagueness surrounding what age verification means, but also minors' rights and privacy rights?
Yeah. And again, it's tricky because parents have a right to raise their children. But at the same time, I would actually argue that these bills interfere with that right for parents as well, because it is automatically enforced. This isn't something where parents get to have a discussion. If you're under the age threshold, you are restricted from having an account. There are a lot of sticky issues here. But I think mostly we're talking about the strongest issue, which would be the First Amendment.
Then you talked about the broader issue of age verification and accessing content. In most cases on the internet now, you can access pornography as long as you click "I am over the age of 18". There was an article about this law in Louisiana that was being debated that would require age verification to visit pornographic websites. This, to me, makes sense, because exposure to this content at an early age impacts your psycho-sociological development. In my opinion, there should be something that requires actual age verification to access things that could be harmful.
I agree with you on this. But I think what we need to think about is the consequences of forcing age verification mechanisms, because again, even in Louisiana, this means you now, as an adult, have to identify yourself to the government as watching pornography, which may not be the best outcome.
If there are better ways to do that identification, that would be great. It's not just about kids; we are impacting the use of the internet for all people. And again, especially think of these red states where you have got LGBTQ folks that are trying to access this information in a place that is extremely hostile to their well-being. I wouldn't want to have my information on the record with the government either.
Especially when it comes to states like Texas, where accessing gender-affirming care and reproductive healthcare is illegal right now, and the government is trying to create registries of people who are trying to access them, for lawsuits or incarceration. All those wonderful, very fascist things.
It's horrible. Keep in mind, too, that when they collect this information under the pretext of age verification, there is nothing that stops a state from then subpoenaing Google or Facebook or any of these internet companies that have now collected the data. If I have to use age verification to get access to a reproductive rights website, nothing stops a Texas Attorney General from then subpoenaing that website to give over that information. Now, there are some legal defences that the websites have. But that information is not anonymized. It's now out there; you can subpoena and ask, "Is Jess Miers looking at reproductive health information on this website?" So that's the kind of stuff that keeps me up at night.
Me too. And I'm thinking about the protection of LGBTQ folks online. Those issues are very much global, and they're here at home, like what we covered in the state of Texas, and they also surround other kinds of issues like doxxing or online harassment. There are so many instances of LGBTQ folks who have their data doxxed and posted. The question remains: how do we better push these tech companies to protect LGBTQ folks and reproductive health seekers online?
Verification is not the way. With that, you know, you're now identified to the company in that regard.
You've very much changed my mind on age verification.
But I do agree with you. We need something to protect kids from awful content. We need a way to get there, though, that doesn't force these companies to also collect information on these folks. So I'm at a loss there, but I do agree with the premise.
Yeah, the premise but not the solution.
Yeah. And maybe we'll get there. I've seen, and I'll send you an article in case it's interesting, discussions about blockchain identification, which allows the service to figure out what your age is but anonymizes the rest of your personal information. So the website can't see it and states can't see it. It could be an interesting solution. But I wouldn't want to see regulation around forced age verification. I think COPPA does its job. But if we did have to get there, I would want to see technologies like that versus, you know, facial recognition, which is insane.
Could you go into more about COPPA, for those of us who don't know?
It's the Children's Online Privacy Protection Act. It essentially applies when internet services have actual knowledge —that's important, actual knowledge— that a user is under the age of 13. And there are specific guidelines that the internet service has to follow when it collects the information of those under 13.
Now, actual knowledge is a very difficult standard for plaintiffs or regulatory agencies to meet. The service itself has to know that this specific person, at this specific time, at this specific age, was using [that platform] or viewing that kind of content, or creating an account. And so services get around actual knowledge. It generally comes in the form of a checkbox that says "I am 13 or older." That counts; they don't gain actual knowledge of somebody being underage.
Of course, anybody can check that box. That's sort of what some of the issues are that folks have with COPPA. What age verification laws are requiring is constructive knowledge. And that's a way looser standard. It basically says that if the service at any time can be accessed by somebody who is a minor, then they have knowledge. And as you can imagine, that is why these services would then be forced to not get it wrong. They would have to know the ages of every single one of their users, because in the case that one minor uses their website, they're on the hook for all minors using their website. So that's [why] we have COPPA in place right now, for those 13 and under. Folks are trying to get a stricter version of COPPA through these age verification laws.
As you said, the premise is good. I think preventing minors from accessing harmful information is something that needs to happen. But the solution, or the potential solution, is actually deleterious to people.
And that's why COPPA has that actual knowledge component, because the opposite, constructive knowledge, forces internet companies to collect a lot of data. But then also, you know, we are talking about preventing minors from accessing "harmful information." What is "harmful"? In Texas, harmful information is LGBTQ+ resources and abortion content.
It's extremely subjective. How can you define what is harmful? Because someone would argue that just spouting off hateful things on the internet isn't necessarily harmful. But then you have the whole concept of stochastic terrorism, where if you shout epithets and harmful slurs enough, someone will take action. And, you would know better than me, has that ever been cited in case law?
No. That's a tougher question. Because under the First Amendment, broadly speaking, even those harmful epithets would be protected, which is actually why it's really hard to create laws around "harmful content." The First Amendment is not very forgiving when it comes to carving out certain types of content as not allowed. I think folks try to get around it with sort of hate speech exceptions, and the one exception for speech that incites violence. But because the scrutiny is so high when it comes to the First Amendment, the courts are very unlikely to rule in favour of the government for things that you might think incite violence.
Take Donald Trump, for example; he was literally tweeting during January 6th "go get 'em." That may not amount to an incitement of violence in that situation. So it's just very difficult with the First Amendment to get into these definitions. It would be helpful if we had definitions of what "harmful content" and "hate speech" were, but the more the government tries to define it, the more counterproductive it is, because the more they are foreclosing on speech. And so you get even more scrutiny there, because the government really can't say, well, this kind of speech is harmful and this kind of speech isn't. And again, imagine you've got a Republican government; nothing stops a lawmaker from writing that harmful content in the state of Texas is LGBTQ+ and, you know, transgender, trans-rights-related information.
Do you think there is room for lawmakers, like Congress, to decide what hate speech is, and would that be more helpful?
The courts have tried to get into that. They've tried to have decisions where they draw lines on what hate speech is and isn't. To be honest with you, I'm actually not sure where I stand on whether it would be helpful for the government to clarify hate speech. But right now, it's judge-driven definitions.
Okay, so in terms of understanding how that works: is it up to each judge to define what hate speech is?
So, for example, you could have the State of Texas, or let's say the Fifth Circuit; they could come down with an opinion that says it does not violate the First Amendment for Texas to put out a law that states, you know, xyz type of content is illegal. And we're already kind of seeing that; Texas came out with the recent ban on internet services catering to reproductive health information. It can be state-driven.
But, remember, these kinds of First Amendment discussions will go up to the Supreme Court —as we saw with Reno v. ACLU— and it'll likely be up to the Supreme Court to make those decisions. Right now, in Texas and Florida, we've got two potential Supreme Court cases that have to do with their laws that would ban internet services from de-platforming political candidates. You know, that's a huge First Amendment concern as well. It's going to be left up to the Supreme Court to decide, so that's scary.
I don't think Congress would be able to write a law that says hate speech in the United States is xyz, because even then, say you decide hate speech in the United States includes terrorist content, what does that mean? If the UN puts out videos of horrible atrocities taking place in non-democratic societies, and they're doing it because they want to display the human rights violations, that's very different from ISIS propaganda. So how do you define which one of those is acceptable? Figuring out where to draw lines on speech becomes impossible.
So it requires scrutinizing intent.
Yeah, intent. It also requires the least restrictive means. There's a whole test involved with First Amendment scrutiny, and it's very strict when it comes to regulating speech itself.
Because I've often thought about why there isn't a law that defines what hate speech is and then lets you prosecute people for it.
Yeah...the First Amendment!
The First Amendment is tough, man.
It is. Look at the things that you're asking about. In the EU, for example, they actually have laws. NetzDG is one of them in Germany. They have laws about how you can't have Nazi propaganda. You know, we have lost it. We have an amendment in the Constitution that says, there shall be no elements of slavery and whatnot, but we still allow folks to have Confederate flags. But you know, in the EU, they don't value free speech as much as this country does. And so that's why you see those laws developing there compared to our laws here.
Jess, thank you for joining me and for answering my questions so wonderfully.
Fantastic questions. Thank you for this wonderful interview.