Editor’s Commentary: I do not censor opinions, even if I believe they’re wrong. Most of this article aligns with our perspectives on freedom from Big Tech tyranny and the need to allow research to flourish without political considerations. Some of the author’s examples and conclusions differ from ours, including their labeling of Covid-19 “vaccine” warnings as “misinformation.”
We, of course, do not support the worldwide vaccination campaign or the draconian mandates being put into place in America. Nevertheless, the article as a whole properly stands up to fascist Facebook. Here’s Rory Mir and Cory Doctorow from EFF…
Facebook recently banned from its platform the accounts of several New York University (NYU) researchers who run Ad Observer, an accountability project that tracks paid disinformation. This has major implications: not just for transparency, but for user autonomy and the fight for interoperable software.
Ad Observer is a free/open source browser extension used to collect Facebook ads for independent scrutiny. Facebook has long opposed the project, but its latest decision to attack Laura Edelson and her team is a powerful new blow to transparency. Worse, Facebook has spun this bullying as defending user privacy. This “privacywashing” is a dangerous practice that muddies the waters about where real privacy threats come from. And to make matters worse, the company has been gilding such excuses with legally indefensible claims about the enforceability of its terms of service.
Taken as a whole, Facebook’s sordid war on Ad Observer and accountability is a perfect illustration of how the company warps the narrative around user rights. Facebook is framing the conflict as one between transparency and privacy, implying that a user’s choice to share information about their own experience on the platform is an unacceptable security risk. This is disingenuous and wrong.
This story is a parable about the need for data autonomy, protection, and transparency—and how Competitive Compatibility (AKA “comcom” or “adversarial interoperability”) should play a role in securing them.
What is Ad Observer?
Facebook’s ad-targeting tools are the heart of its business, yet for users on the platform they are shrouded in secrecy. Facebook collects information on users from a vast and growing array of sources, then categorizes each user with hundreds or thousands of tags based on their perceived interests or lifestyle. The company then sells the ability to use these categories to reach users through micro-targeted ads. User categories can be weirdly specific, cover sensitive interests, and be used in discriminatory ways, yet according to a 2019 Pew survey 74% of users weren’t even aware these categories exist.
To unveil how political ads use this system, ProPublica launched its Political Ad Collector project in 2017. Anyone could participate by installing a browser extension called “Ad Observer,” which copies (or “scrapes”) the ads they see along with information provided under each ad’s “Why am I seeing this ad?” link. The tool then submits this information to researchers behind the project, which as of last year was NYU Engineering’s Cybersecurity for Democracy.
The extension never included any personally identifying information—simply data about how advertisers target users. In aggregate, however, the information shared by thousands of Ad Observer users revealed how advertisers use the platform’s surveillance-based ad targeting tools.
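The allow-list approach described above can be sketched in a few lines. This is a hypothetical illustration, not Ad Observer's actual code or schema: the field names are invented, but the principle matches what the article describes, namely keeping only data about how advertisers target users and dropping anything that could identify the person who saw the ad.

```python
# Hypothetical sketch of client-side filtering before submission.
# Field names are invented for illustration; only targeting metadata
# (what the advertiser did, not who saw the ad) survives the filter.
TARGETING_FIELDS = {"advertiser", "ad_text", "targeting_reasons", "paid_for_by"}

def sanitize_ad_record(raw_record: dict) -> dict:
    """Return a copy of the record containing only targeting metadata.

    Anything not explicitly allow-listed (user IDs, session tokens,
    friend lists, etc.) is dropped rather than merely masked.
    """
    return {k: v for k, v in raw_record.items() if k in TARGETING_FIELDS}

record = {
    "advertiser": "Example PAC",
    "ad_text": "Vote for transparency!",
    "targeting_reasons": ["interested in politics", "age 25-34"],
    "user_id": "1234567890",    # identifying: dropped
    "session_token": "abcdef",  # identifying: dropped
}

print(sanitize_ad_record(record))
```

An allow-list (keep only known-safe fields) is the conservative choice here: a deny-list would silently pass through any new identifying field the platform added later.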
This improved transparency is important to better understand how misinformation spreads online, and Facebook’s own practices for addressing it. While Facebook claims it “do[es]n’t allow misinformation in [its] ads”, it has been hesitant to block false political ads, and it continues to provide tools that enable fringe interests to shape public debate and scam users. For example, two groups were found to be funding the majority of antivaccine ads on the platform in 2019. More recently, the U.S. Surgeon General spoke out on the platform’s role in misinformation during the COVID-19 pandemic—and just this week Facebook stopped a Russian advertising agency from using the platform to spread misinformation about COVID-19 vaccines. Everyone from oil and gas companies to political campaigns has used Facebook to push their own twisted narratives and erode public discourse.
Revealing the secrets behind this surveillance-based ecosystem to public scrutiny is the first step in reclaiming our public discourse. Content moderation at scale is notoriously difficult, and it’s unsurprising that Facebook has failed again and again. But given the right tools, researchers, journalists, and members of the public can monitor ads themselves to shed light on misinformation campaigns. Just in the past year Ad Observer has yielded important insights, including how political campaigns and major corporations buy the right to propagate misinformation on the platform.
Facebook does maintain its own “Ad Library” and research portal. The former has been unreliable and difficult to use, and offers no information about targeting based on user categories; the latter comes swathed in secrecy and requires researchers to allow Facebook to suppress their findings. Facebook’s attacks on the NYU research team speak volumes about the company’s real “privacy” priority: defending the secrecy of its paying customers—the shadowy operators pouring millions into paid disinformation campaigns.
This isn’t the first time Facebook has attempted to crush the Ad Observer project. In January 2019, Facebook made critical changes to the way its website works, temporarily preventing Ad Observer and other tools from gathering data about how ads are targeted. Then, on the eve of the hotly contested 2020 U.S. national elections, Facebook sent a dire legal threat to the NYU researchers, demanding the project cease operation and delete all collected data. Facebook took the position that any data collection through “automated means” (like web scraping) is against the site’s terms of service. But hidden behind the jargon is the simple truth that “scraping” is no different than a user copying and pasting. Automation here is just a matter of convenience, with no unique or additional information being revealed. Any data collected by a browser plugin is already, rightfully, available to the user of the browser. The only potential issue with plugins “scraping” data is if it happens without a user’s consent, which has never been the case with Ad Observer.
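The point that automated "scraping" reveals nothing a user could not copy and paste by hand can be made concrete with a small sketch. The markup below is invented for the example (it is not Facebook's actual page structure); the code simply reads text out of HTML the user's own browser has already rendered for them.

```python
# Hypothetical illustration: "scraping" is automated reading of the same
# HTML the user's browser already rendered. The class name used here is
# invented for the example, not Facebook's real markup.
from html.parser import HTMLParser

class AdTextExtractor(HTMLParser):
    """Collect text inside elements marked class="ad-explanation"."""
    def __init__(self):
        super().__init__()
        self.in_ad = False
        self.snippets = []

    def handle_starttag(self, tag, attrs):
        if ("class", "ad-explanation") in attrs:
            self.in_ad = True

    def handle_endtag(self, tag):
        self.in_ad = False

    def handle_data(self, data):
        if self.in_ad and data.strip():
            self.snippets.append(data.strip())

page = '<div class="ad-explanation">You are seeing this ad because you match a targeting category.</div>'
parser = AdTextExtractor()
parser.feed(page)
print(parser.snippets)  # the same text the user could select and copy manually
```

Nothing here bypasses any access control: the parser only sees a string that was already delivered to, and displayed by, the user's browser, which is the core of the argument that such collection is copy-and-paste at scale.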
Another issue EFF emphasized at the time is that Facebook has a history of dubious legal claims that such violations of service terms are violations of the Computer Fraud and Abuse Act (CFAA). That is, if you copy and paste content from any of the company’s services in an automated way (without its blessing), Facebook thinks you are committing a federal crime. If this outrageous interpretation of the law were to hold, it would have a debilitating impact on the efforts of journalists, researchers, archivists, and everyday users. Fortunately, a recent U.S. Supreme Court decision dealt a blow to this interpretation of the CFAA.
Last time around, Facebook’s attack on Ad Observer generated enough public backlash that it seemed Facebook would do the sensible thing and back down from its fight with the researchers. Last week, however, it turned out that this was not the case.
Facebook’s Bogus Justifications
Facebook’s Product Management Director, Mike Clark, published a blog post defending the company’s decision to ban the NYU researchers from the platform. Clark’s message mirrored the rationale offered back in October by then-Advertising Integrity Chair Rob Leathern (who has since left for Google). These company spokespeople have made misleading claims about the privacy risk that Ad Observer posed, and then used these smears to accuse the NYU team of violating Facebook users’ privacy. The only thing that was being “violated” was Facebook’s secrecy, which allowed it to make claims about fighting paid disinformation without subjecting them to public scrutiny.
Secrecy is not privacy. A secret is something no one else knows. Privacy is when you get to decide who knows information about you. Since Ad Observer users made an informed choice to share the information about the ads Facebook showed them, the project is perfectly compatible with privacy. In fact, the project exemplifies how to do selective data sharing for public interest reasons in a way that respects user consent.
In Clark’s post defending Facebook’s war on accountability, he claimed that the company had no choice but to shut down Ad Observer, thanks to a “consent decree” with the Federal Trade Commission (FTC). This order, imposed after the Cambridge Analytica scandal, requires the company to strictly monitor third-party apps on the platform. This excuse was obviously not true, as a casual reading of the consent decree makes clear. If there was any doubt, it was erased when the FTC’s acting director of the Bureau of Consumer Protection, Sam Levine, published an open letter to Mark Zuckerberg calling this invocation of the consent decree “misleading,” adding that nothing in the FTC’s order bars Facebook from permitting good-faith research. Levine added, “[W]e hope that the company is not invoking privacy – much less the FTC consent order – as a pretext to advance other aims.” This shamed Facebook into a humiliating climbdown in which it admitted that the consent decree did not force it to disable the researchers’ accounts.
Facebook’s anti-Ad Observer spin relies on both overt and implicit tactics of deception. It’s not just false claims about FTC orders—there’s also subtler work, like publishing a blog post about the affair entitled “Research Cannot Be the Justification for Compromising People’s Privacy,” which invoked the infamous Cambridge Analytica scandal of 2018. This seeks to muddy any distinction between the actions of a sleazy for-profit disinformation outfit and those of a scrappy band of academic transparency researchers.
Let’s be clear: Cambridge Analytica is nothing like Ad Observer. Cambridge Analytica did its dirty work by deceiving users, tricking them into using a “personality quiz” app that siphoned away both their personal data and that of their Facebook “friends,” using a feature provided by the Facebook API. This information was packaged and sold to political campaigns as a devastating, AI-powered, Big Data mind-control ray, and saw extensive use in the 2016 U.S. presidential election. Cambridge Analytica gathered this data and attempted to weaponize it by using Facebook’s own developer tools (tools that were already known to leak data), without meaningful user consent and with no public scrutiny. The slimy practices of the Cambridge Analytica firm bear absolutely no resemblance to the efforts of the NYU researchers, who have prioritized consent and transparency in all aspects of their project.
An Innovation-Killing Pretext
Facebook has shown that it can’t be trusted to present the facts about Ad Observer in good faith. The company has conflated Cambridge Analytica’s deceptive tactics with NYU’s public interest research; it’s conflated violating its terms of service with violating federal cybersecurity law; and it’s conflated the privacy of its users with secrecy for its paying advertisers.
Mark Zuckerberg has claimed he supports an “information fiduciary” relationship with users. This is the idea that companies should be obligated to protect the user information they collect. That would be great, but not all fiduciaries are equal. A sound information fiduciary system would safeguard users’ true control over how they share this information in the first place. For Facebook to be a true information fiduciary, it would have to protect users from unnecessary data collection by first parties like Facebook itself. Instead, Facebook says it has a duty to protect user data from the users themselves.
Even some Facebookers are disappointed with their company’s secrecy and anti-accountability measures. According to a New York Times report, there’s a raging internal debate about transparency after Facebook dismantled the team responsible for its content-tracking tool CrowdTangle. According to interviewees, there’s a sizable internal faction at Facebook that sees the value of sharing how the platform operates (warts and all), and a cadre of senior execs who want to bury this information. (Facebook disputes this.) Combine this with Facebook’s attack on public research, and you get a picture of a company that wants to burnish its reputation by hiding its sins, under the guise of privacy, from the billions of people who rely on it, instead of owning those mistakes and making amends for them.
Facebook’s reputation-laundering spills out into its relationship with app developers. The company routinely uses privacy-washing as a pretext to kill external projects, a practice so widespread it’s got a special name: “getting Sherlocked.” Last year EFF weighed in on another case where Facebook abused the CFAA to demand that the “Friendly browser” cease operation. Friendly allows users to control the appearance of Facebook while they use it, and doesn’t collect any user data or make use of Facebook’s API. Nevertheless, the company sent dire legal threats to its developers, which EFF countered in a letter that demolished the company’s legal claims. This pattern played out again recently with the open source Instagram app Barinsta, which received a cease and desist notice from the company.
When developers go against Facebook, the company uses all of its leverage as a platform to respond full tilt. Facebook doesn’t just kill your competing project: they deplatform you, burden you with legal threats, and brick any of your hardware that requires a Facebook login.
What to do
Facebook is facing a wave of public backlash (again!). Several U.S. senators sent Zuckerberg a letter asking him to clarify the company’s actions. Over 200 academics signed a letter in solidarity with Laura Edelson and the other banned researchers. One simple remedy is clearly necessary: Facebook must reinstate all of the accounts of the NYU research team. Management should also listen to the workers at Facebook calling for greater transparency, and furthermore cease all CFAA legal threats against not just researchers, but anyone accessing their own information in an automated way.
This Ad Observer saga provides even more evidence that users cannot trust Facebook to act as an impartial and publicly accountable platform on its own. That’s why we need tools to take that choice out of Facebook’s hands. Ad Observer is a prime example of competitive compatibility—grassroots interoperability without permission. To prevent further misuse of the CFAA to shut down interoperability, courts and legislators must make it clear that anti-hacking laws don’t apply to competitive compatibility. Furthermore, platforms as big as Facebook should be obligated to loosen their grip on user information, and open up automated access to basic, useful data that users and competitors need. Legislation like the ACCESS Act would do just that, which is why we need to make sure it delivers.
We need the ability to alter Facebook to suit our needs, even when Facebook’s management and shareholders try to stand in the way.
The Dangers of Speaking the Truth Diminish If We Work Together
It’s becoming harder and harder for patriots to ignore the deep suppression of truth that’s happening in America today.
In all of my years in journalism, I have never received as many threats or been attacked by big companies like Google and Facebook as I have in 2021. I’d say that ever since we started covering widespread voter fraud, government-endorsed Pandemic Panic Theater, vaccine cover-ups, Critical Race Theory, and the various Neo-Marxist and Satanic agendas at play, I’ve been targeted more in a matter of months than in the entirety of my life prior.
Speaking the truth is getting harder with so much censorship and suppression rampant. Prior to 2020, I was not a “conspiracy theorist” or an “anti-vaxxer,” but if there’s one thing the onslaught of exposed lies has taught us over the last 18 months, it’s that we cannot take what we’re told by the “arbiters of truth” at face value. There’s an agenda behind every message, a narrative driving every story, and a series of gigantic cover-ups designed to keep the masses in the dark.
This is why we’re building a network of news outlets that are willing to go against the narrative and expose the truth. We need help. We’re establishing strong partnerships with like-minded news outlets and courageous journalists. Even as Big Tech suppresses us, the honest messages they’re trying to quash are finding their way to the eyes and ears of patriots across the nation. With the help of new content partners like The Epoch Times and The Liberty Daily, we’re starting to see a real impact.
Our network currently comprises nine sites:
- NOQ Report
- Conservative Playlist
- Truth. Based. Media.
- Freedom First Network
- Based Underground
- Uncanceled News
- American Conservative Movement
- Conservative Playbook
- Our Gold Guy
Some of our content is spread across all of these sites. Other pieces are unique. We write most of what we post, but we also draw from those willing to let us share their quality articles, videos, and podcasts. We collect the best content from fellow conservative sites that give us permission to republish it. We’re not ego-driven; I’d much rather post a properly attributed story written by experts like Dr. Joseph Mercola or Natural News than rewrite it the way so many outlets like to do. We’re not here to take credit. We’re here to spread the truth.
I’ve said much of this before. From time to time I reframe this request for assistance by taking the most relevant message of the day and adjusting the story accordingly. We’ve discussed this network in previous articles. Now, it’s time to talk about help. First and foremost, we need financial assistance detailed below. But we could also use more writers who are willing to volunteer their thoughts for the sake of spreading the message. Those who are interested should contact me directly.
As far as money goes, we’re looking better than we have in the recent past, but we are currently experiencing a gap between revenue and expenses that cannot be overcome by click-ads and MyPillow promos alone (promo code “NOQ,” by the way).
To overcome our revenue gap and keep these sites running, our needs fluctuate between $2,200 and $7,800 per month. May 2021, for example, was amazing, and we almost broke even. In June, revenue was sluggish at best, and we had to make up a big difference out of our own pockets. But we’re not just trying to get out of the red. If and when we start getting enough contributions to expand, we will do just that. Very few people get into journalism to get rich, and we’re definitely not among those who do. Our success is driven by spreading the truth, profitable or not.
The best way you can help us grow and continue to bring proper news and opinions to the people is by donating. We appreciate everything, whether a dollar or $10,000. Anything brings us closer to a point of stability when we can hire writers, editors, and support staff to make the America First message louder. Our Giving Fuel page makes it easy to donate one-time or monthly. Alternatively, you can donate through PayPal or Bitcoin as well. Bitcoin: 3A1ELVhGgrwrypwTJhPwnaTVGmuqyQrMB8
Time is short. As the world spirals towards radical progressivism, the need for truthful journalism has never been greater. But in these times, we need as many conservative media voices as possible. Please help keep NOQ Report and the other sites in the network going. Our promise is this: We will never sell out America. If that means we’re going to struggle for a while or even indefinitely, so be it. Integrity first. Truth first. America first.
Thank you and God Bless,
New News Aggregator — Truth. Based. Media. — “Better than Drudge Report, plus unlike Drudge they love America!”