My Path to Disinformation Resistance

Twenty years ago, after the 9/11 attacks revealed a paralysis in US information sharing, I left the private sector to help the Government. I was an expert in AI, information technology, and cognitive psychology, and I had been a DARPA-funded Principal Investigator for years. I had recently served as Chief Technology Officer for Software at Hewlett-Packard. I was alarmed that the FBI, CIA, NSA, and other authorities could not synthesize intelligence data into a timely threat picture. If we were unable to connect the giant dots the 9/11 terrorists had left behind, how could anybody reasonably feel safe? Thus began my focus on a simple question: How should a society share information?

For years I focused on ways to reduce the glut of information flooding people working in defense. I spent ten years as a full Professor at the Naval Postgraduate School in Monterey, CA. My classes focused on the strategic opportunities presented by advances in computing and AI, and on how the US government should exploit them. Meanwhile, policy makers were redesigning major organizations and their rules for information sharing.

By 2011, however, it had become obvious to me that the greatest threat to US security was no longer coming from foreign terrorist cells. By then we could see that malevolent actors were capable of causing havoc through widespread disinformation campaigns. The playbook for those actors was simple: adopt an identity with a plausible back story, infiltrate social networks through a combination of sharing and fabricating clickbait, and gain influence by becoming a source of sensational misinformation. Disinformation campaigners adopted technologies I know extremely well, including AI chatbots, image manipulation, and computer-leveraged farms of foreign workers with excellent language skills, to run coordinated, sometimes massive disinformation campaigns online. By 2011 the Internet had become a perfect breeding ground for manufactured lies with high social-media appeal. The simultaneous rise of 24-hour news on cable and streaming increased the pressure on media companies to publish click-worthy content quickly. Fact-checking took orders of magnitude more time than rapid publication of titillating material, and the game was quickly lost.

My 2011 book, Truthiness Fever, described my understanding of the problem, the threats, and the best ideas at the time for countering disinformation. Since then, I have observed the continuing deterioration of civil society, and many others now agree with my dire predictions. What have I learned since 2011?

  • Disinformation is a primary tool of political power in the 21st century
  • 24/7 availability of infotainment encourages people to seek out reinforcing data
  • Laws in the US ignore information pollution
  • People who have consumed vast amounts of bogus media are effectively brainwashed
  • Civil society requires honest and nonthreatening communication
  • If people have to pay for lying, they will resist lying

That last observation, I believe, is the key insight of the last few years. In my earlier companies, TruthSeal and TruthMarket, we tested the idea that people wouldn't lie if they had to guarantee the truth of their claims with money. I could find literally no company that would agree to pay a bounty to anyone who could falsify that company's claims. Moreover, when we experimented with crowdfunding, we found:

  • Truth vs. Falsity proved an unsuitable basis for assessing routine business claims, because many claims couldn't reasonably be scientifically assessed
  • Trustworthiness vs. Untrustworthiness proved a practical and implementable basis for social collaborators to attack disinformation

Scientists know that knowledge evolves with time. All hypotheses and theories are considered "conditionally true" only so long as experimental evidence fails to invalidate them. Knowledge progresses mostly through successive refinements to overly general beliefs, illuminated by disconfirming data. There is no finish line after which we have found the truth. As one example, TruthMarket established a bounty for anyone who could show that ordinary use of smartphones was safe. That claim is not sufficiently specific to test, and no one is likely to run an experiment that could convincingly resolve the question.

This is not just metaphysical musing. We really must move society toward truth and away from falsehood, so it's vital that we get clear on what we are asking people to do. Thus, I was happy to sign the Pro-Truth Pledge, because it asks every individual and organization to commit to truth-telling and to avoid lying. That is 100% good, from my perspective, and we want to make truth-telling rewarding while punishing liars. We therefore focus on social discourse and civil interactions. In everyday contexts, we need to decide whom to trust, because we cannot fact-check everything we receive. So for us the key question is: How do we achieve widespread trustworthiness?

Two years ago, I realized that personal responsibility and the risk of losing valued privileges might be sufficient to regulate communications, at least among those who value the trust of others. Today, purveyors of disinformation experience no negative consequences for their behavior, and social media have created a perfect environment for gestating, evolving, and weaponizing harmful memes.

These observations motivated me to launch a new company, Trusted Origins Corp. (TOC), aimed at reducing the harmful effects of information pollution. The key motive behind TOC is to change the incentives, establishing honesty and civility as prerequisites for membership in a Community of Trust (COT). Members who violate those standards are banished. If people want access to such communities, they won't break the rules.

The Internet is rife with liars who want audiences. Most Internet platforms grant access to anyone with an email address. This leads to troll farms in which a single individual controls hundreds or thousands of accounts, each robotically following the disinformation scripts of its master. Moreover, until very recently none of the most popular websites or apps removed anyone for violating community standards.

In my opinion, the keys to significant further progress include:

  • Make participation in civil society a valued good
  • Block miscreants from access

Each COT adopts trustworthiness protocols appropriate to its mission. Members opt in to the community and agree to its protocols. Members authenticate their identity and subsequently stake their personal reputations when communicating. Every COT blocks bots, trolls, shills, and bullies, the worst sources of information pollution. If COTs become socially important, the perceived personal cost of banishment grows with them. To stop people from lying, you have to make them pay, and the risk of losing an audience should deter those who prize the size and reach of their influence.
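TOC has not published its protocol internals, but the mechanics are easy to picture. The Python sketch below is a minimal, hypothetical model of a COT, with every name and threshold (CommunityOfTrust, Member, BANISH_THRESHOLD, the penalty amounts) invented for illustration: members opt in, stake reputation, lose some of it whenever a violation is confirmed, and are banished once the stake runs out.

```python
from dataclasses import dataclass

# Hypothetical sketch of a Community of Trust (COT) membership protocol.
# All names and thresholds are illustrative assumptions, not TOC's
# actual implementation.

@dataclass
class Member:
    identity: str            # authenticated real-world identity
    reputation: float = 1.0  # staked reputation; rises and falls with conduct
    banished: bool = False

class CommunityOfTrust:
    """Enforces opt-in trustworthiness protocols on its members."""

    BANISH_THRESHOLD = 0.0   # assumed cutoff: no staked reputation left

    def __init__(self, mission: str):
        self.mission = mission
        self.members: dict[str, Member] = {}

    def join(self, identity: str) -> Member:
        # Opting in implies agreement to the community's protocols.
        member = Member(identity)
        self.members[identity] = member
        return member

    def report_violation(self, identity: str, penalty: float) -> None:
        # A confirmed breach of honesty or civility costs staked reputation.
        member = self.members[identity]
        member.reputation -= penalty
        if member.reputation <= self.BANISH_THRESHOLD:
            member.banished = True  # lose access: the valued privilege

    def may_post(self, identity: str) -> bool:
        member = self.members.get(identity)
        return member is not None and not member.banished
```

The essential design choice the sketch captures is that access itself is the valued good: the penalty for repeated violations is not a fine but the loss of membership.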

In 2021 we launched our premier COT, the Disinformation Resistance Community (DRC), at https://resistdisinfo.com. We seek active members who will help find disinformation and help blacklist liars. We provide an automated TrustedSearch™ that anyone can use to get fact-checked answers to queries, and we provide a dozen features active members use to pool their efforts to separate Trusted articles from Distrusted ones. The DRC is the first of what we hope will become many COTs, each with a distinct focus, but all implementing trustworthiness protocols that monitor and enforce honest and civil behavior.
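To make "pooling efforts" concrete, here is one hypothetical way reputation-weighted member judgments could be combined into a Trusted or Distrusted label. The function name, weighting scheme, and margin parameter are illustrative assumptions, not the DRC's published algorithm.

```python
# Hypothetical sketch: pooling member judgments to label an article.
# Each vote pairs a member's staked reputation with their judgment.

def classify_article(votes: list[tuple[float, bool]],
                     margin: float = 0.1) -> str:
    """votes: (member_reputation, judged_trustworthy) pairs."""
    total = sum(rep for rep, _ in votes)
    if total == 0:
        return "Unrated"
    trusted_share = sum(rep for rep, ok in votes if ok) / total
    if trusted_share >= 0.5 + margin:
        return "Trusted"
    if trusted_share <= 0.5 - margin:
        return "Distrusted"
    return "Unrated"  # too close to call; needs more member review
```

For example, classify_article([(2.0, True), (1.0, True), (0.5, False)]) returns "Trusted", because members with most of the staked reputation judged the article trustworthy; a near-even split stays "Unrated" rather than forcing a verdict.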

— Rick Roth

Chairman & CEO, Trusted Origins Corp.

Acting Governor, Disinformation Resistance Community

rick@trustedorigins.com
