During his inauguration, President Joe Biden appealed to us, American citizens, repeatedly and emphatically, to defend unity and truth against corrosion from power and profit. Fortunately, the bedrock tensions between unity, truth, power and profit have newly discovered mathematical definitions, so their formerly mysterious interactions can now be quantified, predicted and addressed. So in strictly (deeply) scientific terms, Biden described our core problem exactly right.
Can We Build Social Trust in an Online World?
I applaud and validate President Biden’s distillation of the problem of finding and keeping the truth, and of trusting it together. Human trust is based on high-speed neuromechanical interaction between living creatures. Other kinds of trust not based on that are fake to some degree. Lies created for money and power damage trust most of all.
A Moment of Silence
As Biden showed in his first act in office, the first step toward rebuilding is a moment of silence. Avoiding words, slowing down, taking time, breathing, acknowledging common grievances and recognizing a common purpose are not just human needs, but necessary algorithmic steps as well. Those are essential to setting up our common strategy and gathering the starting data that we need to make things right.
The next step, as Biden also said, is to recognize corrupting forces such as money and power — and I would add recognition as a third. The next step after that, as I propose below, is to counter those three forces explicitly in our quest for public truth: to do the exact opposite of what money, power and careerism do, and to counter and reverse every information-processing step at which money, power and recognition might take hold.
Instead of using one panel of famous, well-funded experts deliberating a few hours in public, employ a dozen groups of anonymous lone geniuses, each group working separately in secret for months on the same common question. Have them release their reports simultaneously in multiple media. That way, the unplanned overlap shows most of what matters and a path to resolving the rest — an idea so crazy it just might work.
Since I’m describing how to restore democracy algorithmically, I might as well provide an example of legislation in the algorithmic language too. To convey data-processing ideas clearly, and thereby to avoid wasting time and money building a system that won’t work, technologists display our proposals using oversimplified examples that software architects like myself call “reference implementations” and which narrative architects like my partner call “tutor texts.”
These examples are not meant to actually work, but to unambiguously show off crucial principles. In the spirit of reference implementations, I present the following legislative proposal, written to get to the truth about one particular subject but easily rewritten to find the truth about other subjects such as global warming or fake news: The Defend the Growing Human Nervous System With Information Sciences Act.
The Defend Act
Over centuries, humankind has defended its children against physical extremes, dangerous chemicals and infectious organisms by resolute, rational application of the laws of nature via technology and medical science. Now is the time to use those same tools to defend our children’s growing nervous systems against the informational damage that presently undermines their trust in themselves, their families and their communities. Therefore, we here apply information science in order to understand how man-made communication helps and hurts the humans whom God made.
The human race has discovered elemental universal laws governing processes from combustion to gravitation and from them created great and terrible technologies, from fire and weapons to electricity grids and thermonuclear reactions. But no laws are more elemental than the laws of data and mathematics, and no technologies more universal and fast-growing than the mathematically grounded technologies of information capture, processing and dissemination. Information science is changing the world we live in and, therefore, changing us as living, breathing human beings. How?
The human race has dealt with challenges from its own technologies before. Slash-and-burn tactics eroded farmland; lead pipes poisoned water; city wells spread cholera; radioactivity caused cancer; refrigerants depleted ozone. And we have dealt with epidemics that propagated in weird and novel ways — both communicable diseases spread by touch, by body fluids, by insects, by behaviors, by drinking water, by food, and debilitating diseases of chemical imbalance, genetic dysregulation, immune collapse and misfolded proteins. Our science has both created and solved monumental problems.
But just as no technology is more powerful than the information sciences, no toxin is more insidious than extractive or exploitative artificial information when deployed against an immature, growing, still-learning human nervous system.
The Defend the Growing Human Nervous System With Information Sciences Act aims first and foremost to understand the depth and texture of the threat to growing human nervous systems, in order to communicate the problem to the public at large (not yet to solve it). This act’s approach is based on five premises about the newly discovered sciences of information.
First of all, there is an urgent global mental-health crisis tightly correlated over decades with consuming unnatural sensory inputs (such as from TV screens) and interacting in unnatural ways (such as using wireless devices). These technologies seem to undermine trust in one’s own senses and in one’s connections to others, with the youngest brains suffering the greatest harm.
Second, computer science understands information flowing in the real world. Numerical simulations faithfully replicate the laws of physics — of combustion, explosions, weather and gravitation — inside computers, thereby confirming we understand how nature works. Autonomous vehicles such as ocean gliders, drones, self-driving cars and walking robots select and process signals from the outside world to make trustworthy models, in order to move through that world. This neutral, technological understanding might illuminate the information flows that mature humans also use to do those same things and which growing humans use to learn how to do them.
Third, the science of epidemiology understands the information flows of medical research. Research has discovered and countered countless dangerous chemical and biological influences through concepts like clinical trials, randomization, viral spread, dose-response curves and false positive/negative risks. These potent yet neutral medical lenses might identify the most damaging aspects of artificial sensory interactions, in preparation for countering them in the same way they have already done for lead, tar, nicotine, sugar, endocrine disruptors and so on. The specific approach will extend the existing understanding of micro-toxins and micro-injuries to include the new micro-deceptions and micro-behavioral manipulations that undermine trust.
Fourth, the mathematics of management and communication understands the information flows of businesses. The economic spreadsheets and prediction models that presently micromanage business and market decisions worldwide can, when provided with these new metrics of human health and damage, calculate two new things. First, the most cost-effective ways to prevent and reduce damage. Second, such spreadsheets can quantify the degree to which well-accepted and legal practices of monetized influence — advertising, branding, lobbying, incentivizing, media campaigns and even threats — potentially make the information they touch untrustworthy and thereby undermine human trust.
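The first of those two calculations — finding the most cost-effective ways to prevent and reduce damage — is ordinary spreadsheet arithmetic. As a minimal sketch, the toy Python below ranks candidate interventions by damage averted per dollar; the intervention names, costs and damage figures are invented purely for illustration, not anything the act specifies:

```python
def rank_interventions(interventions):
    """Rank candidate damage-reduction measures by damage averted per
    dollar spent, exactly as a budgeting spreadsheet column would."""
    return sorted(interventions,
                  key=lambda i: i["damage_averted"] / i["cost"],
                  reverse=True)

# Hypothetical candidates (all numbers are placeholders):
candidates = [
    {"name": "screen-time limits", "cost": 10.0, "damage_averted": 5.0},
    {"name": "ad-free defaults",   "cost": 2.0,  "damage_averted": 3.0},
    {"name": "delay labeling",     "cost": 4.0,  "damage_averted": 2.0},
]
ranked = rank_interventions(candidates)
```

The same one-line ranking, fed instead with metrics of monetized influence, would serve the second calculation: quantifying how much each accepted practice undermines the trustworthiness of the information it touches.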
America has risen to great challenges before. At its inception, even before Alexis de Tocqueville praised the American communitarian can-do spirit, this country gathered its most brilliant thinkers in a Constitutional Convention. In war, it gathered them to invent and create a monster weapon. In peace, it gathered them to land on the Moon. Over time, Americans have understood and made inroads against lead poisoning, ozone destruction, polluted water, smog, acid rain, nicotine and trans fats. Now, we need to assemble our clearest thinkers to combat the deepest damage of all: the damage to how we talk and think.
Finally, we humans are spiritual and soulful beings. Our experiences and affections could never be captured in data or equations, whether of calorie consumption, body temperature, chemical balance or information flow. But just as we use such equations to defend our bodies against hunger, hypothermia or vitamin deficiency, we might also use them to defend against confusion, mistrust and loneliness, without in the process finding our own real lives replaced or eclipsed. In fact, if the human nervous system and soul are indeed damaged when mathematically-synthesized inputs replace real ones, then they will be freed from that unreality and that damage only when we understand which inputs help and hurt us most.
The Defend Act tasks its teams to treat the human nervous system as an information-processing system with the same quantitative, scientific neutrality as medicine already treats us as heat-generating, oxygen-consuming, blood-pumping, self-cleaning systems. Specifically, teams are to examine human informational processing in the same computational terms used for self-driving vehicles that are also self-training and to examine our informational environments, whether man-made or God-made, in the same terms used for the “training data” consumed by such artificial foraging machines.
An informational threat such as the present one must be met in new ways. In particular, the current threat differs from historic ones by undermining communication itself, making unbiased discussion of the problem nearly impossible in public or in subsidized scientific discourse. Thus, the first concern of the Defend Act is to insulate the process of scientific discovery from the institutional, traditional and commercial pressures that might otherwise contaminate its answers. To that end, the act aims to maximize scientific reliability and minimize commercial, traditional and political interference as follows.
The investigation will proceed not by a single dream team of famous, respected and politically-vetted experts but by 10 separate teams of anonymous polymaths, living and working together in undisclosed locations, assembled from international scientists under international auspices; for example, the American Centers for Disease Control and Prevention will collaborate with the World Health Organization.
Each team will be tasked with producing its best version of the long-term scientific truth — that is, the same truth every other team ought also to obtain based on accepted universal principles. Teams pursuing actual scientific coherence thus ought to converge in their answers. Any team tempted to replace the laws of nature with incentivized convenience would then find its results laughably out of step with the common, coherent consensus reported by the other teams.
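This convergence test — unplanned overlap reveals the consensus, and divergence exposes the compromised team — can be sketched in a few lines of Python. The majority threshold, the set-overlap measure and the team names below are illustrative assumptions, not anything the act specifies:

```python
from collections import Counter

def consensus_and_outliers(team_findings, threshold=0.5):
    """Given each independent team's set of findings, return the findings
    a majority of teams agree on, and the teams out of step with them."""
    counts = Counter()
    for findings in team_findings.values():
        counts.update(set(findings))
    n_teams = len(team_findings)
    # A finding is "consensus" if a majority of teams report it independently.
    consensus = {f for f, c in counts.items() if c / n_teams > threshold}
    # A team is an outlier if it shares too little of that consensus.
    outliers = [team for team, findings in team_findings.items()
                if consensus
                and len(consensus & set(findings)) / len(consensus) < threshold]
    return consensus, outliers
```

Because each team works in secret, no team knows in advance which of its findings will land inside the overlap — which is exactly what makes gaming the consensus so difficult.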
Choosing individual team members for intellectual flexibility and independence, rather than for fame or institutional influence, will ensure they can grasp the scope of the problem, articulate it fearlessly and transmit in their results no latent bias toward their home colleagues, institution, technology or discipline.
Each team will contain at least two experts from each of the three information-science fields, each able to approximately understand the technical language of the others and thus collectively to understand all aspects of human informational functionality and dysfunctionality. To ensure the conclusions apply to humans everywhere, at least one-third of each team will consider themselves culturally non-American.
Each team will operate according to the best practices of deliberative decision-making, such as those used by “deliberative democracy”: live nearby, meet in person a few hours a day over months in a quiet place and enjoy access to whatever experts and sources of information they choose to use. Their budget (about $4 million per team) will be sufficient for each to produce its report in one year, through a variety of public-facing communications media: written reports, slide decks, video recordings, private meetings and public speeches. With multiple team members, multiple teams and multiple media, it will be difficult for entrenched powers to downplay inconvenient truths.
Released simultaneously, all public reports will cover four topics with a broad brush:
1. Summarizing the informational distractions and damage one would expect in advance, based only on the mathematical principles of autonomous navigation mentioned above, including not only sensory distractions but also the cognitive load of attending to interruptions and following rules, including rules intended to improve the situation.
2. Summarizing, as meta-studies, the general (and generally true) conclusions of scientifically reputable experimental studies and separately the general (and generally misleading) conclusions of incentivized studies.
3. Providing guideline formulae of damage and therapy, based on straightforward technical metrics of each specific information source such as timing delay, timing uncertainty, statistical pattern, information format, etc., with which to predict the nature, timescale, duration and severity of informational damage or recuperation from it.
4. Providing guidelines for dissemination, discussion and regulatory approaches most likely not to be undermined by pressures toward the status quo.
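A guideline formula of the kind item 3 describes might look like the toy score below, which measures how far an information source's timing departs from face-to-face interaction. The 50 ms baseline, the linear penalties and the choice of metrics are placeholder assumptions of mine, not findings of any study:

```python
def damage_score(delay_ms, jitter_ms, natural_delay_ms=50.0):
    """Toy guideline formula: score an information source's timing
    unnaturalness relative to live interaction. Higher = more unnatural.
    The baseline and weights are placeholders, not research results."""
    # Penalize only delay beyond the natural round-trip baseline.
    delay_penalty = max(0.0, delay_ms - natural_delay_ms) / natural_delay_ms
    # Penalize all timing uncertainty: live interaction has almost none.
    jitter_penalty = jitter_ms / natural_delay_ms
    return delay_penalty + jitter_penalty
```

A real guideline formula would of course be fit to experimental dose-response data and would weigh the other metrics item 3 lists — statistical pattern, information format and so on — but even this sketch shows how a single number could let regulators compare sources as unlike as a video call and a recommendation feed.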
Within two years of this act’s passage, for under $100 million, the world will understand far better the human stakes of artificial input, and the best means for making our children safe from it again.
*[The articles in this column present a set of permanent scientific truths that interlock like jigsaw pieces. They span physics, technology, economics, media, neuroscience, bodies, brains and minds, as quantified by the mathematics of information flow through space and time. Together, they promote the neurosafe agenda: that human interactions with technology harm neither the nervous system’s function nor its interests, as measured by neuromechanical trust.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.