Building Wise Systems: Combining Competence, Alignment, and Robustness
Understanding decision-making systems and what that means for governance, corporations, and technology creators.
This is the first piece in a set of work on ‘Reimagining Technology’.
Most of my prior public work focused on misinformation, online platforms, platform/AI governance, and the impacts of AI/ML. This piece may look somewhat tangential, but it is actually deeply connected. Later pieces can be found here.
We will not be able to address our urgent global crises without improving our systems and processes for decision-making and conflict resolution—our decision-systems. Improving these systems is also crucial for ensuring that both state and non-state powers incorporate human values: from governments to corporations to our technological creations.
Both of these challenges relate to what I call the decision-system problem:
How can we ensure that a decision-making system is competent at identifying and evaluating options, aligned across conflicting values, and robust to the complexity of the world?
Decision-systems are omnipresent. Whether a decision-system is composed of people (like a democratic legislature passing a budget once per year) or of code (like a tech company’s recommendation engine making billions of ranking decisions per hour), we want its decisions to be—at minimum—competent, aligned, and robust.
In other words, we need to create systems that are wise.
Building wise systems may involve integrating lenses and tools from disparate domains across human and non-human decision-systems—from deliberative democracy to applied psychology, from conflict mediation to reinforcement learning. The first step toward that integration is seeing these fields as explorations of variants of the same pattern—the decision-system—and building a shared language around the underlying goals these systems are striving for.
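To make that shared language a little more tangible, here is a minimal sketch in Python. Everything in it (the names, the three methods, the mapping of one property to each method) is my own illustrative choice rather than a canonical formalization; the point is only that a legislature, a mediation process, and a recommendation engine can each be read as an implementation of the same small interface.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Option:
    """A candidate choice: a bill, a mediated agreement, a ranking of posts."""
    description: str


class DecisionSystem(Protocol):
    """One hypothetical shared vocabulary spanning human and non-human decision-systems."""

    def identify_options(self, context: dict) -> Sequence[Option]:
        """Competence: explore the space of options, including 'micro-options'
        like what to focus on in the first place."""
        ...

    def evaluate(self, option: Option, context: dict) -> float:
        """Alignment: score an option against the (often conflicting) values
        of those the decision impacts."""
        ...

    def decide(self, context: dict) -> Option:
        """Robustness: choose under time and resource constraints, amid
        gaming, feedback loops, and adversaries."""
        ...
```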
Core Properties of a ‘Wise’ Decision-System
So what does a wise decision-system look like? Below I break down the core properties from the decision-system problem above. As you read, pick a democratic government, a powerful corporation, or a technological creation, and consider how well it does at each property—and the impact on our lives when it does not.
Competence at identifying and evaluating options: A decision-making system must have a mechanism by which its choices are made, covering not only “the decisions” themselves but also many other “micro-options” along the way, including, for example, what the system should focus on (e.g. agenda setting, attention allocation). Competence involves allocating resources well, using the best available information, effectively exploring the space of options, handling uncertainty well, and applying coherent logic.
That said, this “option evaluation” doesn’t require the core decision-making system itself to be amazing at logic or to have deep domain expertise. It just needs sufficient competence to be effectively supported by other systems. When necessary and possible, a decision-making system should be able to request additional information in order to improve its understanding. In Congress, for example, this looks like the Congressional Budget Office (CBO), the Congressional Research Service (CRS), staffer research, and consultation with external experts for information about the impact of bills.
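As a toy illustration of this “sufficient competence plus support” pattern, the sketch below shows a decision step that leans on supporting estimators and commissions deeper analysis only when they disagree. All names and the disagreement heuristic are assumptions of mine, not a model of any real legislative process.

```python
import statistics

def evaluate_with_support(option, estimators, request_more_info,
                          disagreement_threshold=0.2):
    """Toy sketch: the decision-system is not the expert; it only needs
    enough competence to notice when its supporting analyses disagree
    and to commission more work (the CBO/CRS pattern).

    `estimators` are supporting systems that each score the option;
    `request_more_info` stands in for commissioning a deeper analysis.
    """
    scores = [estimate(option) for estimate in estimators]
    # Disagreement between supporting analyses is a crude proxy for uncertainty.
    if len(scores) > 1 and statistics.stdev(scores) > disagreement_threshold:
        scores.append(request_more_info(option))
    return statistics.mean(scores)
```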
Alignment across conflicting values: We want our political and technological systems to treat us like people, with inalienable human rights. But beyond that, we want our systems to care about our individual and very personal values and goals, not those of ‘generic abstract average people’—people who may have very different lives and perspectives.
This is clearly hard: even understanding what we truly want (eliciting values) is difficult. Sometimes there is no good solution, so navigating value tradeoffs and managing the influence of stakeholders is a key part of decision-making processes. The best processes tend to involve moving from shallow values to deeper underlying values—those most likely to be shared rather than in conflict. Many decision-systems primarily use some form of power as a proxy to resolve conflicts. More democratic systems aim to allocate power in a more representative fashion, so that those impacted by a decision have power over it.
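One way to see the difference between power-as-proxy and more representative allocation is a toy contrast like the following. The Stakeholder type, its fields, and the maximin rule are all illustrative assumptions; maximin is one standard fairness rule among several, and this piece does not prescribe a particular one.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stakeholder:
    power: float                       # influence over the decision
    value_of: Callable[[str], float]   # how much this stakeholder values an option

def power_weighted_choice(options, stakeholders):
    """Power as a proxy: the option favored by the powerful wins,
    regardless of who the decision actually impacts."""
    return max(options,
               key=lambda o: sum(s.power * s.value_of(o) for s in stakeholders))

def representative_choice(options, stakeholders):
    """A more representative alternative: prefer options acceptable to
    every impacted stakeholder (a simple maximin rule)."""
    return max(options,
               key=lambda o: min(s.value_of(o) for s in stakeholders))
```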
Robustness to real-world complexity, including adversarial forces: Decision-systems can face challenges such as complexity creep, time constraints, resource constraints, emergent feedback loops, gaming, and adversaries who intentionally want to subvert or destroy the system. In addition, in the human world, factors such as power dynamics, relationship capital, emotions, autonomic reactions, and incentive structures will inevitably play a significant (and potentially valuable) role in decision-making. Even largely non-human decision-systems—like the recommendation engines which decide which Facebook posts you see—interact implicitly and intimately with the messy human world of emotion and power.
Decision-making systems should be as robust as possible to all the factors and threats likely to derail them, both internal and external (while remaining as adaptive as is appropriate to their context). They must also be able to ‘accept’ the messiness of real-world complexity, and the imperfections in practice and action that such messiness entails. Discerning potentially derailing forces and choosing how to mitigate or accept them requires a form of introspection and continuous mindful awareness—and itself involves decisions. (While robustness is inextricably linked to competence and alignment, I have found it useful to explicitly call it out as a complementary set of considerations and capacities when designing systems.)
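For intuition, here is a rough sketch of probing one narrow robustness question: how easily does strategic distortion of inputs sway an outcome? The decide and perturb functions are hypothetical stand-ins supplied by whoever runs the test, and a real audit would model specific adversaries, incentive structures, and feedback loops rather than random trials.

```python
import random

def gaming_stress_test(decide, honest_inputs, perturb, trials=1000, seed=0):
    """Toy robustness probe: how often does adversarially distorted input
    change the decision? `decide` is any decision rule; `perturb` is a
    hypothetical model of gaming (e.g. strategic misreporting of values)."""
    rng = random.Random(seed)
    baseline = decide(honest_inputs)
    flips = sum(decide(perturb(honest_inputs, rng)) != baseline
                for _ in range(trials))
    return flips / trials  # fraction of adversarial trials that swayed the outcome
```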
Within and beyond those core properties, decision-making systems must not be too slow or expensive, and they need either tight scoping or the capacity to evolve.
Moreover, while these core properties are useful for evaluating the decision-system itself, it’s also often extremely important to consider properties of the world outside of the system—particularly the system’s legitimacy and the trust placed in it. Both human decision-system institutions and technological products can be “decommissioned” (or simply made less effective) if they lose legitimacy or trust—regardless of how competent and aligned they actually are. Human decision-systems in particular do not exist in isolation—they are part of the social fabric. Culture and habit, including assumptions, beliefs, rituals, values, and ingrained practices, can significantly impact and be impacted by a decision-system.
Why care?
Creating a positive future for a world of 8+ billion people may require us to go beyond the decision-making tools of the past. I believe we can do so in a way that is also fundamentally democratic—because we already have the building blocks. We need to carefully put them together, combining the bright spots, best ideas, and tools of many domains, while never forgetting the messy rich complexity of the world these systems will be embedded within. Naming and describing the core properties of wise systems—competence, alignment, and robustness—gives us a compass and a direction to aim for. 🧭
Update: This work has continued at Harvard’s Technology and Public Purpose Program, with follow-ups including a complementary compass for social technologies that focuses on what happens outside of the decision-system boundary—and specific approaches to improve key compass components, including Platform Democracy and Bridging-Based Ranking. The most recent pieces can be found here.
Many thanks to Sharon Zhou, Tina White, Joe Edelman, Cathy Wu, and Jason Benn for conversations and thoughts that helped inform this piece. This is also a working document. These ideas and frames are still evolving; please feel free to reach out for feedback, especially if you feel something crucial is missing.
If you use and apply these frames, I would also love to hear about it. Reach out here, or via Twitter.