An exercise in machine ethics, modeling and simulation, decision analysis, moral reasoning, machine learning, and philosophy of science
Abstract
As the embodiment of rational cause-and-effect, game and decision theory dominated the second half of the 20th century and continue to flourish today. Machine ethics, a nascent field, involves developing machines (either as tangible hardware or as mathematical or logical models) with ethics codified as principles and procedures, allowing them to consider the moral “cause-and-effect” of potential actions.
A model known as the Metric of Evil (the “Metric”) was first conceived by a branch of the United States Army, primarily intended for use by the Army itself. The Metric was inspired by a perceived gap in military course of action analysis: ethical dilemmas arising from the shift from conventional soldier-to-soldier combat to modern asymmetrical warfare. The Metric compared and suggested courses of action by incorporating their tangible, concrete, direct consequences – such as the number of international treaties broken, facilities destroyed, and combatant and civilian casualties each action was expected to cause.
The Army consulted a team of researchers at the University of Alabama in Huntsville, led by the author of this dissertation, to refine the Metric so that it would simulate the “behavior” of ethics and military experts in evaluating courses of action. The Metric’s evaluation was reduced to a single consequence – whether or not civilian casualties were involved. Using this single consequence, the Metric was able to match expert assessments. Thus, results were excellent “on paper”; however, intuition indicated that this did not meaningfully capture how ethical assessments are made.
This research involves the development of an alternate approach – the Relative Ethical Violation (REV) model. This model evaluates potential actions based upon the principles they may violate rather than the tangible consequences that they may cause. In developing the model, the author first conducted an extended review of the literature, which provided insight on ethics and psychological factors, model design and validation, and solicitation of information via survey. Then, he carefully chose a potentially meaningful set of ethical principles as input to the model. Finally, he designed and implemented the REV, the survey process through which expert assessments would be collected, and the process of validating and calibrating both the REV and the Metric so that both approaches could be compared.
Ultimately, this research found that human raters, including experts, disagreed greatly amongst themselves, which complicated the process of calibrating the model. Amid this disagreement, however, several meaningful results emerged. First, the REV outperformed a re-calibrated Metric, the Metric outperformed experts, experts outperformed non-experts, and non-experts outperformed simple random selection of actions. Second, human raters tended to value some principles over others, but no single ethical principle – even “civilian non-maleficence” – completely overshadowed the rest. Third, military experts, humanities experts, and non-experts clearly differed in how they assessed ethical dilemmas and valued certain principles. Collectively, these results indicate that the principles-based approach behind the REV can paint a clearer ethical picture than a checklist of tangible consequences, that such an approach can provide ethical support for decision-making, and that aspects of this research can contribute to machine ethics, decision analysis, and modeling and simulation.
The story
My dissertation centered on the Relative Ethical Violation (REV) model, a decision support tool that captured how stakeholders in military decisions (including commanders, soldiers, and others with a direct or indirect military background) would make decisions in realistic, high-stakes ethical dilemmas. My committee was highly interdisciplinary: it included two faculty members from the College of Science, one from the College of Business, and two from the College of Arts, Humanities, & Social Sciences.
This research reexamined a similar model that a team of researchers (managed by me) had previously developed with a small amount of funding and a core idea from the Systems Simulation and Development Directorate (SSDD, now S3I).
In that previous research, we had developed a complex mathematical formulation, the “Metric of Evil,” to capture military stakeholders’ assessments of the “lesser of two evils” for course of action analysis. To create an objective measure, it compared actions using quantitative, verifiable information – a long list that included the number of civilian casualties, treaties violated, enemy combatant casualties, and historical landmarks destroyed – with weights that could be calibrated based upon stakeholder assessments. It included concepts inspired by behavioral economics, such as diminishing marginal “utility,” to describe the impact of the magnitude of these consequences; these factors, too, were designed to be calibrated from stakeholder data. When calibrated to survey data, however, the one deciding factor was whether or not civilian casualties were involved. The Metric successfully captured patterns in stakeholder assessments, but its many terms were subject to “cross-cancellation,” so its complexity served no tangible purpose.
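The consequence-based idea behind the Metric can be illustrated with a toy sketch: a weighted sum of quantitative consequences, each passed through a concave transform to model diminishing marginal impact. The factor names, the weights, and the `log1p` transform here are all illustrative assumptions, not the published formulation.

```python
import math

# Toy consequence-based score in the spirit of the Metric of Evil.
# Factor names, weights, and the log1p transform are illustrative
# assumptions, not the published model.
def evil_score(consequences, weights):
    """Weighted sum of consequences, each passed through log1p so that
    the marginal impact of each additional casualty or violation
    diminishes as the counts grow."""
    return sum(weights[k] * math.log1p(consequences[k]) for k in consequences)

weights = {"civilian_casualties": 10.0,
           "treaties_violated": 3.0,
           "combatant_casualties": 1.0,
           "landmarks_destroyed": 2.0}

# Two hypothetical courses of action and their projected consequences.
coa_a = {"civilian_casualties": 0, "treaties_violated": 1,
         "combatant_casualties": 40, "landmarks_destroyed": 0}
coa_b = {"civilian_casualties": 12, "treaties_violated": 0,
         "combatant_casualties": 5, "landmarks_destroyed": 1}

# The lower score is the "lesser of two evils."
preferred = min([("A", coa_a), ("B", coa_b)],
                key=lambda pair: evil_score(pair[1], weights))[0]
```

With these illustrative weights, course A (no civilian casualties, despite more combatant casualties) scores lower, mirroring the finding that civilian casualties dominated stakeholder assessments.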
The result of the research, however, was intuitive: actions that resulted in innocent civilian casualties were considered more “evil” than those that did not. The result also led us to reevaluate our approach. We realized that attempting to model “evil” was distracting from the goal of the research, which was to develop a decision support model that captured stakeholders’ ethical perspectives and helped guide decisions from an ethical standpoint. Further, since the Metric relied upon quantitative information yet its consideration reduced to a single quantity, we wanted to investigate from a different perspective: a model based upon the violation of principles rather than quantitative information.
I decided to work toward a much simpler approach: a linear combination of weighted factors. I dedicated significant time and effort, however, to determining what those factors ought to be. After consulting the literature, I developed a method to select appropriate ethical principles. I made several other adjustments to the approach as well, including more rigorous statistics and careful treatment of hypothetical courses of action. I also developed a custom web survey application, using PHP and HTML, to collect this data in an SQL database for later processing. The application was designed to mitigate biases wherever possible by randomizing the order of the scenarios that presented actions to compare, the names of the actors involved in the scenarios (which were represented as colors), and the order of the actions within each scenario. It was also informed by research on experimental design, survey design, and human factors.
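The simpler, principles-based scoring can be sketched as follows: each action receives a degree-of-violation rating per ethical principle, and the score is a weighted linear combination of those ratings. The principle names, weights, and ratings below are illustrative assumptions, not the REV’s calibrated values.

```python
# Toy principles-based score in the spirit of the REV model.
# Principle names, weights, and ratings are illustrative assumptions.
def rev_score(violations, weights):
    """Weighted linear combination of per-principle violation ratings
    (0 = no violation, 1 = maximal violation)."""
    return sum(weights[p] * violations[p] for p in violations)

weights = {"civilian_non_maleficence": 0.45,
           "proportionality": 0.25,
           "honesty": 0.15,
           "fidelity_to_mission": 0.15}

# Degree to which each hypothetical action violates each principle.
action_a = {"civilian_non_maleficence": 0.0, "proportionality": 0.6,
            "honesty": 0.2, "fidelity_to_mission": 0.1}
action_b = {"civilian_non_maleficence": 0.7, "proportionality": 0.1,
            "honesty": 0.0, "fidelity_to_mission": 0.0}

# The action with the lower aggregate violation is ethically preferred.
score_a = rev_score(action_a, weights)
score_b = rev_score(action_b, weights)
```

Unlike a checklist of raw consequence counts, this form makes each principle’s contribution to the final score explicit, which is what the calibration step later exploits.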
When the REV was calibrated, civilian non-maleficence (comparable to civilian casualties) was not the only deciding factor; stakeholders took other principles into consideration in their assessments as well. The model also agreed with stakeholders more than the stakeholders agreed with themselves – that is, through calibration to the data collected from stakeholders, the model captured hidden patterns in their thought processes.
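The calibration idea can be illustrated with a toy example, assuming a linear, principles-based score: search for the principle weights that agree with the most stakeholder pairwise judgments. The two-principle setup, the judgment data, and the coarse grid search are all illustrative assumptions, not the dissertation’s actual calibration procedure.

```python
# Toy calibration: find weights that best reproduce stakeholder
# pairwise judgments. Each judgment pairs the per-principle violation
# ratings of the action a stakeholder deemed worse with those of the
# action deemed better. All data here are illustrative assumptions.
judgments = [
    ((0.9, 0.1), (0.2, 0.8)),  # a judgment weighting principle 0 heavily
    ((0.8, 0.0), (0.1, 0.5)),
    ((0.2, 0.9), (0.6, 0.2)),  # a judgment favoring principle 1
]

def agreement(weights):
    """Count judgments where the 'worse' action gets the higher score."""
    score = lambda v: sum(w * x for w, x in zip(weights, v))
    return sum(score(worse) > score(better) for worse, better in judgments)

# Candidate weight vectors on a coarse simplex grid (w0 + w1 = 1).
candidates = [(i / 10, 1 - i / 10) for i in range(11)]
best = max(candidates, key=agreement)
```

Even in this tiny example, the best-fitting weights satisfy all three judgments simultaneously, including the dissenting one – a simple analogue of the model agreeing with stakeholders more than they agree with themselves.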
Publications and media
- Reed, Gregory S., et al. “A principles-based model of ethical considerations in military decision making.” The Journal of Defense Modeling and Simulation 13.2 (2016): 195-211.
- Reed, Gregory S., and Nicholaos Jones. “Toward modeling and automating ethical decision making: design, implementation, limitations, and responsibilities.” Topoi 32.2 (2013): 237-250.
- Reed, Gregory S., et al. “A Model of ‘Evil’ for Military Course of Action Analysis.” Military Operations Research 18.4 (2013): 61-76.
- “Reed earns first UAH modeling and simulation program doctorate.” The University of Alabama in Huntsville. N.p., 01 Nov. 2013.