Human decision-making about morality can be terrible. We do not need to look far to see that. The Ukraine war is human decision-making at its worst, relentlessly cruel and all too common in human history. The bloody boot prints of Hitler's 1939 invasion of Czechoslovakia are well remembered in Europe and clearly visible in the Russian invasion of Ukraine. In 1936, on the day Nazi troops first marched beyond the limits imposed on their country by remilitarizing the Rhineland, market foreboding took the Dow down 3.3 percent from what were already Great Depression levels. Would computer models have been worse? Could AI have been a moderating force for moral guidance?
We have uniformly struggled to impose our view of the moral order on machine applications. This has been particularly true of AI models. We have fought to keep biases from creeping into computer applications. Taken one at a time, this seems like a worthy effort, but is it? Do we actually make better decisions than machines do?
Here is an example. One suggested rule is that machines should follow our public standards of behavior:
“Robotized public services need to adhere to ethical standards, similar to traditional public services.” 
Is this really wise? I believe casting aside machine logic in favor of human decisions should be thought through very carefully.
War is the ultimate test. The dark nature of humans is everywhere present in Shakespeare’s Hamlet and just as clearly on the front pages of our newspapers today. Nor do humans seem to learn much about morality from war. The Russians despised Nazi behavior in the Battle of Stalingrad, but the experience seems to have constrained them little in the indiscriminate killing and horror in Ukrainian cities. War is the ultimate irrationality. Would a machine have recommended it? It could not have done worse.
Efforts to restrict AI to meet human standards should be understood as limiting a technology whose standards may well be higher than ours.
AI rationality beyond wars
Investment in AI should accelerate to help us manage our own irrationality, and not just in physical wars. AI may be especially helpful in mental health. This area has been hit hard by the Covid experience, more specifically as the consequences of Covid brain infections become clear.
The war with Covid is still with us. And for the millions of people who have recovered from the initial infection, the effects have sadly been both subtle and severe. There is growing evidence that Covid is linked to abnormalities in the brain. The consequences are currently unknown, but the potential impact on human decision-making is a concern.
The field of AI and morals goes both ways. Our focus on constraining AI decisions to conform to our sense of morality is only one side of the equation. We also need to develop the emerging field of AI ethics as a standard against which to measure our own morality. We may not enjoy the results of the comparison.
The new field of AI morality has broad applications, which makes it even more important sociologically and economically. There is a huge pool of potential need for AI to assist human judgement in healthcare, especially for people who were already under medical care for pre-existing conditions.
Covid has been a selective predator. Not only has Covid targeted the old and physically compromised with pre-existing conditions, but it has also been targeting the mentally compromised. The National Institute of Mental Health (NIMH) said:
Emerging data also indicate that people with schizophrenia and other serious mental illnesses have also been hard hit by the pandemic. Individuals with schizophrenia, for instance, are nearly 10 times more likely to contract COVID-19 and are nearly three times more likely to die from it if they do fall ill, compared with individuals who do not have a mental illness. Finally, deaths due to opioid overdose rose substantially in the context of the pandemic. These data remind us that we need to work hard to address long-standing disparities and ensure access to life-saving medical and psychiatric care is available for all Americans.—One Year In: COVID-19 and Mental Health, National Institute of Mental Health
The effect of Covid on mental health may even be linked to the current war in Ukraine. We must wonder what role it has played. One person of particular concern is Russian President Vladimir Putin. Observers have speculated that Putin’s behavior suggests Covid may have had an impact on him.
The investment implications of AI have so far focused on its business and industrial applications, but AI’s potential may be much larger. AI may evolve into a personal assistant to human judgement in matters of war and mental illness. It could well help save millions of lives by creating an entire line of mental health applications, especially under conditions of extreme stress such as war.
The future of AI is even larger than currently appreciated. It will grow to advise, as well as follow, human values in yet-to-be-developed moral personal-assistance applications.
Willems, Jurgen, Lisa Schmidthuber, Dominik Vogel, Falk Ebinger, and Dieter Vanderelst. 2022. “Ethics of Robotized Public Services: The Role of Robot Design and Its Actions.” Government Information Quarterly, 101683. ISSN 0740-624X. https://doi.org/10.1016/j.giq.2022.101683 (https://www.sciencedirect.com/science/article/pii/S0740624X22000168)
Pereira, Luís M., The A. Han, and António B. Lopes. 2022. “Employing AI to Better Understand Our Morals.” Entropy 24, no. 1: 10. https://doi.org/10.3390/e24010010
This article is not intended as investment, tax, or financial advice. Contact a licensed professional for advice concerning any specific situation.