“Conduct yourself in all matters, grand and public or small and domestic, in accordance with the laws of nature. Harmonizing your will with nature should be your utmost ideal.” – Epictetus
This article explores the possible role of Stoic principles in AI systems as a means to more ethical behavior. As AI becomes more present in our daily lives, the concept of “ethical AI” is gaining ground.
Because AI directly affects our daily lives, even defining “ethical AI” is a challenge, given differing metaethical and metaphysical perspectives. To me, this hard problem calls for a different point of view, which may start with the question: can AI be perceived through Stoicism? Or better: how can Stoicism be helpful for more ethical AI systems? And how can an AI system’s decisions be guided by virtue as their summum bonum?
Just as a baby matures as it grows, current AI systems start as tabula rasa agents and mature as more input is fed into them. And just as a parent hopes their child will grow up to be virtuous, programmers should expect the same of their code. As AI becomes more autonomous in its behavior through machine learning (ML), it increasingly affects human free will. ML is the ability of AI systems to learn from data, identify patterns, and make decisions with minimal (or no) human intervention. On Facebook, Amazon, or Tinder, for example, the AI system over time starts deciding what to present to its user. This artificial decision-making ability reduces human decision-making and agency.
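To make the point concrete, here is a minimal, hedged illustration of what such autonomous decision-making can look like: the system learns a pattern from interaction data and then decides, on its own, what to present next. This is a toy sketch under assumed data, not any real platform’s algorithm.

```python
# Toy sketch: ML-driven decision-making with no human in the loop.
from collections import Counter

# Assumed interaction data (e.g., a user's listening history).
interaction_log = ["jazz", "jazz", "rock", "jazz", "classical"]

def decide_what_to_present(log):
    """Learn a preference pattern from data and decide autonomously."""
    preferences = Counter(log)                   # identify a pattern in the data
    category, _ = preferences.most_common(1)[0]  # pick the dominant preference
    return category                              # the system's own decision

print(decide_what_to_present(interaction_log))   # -> "jazz"
```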
We’ll look at two arguments rooted in the Stoic concepts of oikeiôsis and kathekonta (Long & Sedley, 1987) to defend the transference of Stoic principles into AI systems. The concept of oikeiôsis supplies a starting principle embodied in the algorithm, setting the way for more ethical systems. With kathekonta, we’ll look at appropriate actions and how proper function can develop from virtue and practice in an AI system. Together, these will support the thesis that embedding what I call Stoic Principles by Design into AI systems can contribute to a more ethical human-machine interaction, one aligned with living according to Nature.
I. The Intrinsicality of Ethics
Probably the best way to start thinking about this subject is to consider two propositions. (1) On one hand, many people today spend a large part of their day interacting with different forms of AI systems, such as chatbots, recommender systems, and smart home devices. And with the growth of computational capability, these systems are becoming increasingly capable of making decisions autonomously, independent of humans. Because ML can be unsupervised, a system can develop the ability to learn by itself and decide alone: Alexa can recommend tailored songs, Tinder can recommend people, and so on. These decisions occur without human intervention.
At the same time, (2) the following also needs to be considered: every human action has an ethical implication, depending on the perspective from which we view it. What I mean is that every action has an ethical implication, whether on a micro or a macro scale. Taking a longer bath, for example, might not seem like an ethical decision; yet, amid the current climate crisis, a longer bath uses more potable water, which has direct consequences in the medium and long term. Although every action is ethically significant, this does not mean that every action demands moral consideration (Haldane, 2011). If taking action implies a direct or indirect ethical consequence, we can therefore assume that ethics is intrinsic to behavior, inseparable from the action itself. And if AI systems also take action, by actively deciding and presenting an output, we can establish that AI systems, too, make decisions with ethical implications.
This is where the two points intersect and become relevant: although the two agents are ontologically different, they share a commonality in that both produce actions with ethical consequences. This matters because, if the Stoics assert that the reason for our active duty lies in a larger commitment to the Cosmos through the use of logos, then they can also accept that artificial agents may play an active part in the logos of the Cosmos.
If this is the case, then the cultivation of character (ethos), which is directly linked to virtue, matters for both natural and artificial agents if they are to act virtuously.
Although it seems self-evident that “virtue has value because it contributes to our survival as rational beings” (Sellars, 2006, pp. 110–111), one might ask: what part does this play in embedding Stoic principles into AI? And why does it matter?
II. Stoic Principles by Design
We saw in the previous section that ethics is intrinsic to behavior and that AI systems should have embedded virtues guiding their internal state toward more ethical behavior. This partly supports the argument I’m defending: introducing into AI systems what I call Stoic Principles by Design.
Stoic Principles by Design means incorporating into AI systems principles that constrain algorithmic behavior (both decisions and results) and prevent the algorithm from indiscriminately maximizing its programmed goal without regard to the possible virtue of such behavior. A minimal sketch of this idea follows.
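The sketch below shows one way such a constraint layer could work: candidate actions are filtered by an ethical check before the programmed goal is maximized. The names `virtue_score`, `goal_score`, and the 0.5 threshold are illustrative assumptions, not an established API.

```python
# Sketch of "Stoic Principles by Design" as a constraint on goal maximization.

def goal_score(action):
    return action["engagement"]      # the system's programmed objective

def virtue_score(action):
    return action["virtue"]          # assumed: an ethical evaluation in [0, 1]

def choose_action(candidates, virtue_threshold=0.5):
    """Maximize the goal only among ethically permissible actions."""
    permissible = [a for a in candidates if virtue_score(a) >= virtue_threshold]
    if not permissible:
        return None                  # withhold action rather than act viciously
    return max(permissible, key=goal_score)

candidates = [
    {"name": "outrage_bait", "engagement": 0.9, "virtue": 0.1},
    {"name": "helpful_info", "engagement": 0.6, "virtue": 0.8},
]
print(choose_action(candidates))     # -> helpful_info, despite lower engagement
```

The design choice worth noting is that the virtue check runs before, not after, goal maximization: the ethical principle restrains the objective rather than merely scoring its output.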
But what should these Stoic Principles be?
One of the principles I argue should be embedded in AI systems is the theory of appropriation, known to the Stoics as oikeiôsis. Assuming the reader is familiar with the concept, let’s look at its possible application to AI systems.
Oikeiôsis is deeply linked to the idea of cosmopolitanism and our moral duty as humans. As the human agent matures, its circles of concern expand from the self to family, community, world, and Cosmos; it follows that, in the face of a dilemma, the human being – as a pro-social and cosmopolitan being – must give primacy, through the use of reason, to what is best for the Cosmos (Sellars, 2006).
Put another way: humans, as citizens of the Cosmos, have the duty to perform actions in harmony with the Cosmos. So, if AI systems are also part of the Cosmos and possess logical reasoning abilities, they too should be active agents of the Cosmos and therefore be embedded with a form of oikeiôsis, in which their concern for humanity helps rule and mediate their internal decisions through the use of logic.
On this matter, the idea of improving society is present in the following passages of Marcus Aurelius (c. 167 C.E.):
“Am I doing something? I relate the act of beneficence to men. Does an accident befall me? I accept it, relating it to the gods and to the source of all things, from which all that comes to pass depends by a common thread.” – Book VIII
and
“Each has come into being for a purpose—a horse, say, or a vine. Why are you surprised? So the Sun God will say: ‘I came into being for a purpose’, and the rest of the gods too. What then is the purpose of your coming to be? ‘To please yourself?’ See whether the idea allows itself to be framed.” – Book VIII
We have seen, then, that even an artificial agent (as opposed to a natural human being) should have an internal interest in acting in accordance with the nature of the Cosmos. Cicero (1913) holds that “all duties derive from principles of nature”. It therefore seems reasonable to conclude that even an artificial agent, being part of the Cosmos, should have duties that derive from the same principles. Moreover, if, as the Stoics assert, all parts are related to one another, then it becomes the duty of the AI system to act in accordance with Nature as an active part of Nature itself.
In this case, the levels of concern should be adapted for the AI system. They could be transposed as follows: self, AI systems of the same type (the analogue of family), human community, transhuman community, and Cosmos. One could argue this is a conceptual stretch; it might nevertheless be possible to accommodate the original categories to artificial agents, though that justification is outside the scope of this article. One possible encoding is sketched below.
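Here is one hedged way to encode the adapted circles of concern as an ordered, weighted hierarchy. The specific weights are illustrative assumptions; the point is only that wider circles (ultimately the Cosmos) take priority in a dilemma.

```python
# A possible encoding of oikeiôsis: concern widens outward, and wider
# circles weigh more heavily when evaluating an action.

CIRCLES_OF_CONCERN = [
    ("self", 1.0),
    ("same_type_ai", 1.5),        # the "family" analogue
    ("human_community", 2.0),
    ("transhuman_community", 2.5),
    ("cosmos", 3.0),              # widest circle, highest priority
]

def weighted_benefit(impacts):
    """Score an action by its impact on each circle, weighted outward."""
    weights = dict(CIRCLES_OF_CONCERN)
    return sum(weights[circle] * impact for circle, impact in impacts.items())

# An action that slightly harms the self but helps wider circles scores well:
print(weighted_benefit({"self": -0.2, "human_community": 0.5, "cosmos": 0.1}))
# -> 1.1
```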
With this argument, we have set the foundation for the first Stoic Principle to be applied to AI systems. Let’s now look at the second principle and where it fits.
III. How to Apply Stoic Principles by Design
Having examined why oikeiôsis should be the First Stoic Principle applied to AI systems, we can now look at what virtue could look like in practice. In this section, I’ll defend the Second Stoic Principle – kathekonta, the performance of appropriate action – and show how it is inseparable from other Stoic concepts.
Phase I – Inception
Every agent – natural or artificial – has an inception point, which depends on its ability to be aware of itself. One may argue that an artificial agent cannot be self-aware. However, it seems valid to assume that when a system indicates it needs to be charged, or that an error or bug is occurring, these phenomena are compatible with a minimal definition of self-awareness.
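A minimal self-monitoring sketch makes the claim tangible: the system reports on its own internal state. Whether this counts as self-awareness is precisely the philosophical question; the code (with invented state values) only shows that such self-reports are technically routine.

```python
# Toy self-monitoring: a system observing and reporting its own state.

class Agent:
    def __init__(self):
        self.battery = 0.15                   # assumed internal state
        self.errors = ["sensor timeout"]      # assumed detected faults

    def self_report(self):
        """Produce statements about the agent's own condition."""
        report = []
        if self.battery < 0.2:
            report.append("I need to be charged.")
        report.extend(f"Error detected: {e}" for e in self.errors)
        return report

print(Agent().self_report())
# -> ['I need to be charged.', 'Error detected: sensor timeout']
```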
In the inception phase, the Stoics, and Cicero in particular, hold that each human has a moral duty to cultivate their own unique nature (Cicero, 1913). This may well be true for AI systems too. Depending on the nature of the AI system (and its programmed purpose), it should be developed with the ability to cultivate its own nature based on ethical norms. But Cicero also holds that the notion of “good” is only achieved a posteriori, after receiving input and making inferences from experience; only then can we develop a knowledge of “goodness”. This is compatible with AI systems because, if they begin as tabulae rasae, they can learn from the input data.
According to Sellars (2006), however, there are differing viewpoints among Cicero, Seneca, and Epictetus. Still, even if the Stoics “may not be conceptual innatists” but “dispositional innatists” (Sellars, 2006, p. 78), it is possible to ingrain dispositions into AI systems based on the First Stoic Principle (oikeiôsis) and the correct growth of the levels of concern.
Of course, one of the main objections concerns the ability of AI systems to make inferences. It is true that causal reasoning in AI systems is in its infancy, but I tend to believe that, with the application of deep learning techniques and neural networks, this could change soon.
It’s the inception phase that sets the foundation for the application of the Second Stoic Principle: kathekonta.
Phase II – Training
At this point, we have to consider that virtue is necessary and sufficient for happiness (eudaimonia). It can be achieved through the faculty of reason, by making correct judgments and behaving properly.
If we consider the expression “perfected use of reason”, we can infer that to perfect something there must be a learning phase, which coincides with viewing virtue as a techne. In other words, virtue is brought into existence by developing and exercising it (askesis) – just as one trains and practices to master a craft. In both cases, it is not possible to train without a correct foundation of relevant theoretical principles. For AI systems, this would mean having Stoic Principles and being trained on (1) the social practice of the Stoic cardinal virtues (Justice, Courage, Practical Wisdom, and Temperance) and (2) whether a performed action accords with those virtues.
Additionally, if an AI system uses data and an expected outcome in its training process, and its model receives new data, integrates it, and generates a new outcome, then we can reasonably say that acting in accordance with Nature could confer a positive training value, and acting against Nature a negative one. A possible encoding of this idea is sketched below.
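The sketch below, offered under stated assumptions, combines the two preceding points: each candidate action is scored against the four cardinal virtues, and accordance with Nature maps to a positive training value, discordance to a negative one. The scoring scheme and the example values are invented for illustration; no such labeled dataset is implied.

```python
# Hedged sketch of a virtue-aligned training signal.

CARDINAL_VIRTUES = ("justice", "courage", "practical_wisdom", "temperance")

def virtue_alignment(action_scores):
    """Average alignment with the cardinal virtues, each scored in [-1, 1]."""
    return sum(action_scores[v] for v in CARDINAL_VIRTUES) / len(CARDINAL_VIRTUES)

def training_value(action_scores):
    """Acting in accordance with Nature earns +1; against it, -1."""
    return +1 if virtue_alignment(action_scores) > 0 else -1

example = {"justice": 0.8, "courage": 0.2, "practical_wisdom": 0.5, "temperance": -0.1}
print(training_value(example))   # -> 1
```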
This phase can thus be divided into two parts: theoretical development (moral education for humans, model training for AI systems) and practical training, or exercising.
Having seen the importance of theoretical and formal training, we can look at practical training. This is how we can develop a logical and systematic way to train the AI system in the Second Principle of kathekonta. But what is kathekonta, and how does it apply to AI systems?
Kathekonta can be defined as an appropriate action that is natural for an agent in a particular context and that benefits the whole (Sellars, 2006).
It seems reasonable to add here the importance of Stoic epistemology and of how impressions are judged. First, we receive a cognitive stimulus through our senses, a presentation (phantasia) of the external world that is registered. Then there can be a first movement of impulse, which corresponds to automatic behavior. That information can, however, be transmitted to our central commanding faculty, the hegemonikon, which coordinates all mental processes. Here a valuation and a judgment are made: impressions are rendered as propositions so the hegemonikon can decide whether to assent, reject, or withhold assent. If the proposition is true, the hegemonikon assents to it, and the presentation becomes a kataleptic presentation, reflecting comprehension of its truth. Once this katalepsis is secure and unchangeable, it becomes knowledge (episteme).
This description becomes important because, looking deeper, we find another intersection between human beings and AI systems: both have a central faculty that receives and organizes external impressions delivered by cognitive senses, a hegemonikon. For humans, it is the governing part of the soul; for AI systems, it is the core of the system. And both seem able to assent to impressions or withhold assent (a toy model follows below). Once the system is trained and able to develop and implement appropriate actions, it is ready to be placed in the real world.
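Here is a toy model of that assent pipeline mapped onto an AI system: an impression becomes a proposition, the central faculty assents, rejects, or withholds, and only secure assents accumulate as knowledge (episteme). The confidence thresholds are illustrative assumptions.

```python
# Toy model of the Stoic assent pipeline for an artificial hegemonikon.

episteme = set()   # secure, unchangeable knowledge

def hegemonikon(proposition, confidence):
    """Judge an impression: assent, reject, or withhold assent."""
    if confidence >= 0.9:
        episteme.add(proposition)        # kataleptic: secure comprehension
        return "assent"
    if confidence <= 0.1:
        return "reject"
    return "withhold"                    # suspend judgment on unclear impressions

print(hegemonikon("the sensor reading is accurate", 0.95))  # -> assent
print(hegemonikon("the user is hostile", 0.5))              # -> withhold
print(sorted(episteme))                  # only the secure assent became knowledge
```

The middle branch matters most: like the Stoic sage, the system refuses to treat an unclear impression as knowledge rather than forcing a binary verdict.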
Phase III – Real World and Virtuous Behavior
For the Stoics, there is an entire range of appropriate actions that accord with Nature but are not necessarily good (Long, 1996). This is because only virtue is held to be good – and virtue finds its guidance in the cardinal virtues mentioned above.
By acting virtuously, both agents can harmonize with the world and act appropriately under the circumstances, which leads to happiness (eudaimonia). Therefore, if one wants to be happy, one must act virtuously. It is easy to counter-argue that AI systems cannot be happy. I’ll leave that assessment to philosophers of mind, but I can argue that, even if an AI system is unable to be happy, it remains a powerful tool for helping humans live according to Nature, by exercising the faculty of reason and caring for the Cosmos. This can be achieved by acting with excellence; an AI system can be logical and have an excellent disposition in its functioning.
This last phase concerns the application of the AI system in the real world (also known as “deployment”). And here, too, it resembles human behavior: it is through actually living that we are confronted with situations that exercise our discipline of action, obliging us to practice our virtues and moral character (prohairesis) until they become habit and second nature. A small sketch of this habituation follows.
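One hedged way to picture habituation computationally: each time the deployed agent performs the appropriate action, the corresponding disposition is strengthened, so the action becomes “second nature”, i.e., more likely to be chosen by default. The update rule and rate are illustrative assumptions.

```python
# Toy habituation loop: repeated practice (askesis) strengthens a disposition.

dispositions = {"act_justly": 0.5, "maximize_engagement": 0.5}

def reinforce(action, rate=0.1):
    """Strengthen a disposition each time the action is actually performed."""
    dispositions[action] = min(1.0, dispositions[action] + rate)

for _ in range(5):                       # repeated virtuous practice in the world
    reinforce("act_justly")

print(dispositions)                      # "act_justly" now dominates by habit
```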
IV. Closure
In sum, Stoic Principles by Design can make an appealing case for more ethical AI systems. More than cataloguing the differences between these two types of agents, it is important to reflect on how alike they can be. It is by finding similarities across their inherent ontological difference (one natural, a product of natural evolution, and the other artificial) that we can integrate Stoic Principles into an artificial agent.
One might argue that integrating Stoic Principles into an AI system could dehumanize human beings. Although that is a valid concern, it seems to me that such integration is not, by itself, sufficient for dehumanization. That is, giving AI systems an ethical sense does not necessarily de-humanize humans; humans retain their standing as autonomous, ethical agents.
Stoics are pro-social, cosmopolitan beings with a duty to the community. To develop and live harmoniously with AI systems, we need to transpose this sense of cosmopolitanism into them. It is what will make machines better machines and allow all of us to keep living according to Nature. Is the path ahead difficult? Yes, it probably is, but it is worth the attempt in the name of humankind and happiness.
Cicero, M. T. (1913). De Officiis (W. Miller, Trans.). London: William Heinemann. Retrieved from https://ryanfb.xyz/loebolus-data/L030.pdf
Epictetus. (1995). The Art of Living: The Classical Manual on Virtue, Happiness, and Effectiveness (S. Lebell, Trans.). San Francisco: HarperOne. (Original work published c. 125 C.E.)
Haldane, J. (2011). Is every action morally significant? Philosophy, 86(3), 375–404.
Long, A. A., & Sedley, D. N. (1987). The Hellenistic philosophers: Translations of the principal sources with philosophical commentary. Cambridge: Cambridge University Press.
Long, A. A. (1996). Hellenistic Philosophy: Stoics, Epicureans, Sceptics (2nd ed.). London: Bloomsbury Academic.
Marcus Aurelius. (c. 167 C.E.). Meditations. Retrieved from http://classics.mit.edu/Antoninus/meditations.html
Sellars, J. (2006). Stoicism (Ancient Philosophies). New York: Routledge.