In the previous blog post I showed that Levinas’ ethical approach, with its emphasis on infinite responsibility toward the vulnerable Other, provides a fitting framework for instances in which we take, or should take, responsibility for other individuals. As a result, in these instances a ban on AI-generated recommendations applies.
The situation is different for other stakeholders in the FIMI domain: non-domestic citizens on the supply side of FIMI, domestic citizens tied up in their total roles, and entities like organizations and bots. For different reasons we do not have to or cannot accept responsibility for them. For them, we need to find a different ethical framework and establish whether this framework allows for AI-generated recommendations.
Levinas
In my view, a spectrum of ethical frames is ideally needed rather than a binary system: responsibility exists in degrees, shaped by factors such as relational proximity, capacity to help, causality, and competing duties, which makes a yes-or-no approach insufficient.
Levinas could sit at one end of the spectrum. Beyond Levinas lies self-dissolution. Examples of what can be found beyond are extreme asceticism or martyrdom as a metaphysical erasure of identity, universal dissolution rendering any self-interest impossible, and the vision of survival as a moral failing, advocating total sacrifice. Levinas’ thinking represents one outer pole beyond which no system can function practically. While Levinas maintains a tension (our duty to the Other does not fully erase the self or our self-interest), the outliers dissolve that tension, pushing radical empathy into suicide or withering away.
The other pole
At the other pole we find self-preservation and group loyalty. Taken to its extreme, this becomes absolute self-interest: complete rejection of empathy, hyper-tribalism, and nihilism. Examples of this complex of positions are Ayn Rand’s “virtue of selfishness” taken to its purest form (every individual a sovereign, cooperation only transactional, and empathy a flaw), vulgarized versions of Nietzsche’s thinking, and vulgarized Darwinism. But, as with the outliers at Levinas’ pole, these positions leave no room for a system to function practically. My suggestion for the last practically feasible position at this pole would be the perspective of Thomas Hobbes. To be honest, I looked for a more contemporary thinker. I tried Gad Saad, who is publicly endorsed by Elon Musk, but I found his book The Parasitic Mind lacking in intellectual weight, so I fell back on a classic: Leviathan, originally published in 1651.
A bit on Hobbes
Hobbes writes in Leviathan that humans possess boundless natural passions. These passions drive us to perpetually want things, to strive without pause to obtain the objects of our desire, and to secure our access to these objects now and in the future. Whether it is about “riches, honour, command”, in the end it is about power, “a perpetual and restless desire for power after power”. The resulting competition among us “tends to produce quarrelling, enmity, and war”.
In Hobbes’ perspective we are all made equal, both in body and in mind, because even the weakest could kill the strongest and we all think we are wise. Therefore, we compete as equals sometimes for the same things and thus become distrustful enemies. “Because of this distrust amongst men, the most reasonable way for any man to make himself safe is to strike first, that is, by force or cunning subdue other men”. The result is “a war of every man against every man”. In this war “good” and “bad” do not exist, “in such a condition every man has a right to everything – even to someone else’s body”. The only existing law is “a command or general rule /.../ which forbids a man to do anything that is destructive of his life or takes away his means for preserving his life, and forbids him to omit anything by which he thinks his life can be best preserved”.
The only way to end this state of war is to establish a common power that protects all against all. To make this happen, all need “to confer all their power and strength on one man, or one assembly of men”.
The spectrum
The ethical framework spectrum I thus suggest considering when dealing with FIMI actors looks like this: on the one hand, there is Levinas’ work, stressing our ethical responsibility for the Other; on the other hand, there are Hobbes’ writings, which tell us that, lacking an all-powerful sovereign, we are at war with any and all, and that we must do everything to protect ourselves and may do everything to acquire more for ourselves.
Although the unrestricted egoism that Hobbes ascribes to us as our natural condition does not seem practically functional, his idea of a social contract between individuals and sovereign makes him a suitable counterweight to Levinas. Beyond Hobbes, there would be only pure anarchy based on strength, or tyranny without structure.
Hobbes as a fit
Could Hobbes’ writings serve as an ethical frame for the stakeholders in the FIMI domain that are incompatible with Levinas: non-domestic citizens on the supply side of FIMI, domestic citizens tied up in their total roles, and entities like organizations and bots?
Hobbes’ approach is practically applicable to the other stakeholders (“contextual viability”) under the precondition that we define our relation with them as a zero-sum game, since we want the same power. In that case, our wins are their losses and our losses are their gains. Since our dominant definition of FIMI presupposes hostile and egoistic intent by the non-domestic suppliers of FIMI, Hobbes’ frame seems to fit well. And it would legitimize all of our actions towards others, including harmful ones: in order to win, they may harm us and we may harm them. They try to manipulate our citizens, so we try to manipulate their citizens. And vice versa. Thus, the reciprocity potential is high.
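The zero-sum framing above can be stated precisely: a situation is zero-sum when every gain for one side is an equal loss for the other, so the payoffs in each possible outcome cancel out. A minimal illustrative sketch (the payoff values are hypothetical, not from any FIMI model):

```python
# Hypothetical payoff matrix: (our payoff, their payoff) per outcome.
# The numbers are purely illustrative.
payoffs = {
    "we_win": (1, -1),
    "they_win": (-1, 1),
    "stalemate": (0, 0),
}

def is_zero_sum(matrix):
    """A game is zero-sum when the payoffs in every cell sum to zero."""
    return all(ours + theirs == 0 for ours, theirs in matrix.values())

print(is_zero_sum(payoffs))  # True under this framing
```

The point of the formalism is only this: once the relation is defined as zero-sum, any action that weakens the opponent counts, by definition, as a gain for us.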
There are no ethical restrictions to this “war”, since “good” and “bad” do not exist within this context. It is a dangerous approach to take, though, since if we lose, our survival is at stake. That is why we should try to keep the war cold rather than hot. The safest approach for both sides is to focus on easy targets and chip away at the strength of the opponents, little victory by little victory (“scale of impact”).
The challenge of Hobbes’ approach is linked to our moral intuition and our self-consistency. Is this who we want to be? And do we accept a war of an “us” against a “them”, even if a “them” is part of our society? Do we subscribe to the vision not that we are morally better and therefore fighting a justified war, but that we are fighting for survival by any means possible and without any moral compass?
The only moral high ground we could take, following Hobbes, is that we, as a potential sovereign, try to subdue others for their own good. Following this logic, we would fight others only to force them into a social contract that ends the war of all against all. This self-perspective would mean that we strive to be absolute rulers so that all can live in peace. This perspective still presupposes a care for the other, yet not out of altruism, an ethical calling, or ultimately a responsibility for ourselves, but as a means of self-preservation.
Domestic stakeholders
If we applied the Hobbesian notion of a war of all against all to domestic FIMI enablers, be they citizens functioning as proxies, organisations facilitating FIMI, or bots used to disseminate FIMI, then we would accept being in a state of domestic war with them. If, on the other hand, we chose the Hobbesian high ground, we would see them as egoistic others who need to be fully subordinated in order for all to live in domestic peace.
Hobbes and AI
In the Hobbesian view, AI would be a mere weapon in the war against others. AI would be an enhancement of our cunning, and therefore could provide us with an edge over all others. Its recommendations would not be ethically good or bad, just useful. Within the Hobbesian perspective, no restrictions are needed regarding the use of AI aimed at the other FIMI stakeholders.
The only limitation that would apply for AI, even in a Hobbesian reality, is that AI recommendations may cause a ripple effect for those for whom we do accept a Levinasian responsibility. Thus, we would still need human oversight over AI to assess the potential consequences of its recommendations for those at the FIMI demand side.
Should we apply Hobbes?
Whether we embrace the ethical perspective of Hobbes is a political choice. In the next blog posts I will discuss two mainstream political choices on how to deal with FIMI - a human rights approach and a transactional approach - and I will try to place them on the Levinas-Hobbes scale.

