Robotics and care relationships: an interview with Professor Sandro Spinsanti

As increasingly sophisticated technologies enter the healthcare field, it is necessary not only to answer ethical and social questions but also to understand how care might evolve, and what role Narrative Medicine will play.

We publish an interview with Professor Sandro Spinsanti, who held the chair of Medical Ethics at the Faculty of Medicine of the Catholic University (Rome) and of Bioethics at the University of Florence, and served as Director of the Department of Human Sciences at the Fatebenefratelli Hospital – Isola Tiberina (Rome) and of the International Center for Family Studies (Milan). Spinsanti is the founder and director of the Giano Institute for Medical Humanities and Health Management. He also founded and directed the journal of medical humanities Janus. Medicine: culture, cultures.

Q. In the healthcare context, progress in robotics and Artificial Intelligence opens new scenarios but at the same time poses ethical questions. In your opinion, what are the main ethical and social issues at stake?

SS. Robotics poses problems for society as a whole, not only for healthcare. We know that we will increasingly live alongside these technologies and that they will allow us a better quality of life; this holds not only in healthcare but in social life generally. However, we need to establish some rules, above all for our safety.

The great prophet of robotics, Isaac Asimov, formulated his three well-known laws in 1942 – the first establishing that a robot cannot harm human beings, and the second that it must obey human orders. However, already in Stanley Kubrick's famous film, 2001: A Space Odyssey, we see the dramatic representation of a machine that rebels against an order and consequently must be deactivated. The "death" of the HAL 9000 computer is one of the most heartbreaking scenes in all of Kubrick's filmography. The disturbing element in this scenario is the awareness that Artificial Intelligence goes a step beyond machines: in becoming intelligent, machines acquire ever greater autonomy; they are no longer simple tools but become almost interlocutors.

In particular – and this is one of the most crucial issues in roboethics – it is disturbing to think that, one day, robots will be able to (and will have to) make certain decisions. The best-known example is the robot driving a car that must decide, under certain conditions, whether or not to run over a passer-by – to mow him or her down and save the driver, or to self-destruct. Long before robots, these "moral reasoning" tests appeared in practical ethics books and courses. For example: a train is out of control and about to mow down four people; I cannot stop it, but I could divert it to another track on which there is only one person. Is it ethical to sacrifice one person to save four?

Well, we now attribute to robots this kind of moral reasoning underlying decisions, and we wonder by what criteria the robot will make these choices: will it follow a cold algorithm, or will its decision be the result of reasoning based on social values and considerations?

Our confidence that human beings base their ethical choices on intelligence and values is rather naive: looking more deeply, we realize that our decisions are conditioned by elements that have little to do with rationality and principles, and depend instead on other variables.

From this perspective, an emblematic book is Human Kindness and the Smell of Warm Croissants, by Ruwen Ogien, which starts from a study conducted by placing panhandlers outside cafés in France and observing how passers-by behave, giving or not giving alms. One of the most disturbing findings is that people passing in front of a café smelling of warm croissants are more generous than those passing in front of a café without that smell. So, is it kindness or – in this case – the sense of smell that determines our choices?

This example spoils the pleasure of thinking of our choices as rational and ethical: we believe we rely on reasoning and obey principles, and we assume a robot would make its choices differently than we do. A little polemically, we could even say that decisions entrusted to robots might be made with less crude criteria than those underlying our own…

The scenario is vast. Roboethics raises these and other questions, in particular the fear that Artificial Intelligence will become more intelligent than natural intelligence (understood as what characterizes us), and above all that humanoids may prevail.

In the context of our coexistence with robots, healthcare is precisely the field where the future of robotics is most promising. We propose robots to drive cars, but we already include them in care relationships, and they will be able to do several of the things usually done by healthcare professionals.

Q. As robotics and Artificial Intelligence enter the healthcare field, how could the roles and competencies of healthcare professionals be redefined, from the perspective of humanizing care?

SS. Generally speaking, the inclusion of robots in care is truly promising. Not coincidentally, the most advanced country in this field is Japan, which faces the problems of an ageing population and chronic illness; we will face them too, and in fact we partly do already. That is, elderly and non-autonomous people will need more and more assistance, but an increasingly limited number of people will be able to provide it – because of the straitened circumstances of families, but also because of longer lifespans and increased chronicity and disability.

In this scenario, it is reassuring to think that many of the tasks performed today by nurses could be carried out by robots. As anyone who has been in a retirement home or similar facility knows, these tasks are burdensome and exhausting. A robot that lifts the patient can significantly help the nurse. There is another crucial element: care is exhausting from a psychological point of view as well as a physical one. To those afraid of including robots in healthcare, we might ask which is preferable: a nurse who, after the bell has rung for the umpteenth time, shows up moodily, betraying impatience and tiredness, or a robot that responds immediately to the need for assistance.

Perhaps entrusting robots with the hard tasks of assistance could free up time for more intimate, human, and warm relationships with the professionals themselves. The nurse, spared by the robot the fatigue of manoeuvring the patient's inert body, could converse with the patient in a relaxed and gratifying manner. We do not have to choose; on the contrary, we could – and should – learn to combine what machines can do with what only a human being can do.

What a robot, however sophisticated and intelligent, can never do is look into a person's eyes: this remains a fundamental element of the care relationship.

Reflecting on computer science, one of the complaints of many patients is that today, during the visit, the doctor mostly looks at the computer screen, not into the patient's eyes. Certainly, what Artificial Intelligence – in this case, the computer – provides is fundamental to improving the scientific level of medicine. Consider that by now no human intelligence can contain the whole of medical science: without technological support, we would find it far more challenging to make diagnoses and prescriptions. However, we need to prevent the machine – the robot or the computer – from doing the doctor's work. Clinical reasoning remains fundamental, but above all, the relationship is essential. The look – looking, being looked at, feeling looked at – is the first step in a human relationship.

I found on a webpage the story of an American doctor who goes as a patient to a colleague for a clinical problem; he has a 12-minute visit, during which the doctor continually stares at the computer screen, without either looking at or examining him. When he leaves, the patient goes to the secretary's office because the doctor has prescribed a follow-up visit after further tests. When the secretary asks him, "When does the doctor want to see you again?", the patient's first thought is: "But when did she see me?" This small anecdote gives extreme concreteness to the fact that no computer, no robotic support, no humanoid can ever substitute for our need for a relationship that begins with looking at each other.

If the act of looking is disappearing even among us humans, imagine what will increasingly happen in our interactions with robots.

Q. In this scenario, what role can Narrative Medicine play?

SS. Narrative Medicine is oriented toward the relationship.

Care consists of two elements. I start with a sentence I found in the History of Medicine Museum in Padua, in the pharmacology section. The academics of that illustrious school, the first to create a botanical garden, summarized their knowledge in a phrase: herbis non verbis medicamina fiunt, that is, medicines are made with herbs, not words. Given that medicine has made much progress – from herbs we have passed to the latest generation of biological drugs – we could update this supposed wisdom as: care is made with pills, not words.

Narrative Medicine was born precisely as a reaction to this strong tendency, affirming that both pills and words make care. It does not underestimate herbs (and medicines), but says that these are only one part of care: words are the other part.

In this, Narrative Medicine is not alone. Think of how bioethics has insisted on informed consent, which starts from information. While we could practice "pills medicine" without words, a medicine that requires the person's participation, awareness, and active intervention cannot do without information and the patient's involvement in choices. Narrative Medicine follows this current, giving value to words – and in a critical way: just as we want to be sure that the pill we are taking is safe and free of dangerous side effects, Narrative Medicine must evaluate the words exchanged in care, and it must be very rigorous.

Today, unfortunately, the development of informed consent has practically overshadowed the manner of informing: not infrequently, a patient is bluntly given a prediction, a prognosis, perhaps in statistical terms, without that essential conversation. Communication is not "throwing information" at whoever is listening; on the contrary, it is a process, and it requires accompaniment.

The document produced at the 2014 Consensus Conference by the Italian Istituto Superiore di Sanità (the National Institute of Health) calls this a communicative competence. Narrative Medicine deals precisely with these needs, critically evaluating the quality of the words exchanged between those providing and those receiving care.

In conclusion: machines, computers, and robots are not the antagonists of care; on the contrary, they are one dimension of overall care. But no machine will ever substitute for the need for words: Narrative Medicine represents this necessity, without underestimating the contributions of robots.
