Introduction

General background

Welfare states aim to ensure that vulnerable citizens have a reasonable quality of life by providing care and support. This includes those who are elderly and frail. The population of older people is increasing. Between 2015 and 2020, the number of people in the UK general population aged over 65 is expected to increase by 12% (1.1 million); those over 85 by 18% (300,000); and the number of centenarians by 40% (7000). The general population is expected to rise by 3%. (House of Commons Library 2015) Across the world those aged 65 and over are predicted to outnumber children under 5 years old by 2020. (Suzman et al. 2015) Population ageing is a long-term trend that began several decades ago in Europe. The proportion of the population aged 65 years and over is increasing in every EU Member State, European Free Trade Area country and candidate country. The increase within the last decade ranges from 5.2 percentage points in Malta and 4.0 percentage points in Finland, to less than 1.0 percentage points in Luxembourg and Belgium. Eurostat explains the trend by reference to increased longevity and consistently low levels of fertility. (Eurostat 2015) At the time of the 2011 UK census 9.2 million residents were aged 65 and over, an increase of almost 1 million from 2001. Results show that just 50% of those aged over 65 reported their health to be “very good” or “good”, compared with 88% of the rest of the population. In 2011, 56% (5.2 million) of those aged 65 and over were living as a couple, an increase from 52% (4.3 million) in 2001. Those living as married couples increased from 51 to 54% and the proportion living as cohabiting couples almost doubled from 1.6 to 2.8%. Around a third (31%) of those aged 65 and over were living alone in 2011; this was a decrease from 34% in 2001. Accordingly, welfare states face increasing costs for the care and social support for older people who are unable to live independently.

Older people are remaining in their own homes for longer. The proportion of the UK population aged 65 and over who were living in communal establishments declined from 4.5% (374,000) in 2001 to 3.7% (337,000) in 2011. (ONS 2013) The number of older people receiving support organised and/or funded by local authority social care services in the UK is declining, severely affected by budget cuts to care services. Spending on home care services fell by 19.4% between 2010/11 and 2013/14. This has resulted in a 15% decline in the number of older people receiving local authority support with home care, from 437,150 in 2010/11 to 371,770 in 2013/14. (Mortimer and Green 2015; Health and Social Care Information Centre 2014) Older people are themselves unpaid care providers: 14% of those living in households in England and Wales supplied unpaid care in 2011, compared with 12% in 2001. The proportion of those aged 65 and over providing 50 hours or more of unpaid care a week rose from 4.3% (341,000) in 2001 to 5.6% (497,000) in 2011. (ONS 2013) Spending on aids and adaptations has increased by 7.3% since 2010/11, but the number of older people benefiting from these services has fallen by 83,945 (Mortimer and Green 2015; Health and Social Care Information Centre 2014).

Social robots are being developed to meet the shortfall in care, and also to assist those providing unpaid care. They are potentially important not only in relieving loneliness but in helping their users maintain a normal routine in the face of frailty or in supporting them through the process of rehabilitation.

Overarching aim

The overarching aim of this paper is to add to the debate about the values that should underlie the development and integration of social robots into the homes of older people, given the trends reported in the previous section. Social robots can provide a ‘presence’ in the home of an older person that other technology cannot. But in order to assist the user by fetching and carrying, by keeping track of his or her preferred routine and acting as an early warning system for a health emergency, the robot can sometimes be intrusive, collect and communicate data potentially at variance with the user’s wishes, and help to connect the user to an outside world that can present dangers. This paper attempts to determine the relative weight of values like privacy, autonomy, and safety when the overall aim of assistive technology is to help older people retain as much of their autonomy as younger people do. These reflections are made in the light of qualitative data about these values collected from older people and their formal and informal care providers. The values were drawn from an independently devised philosophical framework that suggested an order of priority among values like autonomy, independence, safety, and social connectedness (Sorell and Draper 2014). The research looked for the order of priority implicit in users’ and carers’ responses. These responses were elicited independently of any exposure to our framework, as we wanted to see whether these spontaneous responses would agree with or call into question the philosophical framework. The research was embedded in a wider programme of robotics research called Acceptable robotiCs COMPanions for AgiNg Years (ACCOMPANY).

ACCOMPANY and the embedded ethics research

The aim of ACCOMPANY was to develop:

a robotic companion as part of an intelligent environment, providing services to elderly users in a motivating and socially acceptable manner to facilitate independent living at home… provid[ing] physical, cognitive and social assistance in everyday home tasks, and … contribut[ing] to the re-ablement of the user, i.e. assist the user in being able to carry out certain tasks on his/her own. (accompanyproject.eu)

The ACCOMPANY system used the Care-O-bot 3 platform, which is mobile, has a manipulating arm, is capable of working autonomously in a smart home environment and “co-learns” with its user (Amirabdollahian et al. 2013). The target user was a cognitively able older person, living alone, whose physical health and memory were starting to decline, and whose ability to live independently in his or her own home was threatened. Investment in systems such as those developed by ACCOMPANY is one response to increases in the population of older people who are unable to care for themselves. Such systems also address the need to offer and provide acceptable care as economically as possible.

The findings reported in this paper represent the second and third of three phases of ethics research undertaken as part of the ACCOMPANY project.

The three phases of our ethics research can be seen in Fig. 1. In Phase One we proposed an initial ethical framework for the development of social robots on the basis of: (i) a review of the philosophical literature on the ethics of designing and using social robots; and (ii) the purposes of the robot being designed in ACCOMPANY. In the resulting paper (Sorell and Draper 2014), we suggested six values (respect for autonomy, safety, enablement, independence, privacy and social connectedness) that should inform the design of social robots for older people keen to continue to live independently despite growing frailty.

Fig. 1 Three phases of ethics research on ACCOMPANY

We also suggested an order of priority among these values, where they conflict. We argued that tensions between these values (especially between safety and autonomy, autonomy and independence, safety and privacy and sometimes between autonomy and social connectedness) were inevitable, and that where such tensions arose, autonomy should be regarded as the overriding value.

In the study reported here, we aimed to explore whether the six values we had proposed at the conclusion of Phase One (Sorell and Draper 2014) would be reflected in the responses of potential stakeholders to situations that might arise when social robots are integrated into the homes of older people. We also wanted to know whether our view that autonomy should be the overriding value would be reflected in their intuitions and whether other values might emerge that we had not considered. In this paper we report and discuss the reactions of older people and informal and formal carers of older people to scenarios making explicit possible tensions between these values. The data were collected and analysed during Phase Two of our research for ACCOMPANY. In Phase Three, the findings were integrated into the initial framework developed in Phase One, and we considered how our overall findings should influence design, policy and practice concerning social robots for older users.

Other authors have explored from a purely conceptual point of view the potential ethical difficulties that arise when social robots are designed to assist with care provision. For instance, Sharkey and Sharkey (2012) and Körtner (2016) highlight a range of issues related specifically to the care of older people; Coeckelbergh (2015) relates intuitions about care, autonomy and related notions to general considerations about modernity; and van Wynsberghe (2013) applies the theoretical perspective of care ethics to produce her value-sensitive design approach. Similarly, Vallor (2011) provides a comprehensive review of the relevant ethics literature up to 2010 in her analysis of the ideal of care in relation to the use of care-providing technology; Sparrow (2015) argues that robotic design for older users should be geared to promoting happiness rather than to achieving seemingly objective measures of welfare; and Matthias (2015) addresses the issue of deception that may arise when the mental image older users have of social robots diverges from their current technological capacities. Our research contributes to this growing literature. This paper is distinctive because it reports and takes into account the views of potential user groups in reaching conclusions about the ethical design of social robots and their integration into the homes of older people. In this respect it moves beyond the literature that depends on purely conceptual analysis, reflecting the empirical ethics approach that is increasingly being used to enhance bioethics analysis (e.g. see Frith 2012).

After outlining the method used in Phase Two (data collection and analysis) and reporting and discussing our findings (including their limitations), we suggest how tensions between the six values identified in Phase One can be managed in practice. We also comment on design, policy and practice issues.

Method

We devised four realistic scenarios (see Table 1) based on the projected capabilities of the ACCOMPANY system and the target user group. The scenarios reflected situations in which some of the values distinguished in the philosophical framework could be in tension. Focus groups of older people and formal and informal carers of older people were asked to comment on the scenarios.

Table 1 The scenarios discussed in the focus groups

In the first scenario, the robot is programmed to encourage Maria to move around at home and take her medication in line with medical advice. Visiting healthcare professionals can access information stored by the robot about Maria’s adherence to this healthcare regime. Here there is potential for more than one kind of tension between different pairs of autonomy, independence, privacy and safety. In the second scenario, Frank autonomously resists his daughter’s attempts to widen his social network (social connectedness) by programming the robot to encourage him to access an online forum about fishing, which used to be his main leisure activity. The third scenario was devised to draw out issues raised by the empathic ‘mask’ being developed in ACCOMPANY. This mask was intended to simulate a companion’s responses to events in the user’s environment (e.g. alarm at a plant being knocked over) or annoyance or sadness if the user over-used a squeeze-sensitive interface for summoning the robot urgently (Marti et al. 2014). In the scenario, the robot is programmed to respond negatively to rudeness on the part of its user, Nina. Nina’s rudeness is disrupting her care-relationships and causing distress to her daughter (autonomy, independence, social connectedness). In the final scenario, privacy, independence, autonomy and safety are in tension as Louis resists attempts by his family to programme his robot to alert them when he falls, and his family wish to place controls on his using his robot as an interface for online gambling activities.

The method for data collection and analysis used in this project has already been peer-reviewed and published in detail elsewhere (Draper et al. 2014b; Bedaf et al. 2016). Accordingly, it is only reported in brief here. Working with the ACCOMPANY user panels established by consortium members Centre Expert in Technologies and Services Maintien en Autonomie à Domicile des Personnes Âgées (MADoPA), Hogeschool ZUYD (ZUYD) and University of Hertfordshire (UH), along with the Birmingham One Thousand Elders, University of Birmingham (UoB), 21 focus groups (FGs) were convened at the four sites in France, the Netherlands and the UK. These included 123 participants who were older people, or informal (family members, friends etc.) or formal (paid, trained) carers of older people (see Table 2).

Table 2 Focus groups and participants

Written consent was obtained from all participants prior to participation. FGs were conducted in local languages by local facilitators, with each site using the same facilitators for all groups. To ensure consistency across the sites, a topic guide with a series of prompts was designed, and the FG facilitators discussed in advance how this should be used to ensure common understanding.

The FGs were audio-recorded and transcribed verbatim. One representative transcript from each kind of group (older people, informal carers, formal carers) was translated into English from French and Dutch. All of the English transcriptions were then coded independently by Draper and Sorell using a combination of directed analysis (seeking to identify text that corresponded to the six values identified in Phase One) and Ritchie and Spencer’s framework analysis (Ritchie and Spencer 2002). This resulted in a high degree of inter-coder agreement. The resulting coding and the emerging themes were discussed with the other facilitators, who coded the outstanding non-English transcriptions, noting any disconfirming data and new themes, and identifying and translating illustrative quotations. Draper discussed the resulting coding one to one with each of the two other coders. A draft report was then circulated to all facilitators for comment and agreement. Figure 2 summarises how data were collected, analysed and combined to reduce inconsistency between the four sites and different countries. The data were analysed by group—older people (OP), informal carers (IC) and formal carers (FC). Codes and themes were identified within these groups of data, and for each code a data set of quotations from different participants within each focus group was produced for each group. The main themes were organised into group mind-maps to highlight inter-connections. These mind maps can be seen in Figs. 3, 4 and 5.

Fig. 2 Method of data collection and analysis

Fig. 3 Mind map of analysis of older people groups. In this figure the black boxes represent themes that pervaded all of the other main themes, which are represented by the grey boxes. The lines between the boxes show more specific inter-relations. For instance, views about social connectedness, behaviour modification, safety and privacy were all conditioned by views about autonomy, whereas the views about the role of the robot arose mostly in relation to privacy and safety

Fig. 4 Mind map of the analysis of the informal carer groups. In this figure, grey boxes represent the main themes and white boxes the sub-themes. The lines between the boxes show the inter-relationships between the themes and sub-themes. For example, persuasion was a theme in its own right but views about persuasion influenced views about resistance, autonomy, the need for a human element and family/caring issues (which were also themes in their own right) and it led to, or influenced, discussions about how the robot was introduced into the home of an older person and relationships with professional carers

Fig. 5 Mind map of the analysis of the formal carers groups. In this figure, grey boxes represent the main themes and white boxes the sub-themes. The lines between the boxes show the inter-relationships between the themes and sub-themes. For example, respect for autonomy was a theme in its own right but views about respect for autonomy influenced views about the role of negotiation, safety, privacy and how the robot was perceived (which were also themes in their own right) but it was less influential in the sub-themes than perceptions of the robot, which led to, or influenced, discussions about the need for a human element, adherence and relationships within care teams. Protecting or promoting the best interests of older people, on the other hand, was a sub-theme in considerations of respect for autonomy, safety and negotiation

Favourable local ethical review was obtained by each participating centre, and EU ethical standards were always observed.

Findings

This was a large study by the standards of qualitative research. Here we report the main results, namely whether and how participants invoked the six values from Phase One in their discussion of the scenarios. We report how tensions between the six values were addressed and whether autonomy was given more weight than the other values when conflicts arose.Footnote 1

We will follow qualitative reporting norms, providing illustrative quotations from our data set. Qualitative analysis is a process of interpretation that takes into account the strength of the views expressed as well as how often they were similarly expressed within the different participant groupings. The purpose is to explore the views of the participants. Accordingly, no attempt will be made here to quantify the views or to generalise from them.

We will start with general responses to the scenarios.

Responses to the tensions

The participants tended to regard the scenarios as practical problems of reconciling user and carer interests where there was disagreement over how the robot was to be used. We noted three broad problem-solving strategies used by participants: (a) finding a process through which the parties in the scenario could reach agreement after compromising or trying out one another’s suggested uses of the robot; (b) reading different roles for the robot into the scenarios and then applying relevant role-norms to the tension raised by the scenario; (c) hypothesising an agreement between users and providers of the robot prior to its introduction and then referring back to the hypothesised details of this prior agreement to resolve conflicts.

Compromise, persuasion and negotiation

All three groups sought to accommodate the interests of the disagreeing parties, giving some weight to all of the values in play. Participants often referred to examples from their own experience of using compromise, persuasion or negotiation when providingFootnote 2 or receiving care. OP participants talked in terms of compromises between parties to the tension in which everyone conceded something or sought common ground. The IC participants relied heavily on persuasion as the means of bringing the older person around to a view that would resolve the tension. FC participants tended to speak about the need to negotiate with older people. For example, in the second scenario, Frank resists his daughter’s idea of using the robot to connect him with an online group of fishermen. One OP participant suggested Frank try out—get a “taste” of—the online group before definitively rejecting it. This is a typical example of compromise. An IC participant suggested that the experience of the online forum could be contrived ‘by accident’ for Frank by his daughter in the hope that he might thereby be persuaded to try using it. An FC participant responding to the Maria scenario thought that compliance should be negotiated so that Maria could decide when to schedule movement: in this way adherence to her medical regime would not conflict with other things she wanted to do, such as watching a favourite television programme:

1. Well if would she could just show him a taste of… just a taste. If he doesn’t like it well she backs off, she’s tried just to show him (UoB OP1 P2Footnote 3 FRANK)

2. You could pretend you pressed the wrong button on the robot or something and saw it by chance. By the time he’s tried to find out what’s happened or you tell him the truth, he’ll have seen the channel and may well be interested. Sometimes you have to use fair means and foul to change people’s minds… (MADoPA OP1 P4 FRANK)

3. Actually it should be such that persons are able to modify the time schedule a little bit, it should not be a black and white option like six o’ clock is six o’ clock, or 8 is 8, with no room for adaptation (ZUYD FC1 P4 MARIA)

UoB OP1 P2’s comment (quotation 1) above is typical of the way in which the autonomy of the older person was used to define the limits of these processes. In broad terms, the OP participants—whilst sympathetic to the problems that this could create—tended to feel that the wishes of the older person should prevail if a mutually satisfactory compromise could not be reached. IC participants tended to accept that persuasion would only take them so far towards a resolution, and that they might ultimately have to capitulate to the older person. FC participants were also inclined to accept that they were not able to force a settlement but seemed generally less willing to make concessions to their clients than the IC participants. In quotation 4, for instance, FCs are discussing how to manage complaints about unavoidable lateness by firmly explaining the constraints under which they are operating. In quotation 5 FCs are discussing Louis’ unwillingness to let the robot monitor and report his falls. Here accepting the alert is presented as a concession Louis needs to make to enable his continued care at home.

4. They’ll understand, but they’ll still make some kind of comment like, “Ah, did you sleep through your alarm clock?”, and I’ll say, “No, but sometimes the unexpected happens”, and they’ll say “True enough”. And if it goes too far, as it has done sometimes already, I’ll say things like, “What if something happened to you? Would you like it if after half an hour I said to you, listen I have to go now because someone else is waiting for me? What would you say? I’m sure you’d rather I stayed with you.” After that, they tend to calm down, but you always have to talk to them and explain things! (MADoPA FC1 P5 LOUIS)

5. the bottom line is ‘Louis you wanna stay in your own home, but you’re not the only person involved in this, we don’t have any peace of mind unless you agree to, this is the bare minimum, you gonna let us be alerted when you fall on the floor, else we can’t support you staying at home any more’ (UH FC P5 LOUIS)

What participants doubted, however, was whether a robot would be capable of the persuasion or negotiation in which human carers regularly engaged. For this reason they believed that robots could not replace humans (many also believed that robots should not replace humans). In their responses to the scenarios, many assumed or asserted the need for a human intermediary between the older person and the robot, who would persuade or conduct the negotiation.

6. it still requires a person to explain this to her and model it to her and to see if she can actually do it because she might not be able to do it (UH IC P1 NINA)

7. That’s the thing that’s going to make the difference between a carer and a machine. A professional care worker is going to be able to stimulate, encourage and repeat all these requests, and so on, and also explain again and again why we’re there, why that person has to get up and go for a walk, etc. I think that’s what’s likely to make the difference (MADoPA FC P7 MARIA)

IC and FC participants tended to consider that older people were resistant to change and could be stubborn.

8. I think that these older people, they will not go with the robot, really! From the experience with my father… He would not say something like, ‘OK I will walk’, more like: ‘switch that device off’ (ZUYD IC2 P3 MARIA)

They tended to anticipate that older people would have difficulties accepting a robotic carer. OP participants did not always bear out this view. They did not question the presence of the robot. Instead, they commented on aspects of the robot’s actual or potential behaviour that they would not/did not like, and also on what they thought would be advantageous about having a robotic carer (see Draper et al. 2014b).

Assigning roles that imply norms

Another strategy that our participants employed to resolve tensions between values in scenarios was to refer to norms associated with particular roles, which were then applied to the robot. The participants did not individually or within particular focus groups or group types consistently assign the robot a specific role; instead, participants assigned different roles in different circumstances. The most commonly referred-to roles were servant, healthcare provider or extension of a human healthcare provider (see UoB OP3 P7, quotation 12 below), and companion:

9. I think his [Louis] relationship with the robot is the best one. He actually looks on it as a [5: friend!] helping with his life and supporting him (UoB OP2 P2 LOUIS)

10. The advantage of a robot, it’s, you were talking, you had a home-help two hours, three hours per week, the robot, once it’s there and equipped, can work 10 hours a day. That doesn’t bother it (MADoPA IC3 P1 MARIA)

Here the idea that there was no upper limit on the time demands that could be made on the robot is linked to its being a machine. Unsurprisingly, the idea that the robot was a machine or thing (as opposed to a person) was expressed often. The following is a typical reaction, especially when the robot in the scenario had been programmed, or could be programmed, to be more assertive:

11. To me a robot will always be a machine (MADoPA IC1 P2 MARIA)

The participants associated different norms with different roles. For example, assigning the robot the role of a servant enabled them to assert that users could reasonably expect the robot to do as it was told. On the other hand, when they felt that it was reasonable for the robot to be programmed to resist certain activities—gambling for example in the Louis case—this was because they thought it would be wrong for a healthcare professional to introduce, facilitate or appear to encourage a patient to gamble.

12. it is a bit like the nurse coming in and saying ‘Shall we have a game of poker?’ isn’t it. And you wouldn’t expect that (UoB OP3 P7 LOUIS)

Postulating and adhering to a prior agreement

Finally, some participants assumed that in the pre-history of the scenario situations the parameters for robotic behaviour had been agreed with the older person in advance. They referred to a prior agreement as a mechanism for enforcing expectations in practice. This meant, for example, that even if they regarded the robot as a machine or servant to be commanded they could, at the same time, limit what a user might command it to do. Prior agreement was also a mechanism for respecting autonomy since it was implied that if someone had agreed to do something, other things being equal, they would have done so autonomously and should abide by the agreement.

13. You have chosen yourself to have that thing in your house, so you also have to accept the things it does. (ZUYD OP1 P2 MARIA)

14. I’m assuming that this isn’t forced on her she agreed to have a robot, so stay at home and have a robot rather than sort of saying ‘Right, if you don’t have it you have got to go to care’ so it’s not something she has got to have. It’s something that she makes the choice to have the robot and I think you made that choice she has got to pay a little attention to it even if it is a robot. (UH OP P2 MARIA)

We now turn to our participants’ views on each of the six values in the ethical framework from the Phase One ethics work.

Autonomy

Autonomy is the capacity to make choices and lead one’s life as one chooses. All types of participants agreed that being older was not itself a reason for taking such choices away from people.

15. Elderly people still have their personal freedom and if they say no it should be no, shouldn’t it?” (MADoPA OP1 P1 MARIA)

16. [older people] are still capable of making their own decisions. (ZUYD IC1 P3 LOUIS)

17. It always comes back to the fact that what the professional care worker needs or wants is not necessarily what the user needs or wants. Our priority is the user’s need or want and we have to take it into account. We aren’t going to do anything without the user; if he or she doesn’t want to do something, we can’t force them to do so against their wishes. (MADoPA FC1 P6 FRANK)

Participants were aware, however, that if older people had lost, or were beginning to lose, their mental abilities, this might be a reason for giving less weight to their choices, especially when these choices posed a risk to safety or well-being or where they depended for their fulfilment on the cooperation of a reluctant carer.

By using the compromise, persuasion or negotiation processes to resolve conflicts in the scenarios, participants were already giving considerable weight to the autonomy of the older person, but alongside the autonomy of formal and informal human carers (as reported above). Robots, on the other hand, do not have autonomy, and some participants did not like the idea of a ‘mere machine’ apparently going against the autonomous wishes of the older person. Others thought that the ability of the robot to persist where humans might become exhausted or demoralised was valuable (as suggested by participant MADoPA IC3 P1 in quotation 10 above). Equally, however, it might be a disadvantage if it only served to wear down the older user to the point of compliance, as this would be coercion not persuasion, and would undermine autonomy.

18. I’ve got a slight problem with this nagging if you’re saying that that it could go on prompting you because it knows you haven’t moved. Presumably it’s recording that. I’ve got a slight problem that this is very Big Brother-ish we’re going to catch you out if you try and lie to us about what you’re doing (UoB OP2 P1 MARIA)

As we have seen, our participants generally favoured autonomy-promoting paternalism delivered by means of human persuasion. Robot pressure on the older person’s behaviour, by contrast, had to be time-limited: for the participants, the robot could only go so far before the will of the older person had to prevail. This was partly because participants were concerned that the older person might depend on the robot, and therefore be vulnerable to harm if the robot refused to help. For instance, in the first scenario, there was concern that Maria would become dehydrated if the robot engaged in a battle of wills with her over whether she went to the kitchen herself to fetch a drink.

All groups acknowledged that in the care triad (older person-informal carers-formal carers), the wishes and interests of people other than the older householder needed to be taken into account. Consideration of a conflict of these interests was prompted by the fourth scenario, where Louis’ reluctance to programme the robot to alert carers to his falling had resulted in his spending a long time on the floor, which had in turn increased care demands on his daughters-in-law. Of the three groups, the FC participants were generally less willing to settle conflicts in favour of the older person, though they were not especially sympathetic to the interests of informal carers. Rather they drew attention to the fact that they were themselves a limited resource that had to be distributed fairly among their clients (as illustrated by MADoPA FC1 P5’s comment in quotation 4 above).

Independence

People are independent when they are able to act on their choices without significant help from others. The ACCOMPANY project envisaged a care-robot not for the incapacitated or seriously disabled but for those who as a result of increasing age-related frailty find it harder to carry out certain tasks, e.g. lifting or house-cleaning. Independent older people might be able to carry out these tasks while taking longer—perhaps much longer—to do them than younger people. On the other hand, they do not depend on others to decide on their activities, or to feed and clean themselves, or to take medication.

Views about independence (as distinct from autonomy) were not especially prominent in the focus group discussions. A few participants noted that the way a householder chose to use the robot could erode its ability to promote independence. They noted, for instance, that fetching and carrying functions could disincline users to fetch and carry for themselves, with the possible result that they lose the ability to fetch and carry for themselves and so require more care.

19. “I pay to have someone do things for me”. My response is, “Yes, you pay, but you pay to have someone help you do things”, which people don’t like hearing because for them it’s a case of, “I pay therefore you do it instead of me”. (MADoPA FC1 P4 MARIA)

20. In her situation I wouldn’t actually program the robot at all to get her the treats. Because there isn’t actually a need in her normal state (UH IC P1 MARIA)

This reflects a tension between independence and autonomy that the scenarios were designed to express. The participants seemed to favour a balance between independence and autonomy. For example, they generally supported the idea of a care-robot designed to give reminders to take medication. Difficulties remembering to take medications—due to degrees of memory loss or complexity of medication regimes—are an impediment to living independently, and therefore having reminders was regarded as useful support, but something that fell short of taking over the administration of medication.

Enablement

For the purposes of this paper, enablement is a process, possibly involving the care-robot, of acquiring or regaining certain abilities needed in daily life. Participants’ attitudes to enablement were mixed. They could see the value of a robot that was able to help older people regain or acquire skills, but they worried about coercion. As already mentioned, participants expressed doubts about the robot’s ability to persuade, and they were concerned about the robot forcing cooperation from its user.

The scenarios provided different examples of enablement that the ACCOMPANY robot might support. Although we had envisaged that participants’ views about enablement would be elicited mainly by the first scenario, the other scenarios prompted interesting comments as well. What emerged was a spectrum of views on health-related enablement, with reminders to take prescribed medicine at one end and health-promotion at the other. Possible interventions were placed on the spectrum according to whether participants thought that a particular behaviour should or should not be modified. So, reminders to take prescription medicine were regarded as relatively uncontroversial, whereas using the robot to prevent smoking, alcohol consumption, physical inactivity and poor diet was more controversial. These uses were more controversial because participants doubted that robots designed to help older people should limit people’s liberty to take risks with their own health.

21. I don’t think a robot is a power thing that can change behaviour…. It is her choice. A robot can’t be used as a power to change the behaviour of an adult woman. That is my opinion. (ZUYD FC2 P6 NINA)

A particular concern was that robot monitoring might be used to interfere with the user’s choices (again typified by the comment from UoB OP2 P1 in quotation 18 above). Some participants seemed to worry that permitting the robot to modify the behaviour of users at all would be the start of a slippery slope leading to the robot’s taking control. Other participants were concerned about robot interference in possibly harmful but nevertheless autonomous choices expressive of the user’s strong or characteristic preferences.

22. I think if they’re constrained to the physical assistance then that is fine as it’s when they stray into this kind of behaviour modification and all the rest of it, it starts to get a bit worrying (UoB OP2 P4 NINA)

Participants did not necessarily approve of the choices individuals made in the scenarios (Louis’s gambling was considered reckless by some, for instance), but they regarded interference with some choices as an attempt to change what someone was like. This they generally disapproved of, particularly in relation to the Nina scenario (we have reported this finding in detail elsewhere (Draper and Sorell 2014)). For many different reasons, then, the participants often seemed to favour autonomy over interventions for the sake of enablement (as the comment of ZUYD participant FC2 P6 in quotation 21 above suggests).

In fact, the tension between autonomy and enablement may be more complicated. Participants did not disapprove of efforts to enable older people; they were concerned that these efforts would be made by the robot as opposed to human carers who were able to negotiate with the older person. In each of the groups, negotiation or persuasion was regarded as completely acceptable, so long as rejection of suggested behaviours was open to the older person.Footnote 4 Participants were concerned about whether the robot would be so inflexible as to be coercive, and as we have already seen our participants tended to doubt that a robot could replace a human when it came to coaxing the older person.

Enablement can include rehabilitation, which often requires an effort on the part of the person seeking to be re-enabled. This may consist of effort in the face of physical discomfort, and frustration associated with an action that could previously be performed with ease. Technology sometimes accommodates more passive rehabilitation (as in the case of mechanical devices that gently and repeatedly move limbs to rebuild muscle strength and increase movement range) but even these may require the user to make some effort and endure some discomfort. Such devices, although they are set up by physiotherapists, remain in the control of the user; if the machine-assisted movement causes too much pain, the user may simply stop using the device. The question is whether the older person could and should have a similar level of control over a care-robot.

In the ACCOMPANY system the scope for the robot to control the older person was very limited. It could verbally encourage movement (‘come to the window’) or perhaps resist a command (refuse to fetch something to encourage the person to get it (move) for her/himself). These constraints were reflected in the scenarios and topic guide. Some participants imagined the robot turning off the TV or positioning itself in front of it against Maria’s wishes until Maria elevated her leg. Even though the participants were not averse to robotic enablement, they disapproved of the robot’s seemingly asserting itself. Sometimes participants appeared indirectly to express a fear that a robot might force someone to perform painful movements, which they regarded as unacceptable,Footnote 5 even where these were part of a therapeutic regime.Footnote 6

23. I am not sure if a robot, if it can be forceful…if you do not walk with me, I will not do that or whatever (ZUYD IC2 P1 MARIA)

Many of our participants felt that if a user was unwilling to cooperate with the enabling functions of the robot, it was not unreasonable for the authority paying for the robot to remove and reallocate it.

24. To begin with, if someone wants a robot in their home, if they decide to get one, then what’s the point if afterwards they actually don’t listen to it?… To my way of thinking, with the robot it’s the same as when you go to see a doctor. If you don’t take the medication he prescribes for you, why bother going in the first place? (MADoPA OP1 P7 General reflection on all cases at the end of the FG session)

25. That they actually sign that they agree to having this robot instead of going into a care home because the function of this robot is not just to be useful but also for health and safety. (UH IC P4 LOUIS)

Safety

Safety is being insulated from sources of harm. The insulation can be provided by one’s own choices and policies or by the interventions and policies of others.

The safety of the older householder was discussed in response to all of the scenarios. It was a concern for some participants even where a scenario was not designed to emphasise safety. Some participants were concerned that harm could befall Maria and Nina if the robot refused to act on their instructions.

26. I think it’s dreadful that[the] machine… actually not do what it’s supposed to do [4: frightening] [2: I find that quite quite] scary Yeah and I think that’s awful to have, to programme a machine that that sort of won’t help her (UoB OP2 P5 NINA)

Participants were also concerned about the potential dangers of internet interactions in the Frank scenario, and the risks of gambling in the case of Louis.

In the Louis scenario as well, many were very uncomfortable about Louis being able to prevent robot-alerts about his falls, or remaining on the floor for long periods following a fall. In this scenario Louis was in control of the programming and elected not to programme the robot to summon help, a decision that was questioned by his daughters-in-law. In all the focus groups, the predominant feeling was that the robot should summon help in the event of a fall regardless of the older person’s wishes to the contrary.

27. Only then and not every time. He indeed falls multiple times a day and you don’t have to be alarmed every time, but you can set the sensors that they send an alarm if he’s on the same spot for 10 minutes. (ZUYD FC2 P4 LOUIS)

28. I mean probably the robot would only need alert with falls when he stayed down. (UH OP P2 LOUIS)

As the above comments suggest, the most commonly proposed compromise was that the householder be given time to get up before the robot alerted external agents, but our participants mostly supported the use of a default alert setting: the householder could choose within narrow limits how quickly an alert was issued, but would not be able to override the default setting completely. They thought that the user could also be given a choice about whom to notify—this might not be the daughters-in-law in the case of Louis—but they seemed to suggest that it would be unacceptable for no-one to be alerted. There was no specific agreement amongst participants about the precise point at which the alert should be sounded regardless of the user’s wishes. Instead, participants spoke vaguely about the point at which the user would suffer harm if help was not forthcoming.
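To make the shape of this compromise concrete, the following minimal sketch (in Python) models a fall-alert policy of the kind participants described. The class name, the bounds on the delay and the default contact are illustrative assumptions rather than features of the ACCOMPANY system: the user can tune how quickly help is summoned and who is notified, but cannot disable alerting or leave the contact list empty.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative bounds, not taken from the ACCOMPANY system: the user may tune
# the alert delay only within these limits and can never disable alerting.
MIN_DELAY_MINUTES = 2
MAX_DELAY_MINUTES = 15
DEFAULT_DELAY_MINUTES = 10


@dataclass
class FallAlertPolicy:
    delay_minutes: int = DEFAULT_DELAY_MINUTES  # time allowed to get up unaided
    contacts: List[str] = field(default_factory=lambda: ["care_service"])

    def set_delay(self, minutes: int) -> None:
        """Accept the user's preferred delay, clamped to the permitted range."""
        self.delay_minutes = max(MIN_DELAY_MINUTES, min(MAX_DELAY_MINUTES, minutes))

    def set_contacts(self, contacts: List[str]) -> None:
        """Let the user choose whom to notify, but never allow an empty list."""
        if not contacts:
            raise ValueError("At least one contact must receive fall alerts.")
        self.contacts = list(contacts)

    def should_alert(self, minutes_on_floor: float) -> bool:
        """Raise the alarm once the user has been down longer than the delay."""
        return minutes_on_floor >= self.delay_minutes


# Example: Louis lengthens the delay as far as the policy allows and chooses
# to notify his GP surgery rather than his daughters-in-law.
policy = FallAlertPolicy()
policy.set_delay(30)                    # clamped to MAX_DELAY_MINUTES (15)
policy.set_contacts(["gp_surgery"])
print(policy.should_alert(minutes_on_floor=12))  # False: still within the delay
print(policy.should_alert(minutes_on_floor=16))  # True: summon help
```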

Participants appealed to role-norms in this connection. They found it incongruous that a robot carer could be present and not summon help. Some participants tended to anthropomorphise the robot, thinking of it as a human being standing idly by and doing nothing. For others, the robot represented a safety net that should not be disabled.

29. P7: It should at least raise the alarm. According to this example, we’re dealing with a gentleman who falls a lot but generally manages to get up again by himself, but the day he didn’t manage, the robot didn’t do anything.

P1: Precisely!

P7: The robot should have raised the alarm. (MADoPA FC1 LOUIS)

For our participants, keeping older people safe from particularly serious harm was close in importance to autonomy in a hierarchy of values.

Privacy

A person enjoys privacy when there is restricted access to information about them, including information that can be gained by observation. Our participants generally agreed on the importance of privacy. Our FC groups tended to discuss privacy in relation to formulas and routines that they took to be embedded in their professional codes of conduct and good practices. Other groups tended to describe their views in terms of unwelcome intrusion or ‘Big Brother’ surveillance—quotation 18 above is typical in this respect.

At the same time, there was little resistance to, and some positive support for, information being accessed directly by health professionals for therapeutic purposes. Here the robot seems to have been regarded as an extension of the healthcare professional or a therapeutic tool. Nevertheless, participants were concerned about health information being accessed by or passed to family members/informal carers. In this connection they seemed to be applying the norms of medical confidentiality.

30. I think that’s more medical but I think, so I don’t think the daughters-in-law need to be informed of that, but falls that he didn’t get up from, yes… I don’t think they should be entitled to know anything that’s too personal. I think his personal life at his age as he is obviously still ‘compos mentis’ it’s his business. They should be entitled to know things that deal with his safety. (UH OP P2 LOUIS)

The FC participants in some groups were concerned that the robot could be used to monitor the care they were providing.

31. P4: I think it’s all very ‘Big Brother is watching you’ if you have such a thing in your home and it can be programmed at all times to turn against me.

P1: Yes. You could look at it like that. (ZUYD FC2 NINA)

Our participants did not have a clear view of what the robot would be recording and in what format. A robot could in principle make continuous video recordings similar to a CCTV camera. Whether this would be privacy-violating would depend on why and how the recordings were made, what was recorded, who could access these recordings and on whose authority, how secure the data-storage system was, and how long the data was stored. For instance, visual images of robot-human interactions might be useful for enablement. The robot might be able to enhance the user’s recall by providing pictures of when s/he last ate or drank, took tablets, telephoned a family member etc. (Ho et al. 2013). Some of the FC participants thought it would be useful to access information stored by the robot.
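The paragraph above lists the factors on which the privacy implications of robot recordings depend. The sketch below (in Python) simply gathers those factors into a hypothetical data structure; the field names, roles and example values are our own assumptions, not part of any ACCOMPANY specification, and are meant only to show how a recording policy agreed with the older person might be made explicit and checkable.

```python
from dataclasses import dataclass
from typing import FrozenSet

# A hypothetical description of a single category of robot recording. The
# fields mirror the questions raised in the text: why and how data are
# recorded, who may access them and on whose authority, and for how long
# they are kept.
@dataclass(frozen=True)
class RecordingPolicy:
    purpose: str                        # e.g. "enablement reminder", "care audit"
    data_recorded: str                  # e.g. "medication events", "video of interactions"
    authorised_viewers: FrozenSet[str]  # roles, not named individuals
    authorised_by: str                  # whose consent or authority permits access
    encrypted_at_rest: bool
    retention_days: int

    def may_view(self, role: str) -> bool:
        """Access is limited to the roles the older person has agreed to."""
        return role in self.authorised_viewers


# Example: medication-adherence data shared with visiting health professionals
# (as in the Maria scenario) but not with family members.
adherence_log = RecordingPolicy(
    purpose="support adherence to prescribed medication",
    data_recorded="times at which medication was taken or missed",
    authorised_viewers=frozenset({"user", "visiting_nurse"}),
    authorised_by="user consent at installation",
    encrypted_at_rest=True,
    retention_days=90,
)
print(adherence_log.may_view("visiting_nurse"))   # True
print(adherence_log.may_view("daughter_in_law"))  # False
```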

32. They could look at the print out together, that wouldn’t be quite as invasive as the robot saying: ‘Actually she didn’t do that when I told her three times and she didn’t get up!’ (UH FC PF MARIA)

33. They cannot cheat, right?… That is the difference. The measures are taken and the robot sends them on to the physician. So there is no possibility to add a few degrees, or make it some degrees less. (ZUYD FC1 P2 MARIA)

Social connectedness

Social connectedness is having regular exposure to and interaction with other people, often other people with whom one has things in common. It is valuable because it alleviates loneliness, and has other benefits. Older people are more likely than middle-aged adults to lose their friends or spouses through death. Some forms of disability and incapacity due to old age can also make older people’s surviving friendships less valuable. Social interactions stimulate people and make cognitive and other demands they would not otherwise meet.

The importance of social connectedness was reflected in the discussions of the OP and IC groups, but was not prominent in reactions of the FC group, which tended to concentrate on the way their own interactions with older people could not be simulated or reproduced by care-robots. Participants from other groups tended to agree that at least some human interaction was irreplaceable.

34. I suppose a robot is not like a human you can interact with really…It will do requests and what you need, or it’s programmed to, y’know remind you of things. But it’s not the same as having a person who you can talk about anything to. (UoB OP1 P2 LOUIS)

35. …we rely a great deal on neighbours, a great deal indeed. It’s really important for people to be integrated into their community. (MADoPA IC1 P1 MARIA)

Social contact and being part of a community were considered valuable quite apart from receiving care from humans. Some of the groups did, however, discuss how social connectedness provided a care safety net for older people. For instance, being integrated into a community meant that neighbours and others noticed deviations from normal behaviour (not opening shutters or not being seen out and about) that could indicate an older person in difficulties.

The Frank scenario was designed to elicit reactions to a potential tension between autonomy and social connectedness. In response, some participants drew a distinction between loneliness and being alone, recognising that not all people who are socially isolated actually want or miss human company.

36. I know three people who are in their mid and late nineties. Two are very active, very outgoing…One will not [go out]. And that is the fundamental difference between them and they have been like that all their lives. (UoB OP1 P7 FRANK)

Nevertheless, participants were generally in favour of coaxing older people at least to try to remain socially connected. This suggests that they thought people should not settle for loneliness by the mechanism of adaptive preference.

Our participants discussed both virtual social connectedness and maintaining relationships by video-calling and social networking sites. The participants who spoke in these discussions all seemed to be familiar with this use of the internet. They found interactions using Skype/internet useful, and many readily likened the type of use proposed to Frank in the second scenario to their existing use of personal computers/tablets. Reactions to purely virtual relationships tended to be guarded. In the OP groups particularly, many participants were not convinced that virtual relationships were a substitute for what they termed ‘real’ relationships.

Undoubtedly, older people who do not or cannot use the internet will face increasing social exclusion in the future. It might, therefore, be useful for a robot to encourage the use of the internet for purposes that connect older people to social institutions and services as well as maintain and form new, more personal relationships. In this respect, the participants’ distinction between ‘real’ and virtual interactions has less and less application. One French participant—whose views are not representative of participants at large—was puzzled by the attitude of others in his group to the use of the internet. He said:

37. The word virtual is used, and is used when a screen is involved. When you’re on the phone with someone, the word virtual is never used to describe it. [Others interject: But it’s the same thing] Yes it is, so why is it that we don’t use the word ‘virtual’ when telephones are involved but do when there’s a screen, whereas with a screen we actually add something and can see the person we’re talking to? I’ve been wondering about this for some time, I don’t understand why. (MADoPA IC1 P1 FRANK)

Discussion

Insofar as the scenarios were designed to indicate potential tensions between the values we had already proposed on philosophical grounds, our data did not suggest that the value framework required significant addition or revision. Participants tended to recognise the importance of all of the values proposed without apparently calling attention to entirely new ones. They also tended to prioritise autonomy over all but safety where there was a risk of serious harm. Here we discuss a selection of the results reported above before looking at how the value framework might influence the design of social robots and how they are introduced into the homes of older people.

The value framework—the six values plus a weighting of their relative importance—could be interpreted as supporting autonomous decisions with ill effects on informal carers or friends or even state welfare services. We have commented elsewhere (Draper and Sorell 2013) on the ethical difficulties that may arise when telecare technology can detect falls and older users disable this equipment. Falls undoubtedly create demands on health services and can lead to longer term difficulties and health problems for older people—even those who up to the point of falling were fairly independent. Where these demands are made on resources in welfare states, it may be reasonable to ask or even require citizens to minimise these demands. This may mean not using services frivolously, taking precautions against infection, or adhering to advice and treatment regimes. In the same way, people might be asked to minimise demands on informal carers. If someone is dependent on the good will of others for help, this provides them with a reason not to turn to them for help unnecessarily. Arguably, the more dependent one is, the greater the need for cooperation that prevents greater dependence, or dependence in emergencies, on informal carers. Co-operating with a robot care regime may be a case in point, but unless the care-robot can provide everything provided by informal carers, the interests of informal carers should play some part in negotiations leading to its installation into the older person’s home. The participants tended to agree with that line of thought. However, the interests that informal carers believed were relevant were only those directly related to the care they provided.

Different considerations were thought relevant in different circumstances. Participants did not believe informal carers had the right to frustrate older people’s life-style choices, even if they cost money and threatened carers’ inheritances. The issue was posed clearly by the fourth scenario, in which Louis was involved in online gambling, with all its risks of increasing dependence. Getting into debt was generally viewed as socially irresponsible, justifying restricted gambling stakes (e.g. by imposing the ‘affordability’ ceiling). If the robot is the medium through which socially irresponsible behaviour is facilitated, then modifying the programming to prevent such behaviour may be acceptable. On the other hand, the ‘good will’ that should motivate the provision of informal care might not be compatible with limiting spending that erodes an inheritance. In all cases, however, it is important to bear in mind that limitations of this kind are not confined to older people; they apply to anyone who is autonomous but dependent—regardless of age.

Some FC participants complained that they did not have sufficient time with their clients, and in a different context they explained that they sometimes had to spend more time than expected with one client, which made them late for an appointment with another. Additional time pressures created by care-robot monitoring of carers may generate hostility to the robot unless this information is also used to improve FCs’ working conditions. Employing the robot to “police” care may discourage poor care practices, with benefits to older people. But it can also intrude on the privacy of the older person. Striking a balance between monitoring for good practice and privacy may be difficult where care involves nudity or captures private conversations. Recording would almost certainly require the consent of the older person, except where there were suspicions of both poor care and intimidation. The requirement that care/medical interactions be video recorded—and kept as part of a patient’s medical record—is already being considered in some jurisdictions. Recordings could provide a definitive account of an interaction in the event of legal challenge, disciplinary action or unforeseen outcome. Such a policy raises complex data protection and access issues. For instance, access to recordings might only be granted for audit purposes or where there were suspicions about misconduct.

In our view, using the robot to police care would not violate the privacy of formal carers (see footnote 7). All care compromises patient/client privacy to some degree. Ensuring that appropriate care is provided may necessitate careful record-keeping to facilitate a smooth hand-over between carers, and so that care can be audited and improved. Human carers themselves see aspects of a person’s life that they would not otherwise witness. Providing good care may depend in part on remembering these details, but even if it did not, carers could not be required to forget them. Humans cannot will a loss of memory. Carers may be required to recount their experiences to others, or they may be required not to disclose them. At other times disclosure may be selective or heavily edited. The robot that records all its interactions with a user is in some senses similar to a human carer with a memory and does not therefore raise any greater concerns for privacy than human care does. The privacy concerns are raised by access to information. In this respect a gossipy and judgemental human carer may be more invasive than a care-robot.

The practice of having carers explore recorded information about the behaviour of the older person with that older person could be a useful way of resolving obstacles to adherence to a care regime (see quotation 32). However, the comment supplied by ZUYD FC1 P2 (quotation 33) points to a different and perhaps questionable reason for accessing this record, namely to check the veracity of the patient/client. Where the robot collects data to enable a willing householder to be more independent, the data collection does not violate privacy. And more data may be therapeutically better than less. Consider, for instance, a robot that monitors whether medication has been taken and issues a reminder when it is not taken. Such a robot might be more enabling than a robot that acts like an alarm clock and simply reports that now is the specified time to take the medication. In the former case, the user has the opportunity to remember for herself to take her medication; in the latter she may come to rely on the alarm rather than her own memory. She may be helped to live independently, but she may also become increasingly dependent on the robot to provide the reminder as her capacity to use her own memory is eroded.

To act as an alarm clock the robot does not need to collect personal information. To issue the enabling reminder, by contrast, the robot needs to monitor what the user is doing. More information is stored (not just what medication should be taken and when, but also whether it has been taken), but with an enabling purpose. Assuming this information is only accessible to the older person (in the form of the enabling reminder to which she has agreed) her privacy is not violated. With the alarm-clock design, however, there may be a concern that the person will take the medication twice—once of her own volition and then again when the robot issues the reminder. It may therefore be argued that the robot needs to monitor whether the medication is taken in order to cancel an unnecessary reminder. Here safety concerns begin to surface that appear to conflict with the protection of privacy. Another issue might be that, like a human carer, the robot should be able to monitor medication adherence so as to alert someone if non-adherence reaches a dangerous level. This would be consistent with the position outlined above with regard to falls: the older person may want a higher threshold for intervention than carers are comfortable with, but a default position that harmonises with the views of our participants is that if the threat to safety is significant, the robot should raise an alarm.
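
The contrast between the two designs, and the safety default just described, can be made concrete in a short sketch. The following Python fragment is purely illustrative: the class and method names (MedicationSupport, dose_due and so on) are our own hypothetical choices rather than part of any existing system, and the missed-dose threshold is assumed to have been agreed with the older person in advance.

```python
class MedicationSupport:
    """Illustrative sketch of the 'enabling reminder' design discussed above."""

    def __init__(self, alarm_threshold: int = 3):
        # Threshold for escalation, agreed with the older person in advance.
        self.alarm_threshold = alarm_threshold
        self.consecutive_missed_doses = 0
        self.log: list[str] = []

    def dose_due(self, dose_already_taken: bool) -> None:
        """Called at each scheduled dose time with the robot's observation."""
        if dose_already_taken:
            # The user remembered for herself: no reminder is issued, so there is
            # no risk of prompting an unnecessary second dose.
            self.consecutive_missed_doses = 0
            self.log.append("dose taken unprompted")
            return

        # Enabling reminder: issued only because the dose has not been observed.
        self.log.append("reminder issued")
        self.consecutive_missed_doses += 1

        if self.consecutive_missed_doses >= self.alarm_threshold:
            # Safety default consistent with the position on falls outlined above:
            # escalate only when non-adherence becomes significant.
            self.log.append("alert raised to agreed contact")
```

On this sketch the robot stores more information than an alarm clock would (namely, whether each dose was observed), but that information is used only to make the reminder enabling and to trigger the agreed safety default; where the threshold is set, and who is contacted, are matters for the prior agreement discussed below.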

Programming a robot to alert someone if medication is not taken might violate privacy. It might deprive the user of the liberty—available to all other competent adults—of not complying with a care regime. These concerns may nevertheless be outweighed by considerations of harm. The unauthorized but perhaps justified transmission of information to a third party adds to any violation of privacy. Some loss of privacy is the inevitable result of being cared for—by a robot or a human alike. On the other hand, it may be an avoidable violation of privacy for healthcare professionals to have access to information stored by the robot (for the purpose of routinely monitoring adherence and honest reporting of adherence—the kind of use suggested by ZUYD FC1 P2 in quotation 33). Patients can be inaccurate or dishonest in reporting their adherence to a care regime as well as their intake of alcohol and calories etc. (Buetow et al. 2009). One response to this is for practitioners to be sceptical about patient reporting and adjust their judgements accordingly. This scepticism is caricatured by the TV character Dr Gregory House, whose approach is encapsulated by statements such as:

  • “I don’t ask why patients lie, I just assume they all do”;

  • “It’s a basic truth of the human condition that everybody lies. The only variable is about what”;

  • “When you want to know the truth about someone, that someone is probably the last person you should ask”.

It could be argued that patient dishonesty should not be encouraged and that therefore programming the robot so as to prevent a sceptical Dr House from interrogating its data is to collude with patient dishonesty. It might be argued that even if other patients are able to get away with being dishonest, that does not mean that older patients with a robot should be able to, and the relevant difference is not age but the presence of the robot.

On the other hand, this argument may overlook an important difference between robotic and human carers and companions: robots are not moral agents. One of the reasons privacy is compromised when one takes a carer or companion (or even a servant) into one’s home is that this person can neither avoid exposure to personal information, nor avoid making sense of the information to which they are exposed. There is a shared understanding of the potential normative implications of a carer’s seeing an unexpected person sharing the householder’s bed, overhearing a phone call to the betting office or alcohol retailer, or reading aloud a letter from a solicitor about changes to a will. If confronted by a wife or child asking questions about these events, a human servant/carer/companion has to make a normative judgement about the relative importance of infidelity, gambling, alcohol use and disinheriting a family member compared with some prior agreement to maintain confidentiality. For the robot there is no such tension. This could be regarded by some older people as a potential advantage of having a robot as a carer. The robot is not nosey—it has no personal interest in finding certain things out; to it, items of information are merely data. The robot does not secretly or otherwise pass judgement on those it serves. In this respect an all-seeing robot may be less privacy-violating than a human carer who is present less often.

Giving older people control over who can access their personal data from the robot is the best way of protecting their privacy, and also conforms to the norms for data protection. This means that healthcare professionals should not be able to check the veracity of a patient’s reported adherence without that patient’s consent, with the result that patients with a robot have the same scope for deception (or “cheat[ing]”) as patients who do not need robotic care.
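
A minimal sketch of what such control could look like in practice follows. The names used (RobotDataStore, grant_access and so on) are hypothetical, and the example simply assumes that every read of stored data is checked against a consent list maintained by the older person.

```python
class RobotDataStore:
    """Sketch: stored records are released only with the older person's consent."""

    def __init__(self):
        self._records: dict[str, list[str]] = {}   # data category -> entries
        self._consent: dict[str, set[str]] = {}    # data category -> permitted requesters

    def grant_access(self, requester: str, category: str) -> None:
        self._consent.setdefault(category, set()).add(requester)

    def revoke_access(self, requester: str, category: str) -> None:
        self._consent.get(category, set()).discard(requester)

    def record(self, category: str, entry: str) -> None:
        self._records.setdefault(category, []).append(entry)

    def read(self, requester: str, category: str) -> list[str]:
        # A healthcare professional cannot, for instance, check reported adherence
        # against the robot's records unless the older person has granted access.
        if requester not in self._consent.get(category, set()):
            raise PermissionError(f"{requester} has no consent to read '{category}' data")
        return list(self._records.get(category, []))
```

Under an arrangement of this kind the default is non-disclosure, which matches the conclusion that patients with a robot should retain the same scope for selective reporting as patients without one.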

Operationalizing the value framework

The results from the focus groups seem to call not for a revision of the value framework arrived at in Phase One of the ACCOMPANY project, but for the operationalization of its emphasis on autonomy. Operationalization includes, crucially, the processes by which the robot is introduced to the user’s home in the first place. Our participants seemed independently to arrive at our view that it is reasonable to hold users to an agreement made in advance of the robot being introduced. The agreement should set out the purposes that the introducing authority (e.g. a local housing, council or health authority) had in offering a care-robot, and we are assuming that one of these purposes is—as in the ACCOMPANY design brief—the promotion of autonomous and independent living. If this is right, then the agreement needs to reflect processes in which potential users of the robot are: (a) informed of the capacities of the robot; (b) consulted about which of these capacities might be useful to them; and, (c) informed of the options to refuse or withdraw co-operation with the robot in its exercise of capacities that the older person finds useful. The options in (c) might themselves be activated after a trial period without those options, just to make the older person aware of what living with the robot might be like and how useful it could be. Similarly, there might need to be a trial process of withdrawal of the robot, so that the older person can experience what life without the robot would be like if it were withdrawn. Ideally, potential users would be seen individually and face-to-face, with discussion encouraged. No less seems reasonable when so expensive a piece of equipment, and such an unfamiliar one, is introduced for long-term use in someone’s home.

In addition to the older person’s own interactions with the robot, the agreement would have to take into account data-retention by the robot and retransmission of the data to: (i) the robot-introducing authority; (ii) formal carers, including healthcare professionals; and, (iii) informal carers and family members. In keeping with the value framework, the older person should normally be given the opportunity to veto data-sharing with certain groups listed, or certain members of groups listed.
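
The veto described here could be represented, again as a hypothetical sketch rather than a specification, by a sharing policy that records both group-level and individual-level vetoes and is consulted before any retransmission.

```python
from dataclasses import dataclass, field


@dataclass
class SharingPolicy:
    """Sketch of the data-sharing vetoes an agreement might record."""
    # Anticipated recipient groups: the introducing authority, formal carers
    # (including healthcare professionals), and informal carers or family members.
    vetoed_groups: set = field(default_factory=set)
    vetoed_individuals: set = field(default_factory=set)

    def veto_group(self, group: str) -> None:
        self.vetoed_groups.add(group)

    def veto_individual(self, name: str) -> None:
        self.vetoed_individuals.add(name)

    def may_share_with(self, name: str, group: str) -> bool:
        """Checked before the robot retransmits any data to a recipient."""
        return group not in self.vetoed_groups and name not in self.vetoed_individuals


# Hypothetical example: the older person vetoes sharing with informal carers as a
# group, and with one named formal carer.
policy = SharingPolicy()
policy.veto_group("informal carers")
policy.veto_individual("carer A")
assert not policy.may_share_with("daughter", "informal carers")
assert policy.may_share_with("district nurse", "formal carers")
```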

Beyond any trial period with the robot, a pattern of non-adherence to the agreement could be allowed to develop up to a threshold where an interview about removing the robot was triggered. Allowing the older person a chance to see and discuss the evidence of non-adherence might be important to a subsequent decision on their part to co-operate with the aims of enablement more wholeheartedly, or it might prompt a reconsideration of the suitability of independent living, or it might call attention to defects in the original agreement that need to be remedied. In any of these cases, the user has an autonomous choice to make.
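
A pattern-and-threshold rule of this kind could be operationalized very simply. In the sketch below the names, the threshold and the rolling window are chosen only for illustration; the monitor does nothing except flag when a review interview should be offered.

```python
from collections import deque
from datetime import datetime, timedelta


class AdherenceMonitor:
    """Sketch: flags when non-adherence to the agreement warrants a review interview."""

    def __init__(self, threshold: int = 5, window_days: int = 30):
        self.threshold = threshold               # departures from the agreement
        self.window = timedelta(days=window_days)
        self.events: deque = deque()             # timestamps of recorded departures

    def record_non_adherence(self, when: datetime) -> bool:
        """Returns True when the agreed threshold is reached within the rolling window."""
        self.events.append(when)
        # Discard departures that fall outside the rolling window.
        while self.events and when - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) >= self.threshold
```

The flag triggers only a conversation in which the older person can see and discuss the evidence of non-adherence; it does not itself remove or reconfigure the robot.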

The process so far outlined does not mention the possibility of a user’s simply turning off the robot’s monitoring functions, or the possibility of overriding an emergency alert. These possibilities are relevant to the older person’s control over information about themselves, including information about falls. Control of such information is greater for unaccompanied older people than for accompanied ones, and loss of control might discourage some older people from opting for a robot companion. Perhaps there is a compromise available where the older person has the option of disrupting monitoring for short, or at least clearly defined, periods. This might appeal to older users who wanted to have very private conversations or to engage in some other activity that they regarded as especially private.
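
The compromise suggested here, pausing monitoring only for short or clearly defined periods, could be realised along the following lines. The sketch is again hypothetical (the names and the maximum pause length are our own choices for illustration) and assumes that every pause is time-limited and expires automatically.

```python
from datetime import datetime, timedelta
from typing import Optional


class PrivacyPause:
    """Sketch: monitoring can be suspended, but only for a bounded, user-chosen period."""

    MAX_PAUSE = timedelta(hours=2)   # upper bound fixed in the prior agreement

    def __init__(self):
        self._paused_until: Optional[datetime] = None

    def request_pause(self, now: datetime, duration: timedelta) -> None:
        # The user chooses the length of the pause, up to the agreed maximum.
        self._paused_until = now + min(duration, self.MAX_PAUSE)

    def monitoring_active(self, now: datetime) -> bool:
        # Monitoring resumes automatically when the pause expires; it cannot be
        # switched off indefinitely without renegotiating the agreement.
        return self._paused_until is None or now >= self._paused_until
```

Bounding the pause in this way preserves most of the policing and safety functions of monitoring while giving the older person control over clearly defined private periods.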

Consideration does, however, have to be given to a potential undesirable side-effect of this measure to protect privacy. This is that an older person may be coerced into turning off the monitoring facility by a carer wishing to conceal poor care (i.e. where a robot is also being used to police the standard of care provided). Mechanisms already exist for steps to be taken to protect vulnerable adults where there is suspicion of coercion, and many older people already experience sub-standard care and suffer abuse at the hands of their so-called carers that goes unreported (see footnote 8). So an older person might not be worse off with a facility to disable monitoring than he or she would have been without the robot. A decision has to be made, therefore, about whether the potential policing capacity of the robot is sufficiently great to outweigh the threat to privacy of not being able to disable the monitoring facility.

Although many other matters could in principle be made subject to an agreement, including the threshold that had to be reached for some monitored mishap to count as a genuine emergency, it is not necessary to go much further. The guiding thought is that the process for introducing the robot, as well as the robot itself, has to be sensitive to the wishes of the older person within certain limits. If they are not, both the design of the robot and its method of introduction into a household are ethically flawed.

Operationalizing autonomy is not only a matter of the agreement that lays down the ground rules. It is also a matter of what needs to be done to the robot in design terms. We offer one example here, which relates to concerns that some kinds of robots are infantilising. Such concerns (discussed and countered by Sharkey and Sharkey 2012) are mainly to do with robots that look like children’s toys. Our data suggested another potentially infantilising presentation of enablement to older people. This is what we describe as the ‘let’s do it together’ method of coaxing older people to try new things or engage with enablement. This type of infantilisation does not consist merely of a tone of voice that may be used by the robot (the sing-song tone often used by adults to address a child). ‘Let’s do it together’ coaxing is infantilising because it may fail to engage with the older person’s reasons for not wanting to perform an action or behave in a particular kind of way. It may indeed assume an absence of reasons for not co-operating, questionably positing instead a kind of older person’s stubbornness corresponding to childish refusals to co-operate. Adult-to-adult persuasion operates under a norm of giving reasons to a person which, if accepted, justify the choice of co-operating and make the co-operation autonomous. In seeking to persuade someone, one tries to identify and take seriously any reasons for points of disagreement. One does not just assume that the obstacle is stubbornness or timidity born of having to try something new without support. Only if that sort of timidity is operating is the ‘let’s do it together’ strategy not infantilising. And the conclusion that the older person is timid ought to (morally ought to) be reached only after an attempt to identify and articulate reasons for co-operation or non-co-operation.

There may be occasions where ‘doing it together’ is unobjectionable. If someone feels unable to walk in the park because they are afraid of tripping, then offering to walk with them and lend a supportive arm addresses their fear. It takes it seriously and offers a potential solution to a problem that is reducing the choices available to the older person. But if, on the other hand, someone says that they do not care for walking in the rain, offering to get wet with them misses the point. ‘Let’s do it together’ may suggest that, like the child, all the older person requires to change their mind is an encouraging presence while they get on and do something they really do not want to do. When adults form supportive pairs or groups those banding together all want the same thing and feel that they are offering mutual, not patronising, support to achieving an end in which all share; they are doing together what they would struggle to achieve alone. It is a form of solidarity. ‘Let’s do it together’, on the other hand, may be an offer the only aim of which is getting the other person to do something they do not want to do. It is often something the person doing the offering is already able to do effortlessly—it is not necessarily a declaration of solidarity. Designers and programmers therefore need to be aware that in this respect, robotic efforts of the ‘let’s do it together’ variety might always be patronising since the robot is not capable of appreciating the end, whatever it is programmed to say by way of encouragement along the way.

Limitations

We have reflected on how the data gained from three different types of participant (older people, and informal and formal carers of older people) enrich our understanding of the values proposed in Sorell and Draper (2014). In addition to the limitations identified in previous papers (see Draper et al. 2014b; Bedaf et al. 2016), we note that the data we report here were prompted by specific scenarios. The scenarios were generated specifically to emphasise potential tensions between the values we had already identified, and responses confirmed that participants appealed to similar values when addressing the scenarios. On the other hand, focussing on scenarios designed to elicit responses to these tensions will almost certainly have biased the results. We cannot be certain that participants would not have volunteered additional values in response to different types of scenarios, e.g. those in which the robot was programmed to express affection or affirm the qualities of the user. Moreover, we were guided by the remit of the ACCOMPANY project and therefore did not include cases where the robot needed to meet the needs of multiple users in the same home. Only more empirical research will reveal whether designers and those responsible for policy and practice should consider additional guiding values for the design of social robots. This empirical work will need to overcome the difficulties of getting lay participants to engage with lists or frameworks of values in the abstract. Moreover, the qualitative work necessary to achieve this will then need to be followed up with quantitative research to determine the extent to which its findings can be generalised.

Conclusions

Our findings generally supported the priority of autonomy where it conflicts with other values, but suggested that safety issues may be more significant than we had previously supposed. That said, the participants’ concerns were subtle. The robot itself was not regarded as dangerous. Rather, concerns seemed to centre on how safe it was to replace human judgement with robotic programming. Some of the concerns were highly paternalistic, which may reflect general attitudes to older people, as well as concerns about the potential deficits of robotic care.

Our findings echo the concern expressed more widely that robots should not be used to replace human–human interaction. Our findings also reinforce concerns that robot care may increase social exclusion. Efforts must be made, therefore, to use robots to increase the range of interactions of users outside the home.

Our findings suggest that a care-robot designed to be persuasive may be preferable to one designed to be persistent. Whilst the potential tirelessness of the robot overcomes the challenges to patience posed by human–human interaction, persistence can be associated with coercion, which acts against autonomy; persuasion, by contrast, facilitates autonomy. On the other hand, we have identified ways in which ostensibly persuasive techniques of robot-assisted care can be infantilising. Acceptable enablement is constrained by the need to change the behaviour of users in some cases whilst continuing to acknowledge their capacity for autonomous decision-making.

The perceived role of the robot is crucial to determining the norms against which the behaviour of the robot is judged. The greater the variety of potential interactions between the older person and the robot, the greater is the potential for confusion about the appropriate norms to apply. This potential confusion may also encourage ‘slippage’ in that the older person—and others involved in supporting his or her independent living—may be inclined to manipulate the norms to de-emphasise enablement and independence. Devices simpler than companion robots might pre-empt this problem, but at the cost of eliminating “presence” in the life of older persons (in the sense of “presence” used in Sorell and Draper 2014).

Concerns about the potential of robots to erode privacy may extend beyond the user to the human-carers of that older person. Some formal carers raised the issue of the robot being used to ‘spy’ on them, whilst other formal carers did seem willing to use the robot to check up on, as well as to reinforce, adherence to treatment regimes. All forms of human care are likely to intrude to some extent on the privacy of the recipient of that care. Robots may be less intrusive by comparison. As for adherence, it does not seem acceptable to use the robot’s data-recording capacities to second guess the older person’s own testimonies. The value of the robot’s capacity to retain and share information for the purposes of enablement is best maintained by ensuring that privacy norms are respected and the older person retains control of information that the robot gathers. Consideration of privacy in relation to multiple householders and issues of who ‘owns’ different types of information that the robot may collect were beyond the scope of our study. Further work is undoubtedly needed to classify the information that the robot collects and to establish criteria for legitimate access to and use of different kinds of information. This means taking account of the different kinds of value (commercial and ethical) of information the robot has to collect in order to maintain functioning. The aim of making the development of assistive technology profitable and affordable has to be set against the risks that older people will see no benefit from the commercial value of the data generated about them. This leads us to our final conclusions in relation to the terms under which robots are introduced into the homes of older people.

We have signalled in several places the significance of achieving a shared understanding of the role, capabilities and potential behaviours of the robot. The values we have emphasized will need to be operationalized. One critical stage of the operationalization is the introduction of a robot into someone’s home for the first time. The value framework suggests that this should be a process rather than an event. We have demonstrated that agreements between providers and individuals have to be reached in order for tensions between the values in our framework to be resolved. These agreements cannot depend on generalised information about older users but need to be individualised. Having individualised agreements is in line with the invocation of prior agreements by participants when they tried to resolve tensions raised by the scenarios. Arriving at the right agreement depends on respecting the older person who is going to be subject to it, and ensuring that their autonomy and privacy are not considered less important than those of other adults in similar situations. We have tended to suggest that autonomy overrides other values when there is a conflict, but it is not the only value relevant to care arrangements. Indeed, our participants thought that other values, particularly safety, were sometimes as weighty or even more weighty than autonomy.