At the Brevillier Village retirement community in Erie, Penn., residents have a new and convenient way to unlock doors to enjoy walks and other activities on the community grounds. Residents look at a small device which scans their iris and, if they’re registered as having outdoor access, the door unlocks and they’re free to walk. Iris scans of residents who have been diagnosed with dementia or other serious cognitive impairments will not unlock the doors, keeping them safe inside until someone can accompany them. The process takes just a few seconds and frees residents from the hassle of carrying access cards, remembering passcodes, or finding facility staff whenever they want to leave. Better living through technology?
For some, iris recognition technology brings to mind the dystopian society portrayed in the movie Minority Report, where citizens’ eyes are frequently and continuously scanned for questionable security, surveillance and commercial purposes. Pervasive scanning would raise a host of ethical questions and objections. What counts as consent to intrusive scans and the collection and storage of behavioural information? Should people have a right to opt out of public scanning, and how might that work? What must public and private organizations do to prevent personal information from being stolen or used for unintended or malicious purposes, and what responsibilities do they have when information is stolen?
While these concerns make for great science fiction—and should be kept in mind when pervasive public scanning emerges—they stretch the ethical imagination far beyond what is happening at Brevillier Village. The real issue that should concern us relates to the health, safety and other implications of the broader trend towards replacing human-provided care with technology-based care. Iris scanning at Brevillier Village is just a small example, but as the cost of providing healthcare and other services to an aging population rises, we will see technologies being adopted more often and will need to consider their ethical implications.
A useful approach for assessing new technologies in health and senior care is to ask whether they would improve or harm seniors’ overall well-being, while being mindful of costs to society more broadly. Iris scanning at Brevillier Village appears to contribute to seniors’ well-being in many ways. Safety—especially the safety of those with cognitive impairments—is improved because only those who should have access to the grounds can unlock doors. With cards and codes, there is always a risk that they might fall into the hands of seniors who should not have them. Arguably, the technology also contributes to residents’ autonomy and sense of control. To go where they want, residents need only have their eyes scanned, rather than finding and requesting assistance from staff. And if iris scanning proves cheaper over the long term than staffing, residents and those paying for services would face lower bills.
Harder to measure, but perhaps more important, are the health effects of replacing human interaction with technology. Loneliness and isolation are prevalent among seniors, and they are known to exacerbate mental and physical health conditions. When technology reduces human contact, it can increase loneliness and isolation and worsen health and well-being. While iris scanning would reduce human contact only marginally, as more labour-saving technologies are introduced the frequency and quality of seniors’ interactions with caregivers and others could decline considerably. Each discrete technological change may have little effect on its own, but accumulated changes could produce outcomes that we would not have chosen for ourselves or for seniors.
With an aging population, we will face very strong incentives to adopt labour- and cost-saving technologies. Indeed, there are limits to how much we can pay for, and participate in, providing care. Trade-offs between seniors’ health and well-being and keeping costs manageable may be necessary. Some will deny that such trade-offs exist, but that’s naïve; we should not pretend these are consequence-free decisions. More technology might be the right path, all things considered, but we should be mindful of what we lose in the process and what costs we impose on others when we make those choices. A simple iris scan might increase autonomy and lower healthcare costs, but the price might be a forgone greeting and smile. Is that a price worth paying?
Daniel Munro teaches ethics in the Graduate School of Public and International Affairs at the University of Ottawa. Listen to The Ethics Lab on Ottawa Today with Mark Sutcliffe, Thursdays at 11 EST.