A Legal Guide to Digital Ethics Issues for Attorneys


Awards Edition '23


Modern Cases

2023-08-10

WRITTEN BY:
Prof. Thomas Freeman, Dr. Aaron McKain, and Samson Hall



Karsky, Meagan Compton, Emily Atamov, the Creighton Business Law and Technology Ethics Research Forum and the Institute for Digital Humanity.







As the pace of technological change accelerates, the law is struggling to catch up. What happens to individual privacy in a world where so much data has been collected that our ‘secrets’ are mostly known? Who will be held responsible for civil rights violations committed by machines? And how must our courts adapt as more and more lawsuits challenge algorithmic decisions?







An increasing number of the decisions once made by humans are now made by algorithms: automated processes run by computers. These algorithms, which by default and by design are often highly inaccurate and discriminatory, are increasingly used by prospective employers, landlords, businesses, health providers, police, schools, and government agencies.







People too often treat algorithms like calculators and their decisions like solutions to math problems. When a machine is tasked with something objective, like adding two numbers, we can reliably trust and use the answers it produces. However, machines are now being tasked with complex decision making that cannot simply be reduced to binary code (i.e., algorithmic error rates); that fails to understand the “intersectional” nature of our fellow citizens’ identities (i.e., algorithmic discrimination); that unjustly uses prior data to reinforce old stereotypes and systemic disadvantages (i.e., Algorithmic Jim Crow); or that fails to holistically judge and evaluate an individual and their circumstances (as employees, defendants, patients, and suspects). Here are a few examples of problems with algorithmic decision making.
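To make the error-rate and discrimination problems concrete, here is a minimal, purely hypothetical sketch (with invented records and group names, not data from any real system) showing how a screening tool's single overall accuracy figure can hide sharply different false-positive rates across groups:

```python
# Hypothetical illustration: an automated screening tool's single overall
# accuracy figure can conceal very different error rates across groups.
# All records, groups, and outcomes below are invented for demonstration.

records = [
    # (group, actually_risky, flagged_as_risky)  -- 1 = yes, 0 = no
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 0, 0),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
    ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 0),
]

def false_positive_rate(rows):
    """Share of people who were not risky but were flagged anyway."""
    negatives = [r for r in rows if r[1] == 0]
    return sum(1 for r in negatives if r[2] == 1) / len(negatives)

accuracy = sum(1 for _, actual, flagged in records if actual == flagged) / len(records)
print(f"Overall accuracy: {accuracy:.0%}")  # one number, looks like a solved math problem

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(f"{group} false-positive rate: {false_positive_rate(rows):.0%}")
```

In this invented example the aggregate accuracy looks tolerable, while one group is wrongly flagged roughly four times as often as the other.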







Algorithms in Law Enforcement







Many police departments use predictive policing algorithms to forecast where crime is likely to occur. These systems use data about where crimes occurred in the past to decide how to deploy police forces, labeling certain neighborhoods as “hotspots” or red zones. However, these neighborhoods are typically minority neighborhoods that are already subject to unwarranted and heavy police presence. The data fed to predictive policing algorithms is inevitably influenced by past issues with racist officers, practices, and in some cases entire departments, meaning that officers and departments with good intentions can become unwitting instruments of racism and bias. A predictive policing algorithm may send officers to a predominantly minority neighborhood expecting to find trouble simply because of past racism that was, over time, incorporated into the department’s policing practices.
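As a purely illustrative sketch (not modeled on any vendor's actual product), the short simulation below shows the feedback loop described above: if patrols are sent wherever the most incidents were recorded, and patrols in turn generate new recorded incidents, a neighborhood that starts out over-policed stays labeled the “hotspot” indefinitely, even though the simulation assumes the true crime rate is identical everywhere:

```python
# Hypothetical simulation of a predictive-policing feedback loop.
# Not based on any real system; all neighborhoods and numbers are invented.

# Recorded incidents per neighborhood. "C" starts slightly higher -- which may
# reflect heavier past enforcement rather than more underlying crime.
recorded = {"A": 10, "B": 10, "C": 14}

PATROLS_PER_YEAR = 30
INCIDENTS_SEEN_PER_PATROL = 1  # assume the true crime rate is equal everywhere

for year in range(1, 6):
    # "Predictive" step: send this year's patrols to the current top hotspot.
    hotspot = max(recorded, key=recorded.get)
    # Patrols can only record what they are present to observe, so only the
    # hotspot's count grows -- regardless of where crime actually happens.
    recorded[hotspot] += PATROLS_PER_YEAR * INCIDENTS_SEEN_PER_PATROL
    print(f"Year {year}: hotspot={hotspot}, recorded incidents={recorded}")
```

The point of the toy model is only that yesterday's recorded data, not today's actual behavior, ends up driving tomorrow's deployment.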







Algorithms in Health Care







Access to healthcare is a fundamental, and often life-or-death, human need. And yet the same issues that plague algorithmic decision making in other contexts—error rates, systemic discrimination, re-purposing old data sets that have already failed a generation of marginalized citizens, and a lack of holistic understanding by the machine—are now emerging in health care as doctors and hospitals increasingly turn to automated diagnostic and treatment protocols. Tragically, algorithms seem to be “re-coding” historical discrimination as they are used to make medical decisions. Racial biases are being programmed into medical devices, programs, and algorithms, putting human lives in danger.
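One way this “re-coding” can happen is through a biased proxy label. The sketch below is a purely hypothetical example (the threshold, patients, groups, and dollar figures are all invented): a care-management tool that treats past healthcare spending as its measure of medical need will see patients on whom the system historically spent less as “healthier,” even when their actual need is the same.

```python
# Hypothetical illustration of a biased proxy label in a care-management tool.
# The "need" signal available to the algorithm is past *spending*, but
# historically less was spent on group_b patients despite equal medical need.
# All patients, groups, and dollar amounts are invented.

patients = [
    # (group, true_medical_need, past_spending_usd)
    ("group_a", "high", 12000), ("group_a", "high", 11000), ("group_a", "low", 3000),
    ("group_b", "high",  6000), ("group_b", "high",  5500), ("group_b", "low", 2500),
]

SPENDING_THRESHOLD = 8000  # flag patients for extra care if the proxy exceeds this

for group, need, spending in patients:
    enrolled = spending > SPENDING_THRESHOLD
    print(f"{group}: true need={need:<4} past spending=${spending:>6,} -> "
          f"{'enrolled in extra care' if enrolled else 'not enrolled'}")

# High-need patients in group_b are missed: the proxy (spending) reflects
# historical under-treatment, not actual medical need.
```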







Algorithms in Employment







Companies are increasingly using artificial intelligence both to determine which individuals see job postings and to evaluate candidates once they apply.