1364:(or table of confusion). Explainable AI to detect algorithmic bias is a suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called "conducting an AI audit", where the "auditor" is an algorithm that goes through the AI model and the training data to identify biases. Ensuring that an AI tool such as a classifier is free from bias is more difficult than just removing the sensitive information from its input signals, because such information is typically implicit in other signals. For example, the hobbies, sports and schools attended by a job candidate might reveal their gender to the software, even when this is removed from the analysis. Solutions to this problem involve ensuring that the intelligent agent has no information that could be used to reconstruct protected and sensitive information about the subject, as first demonstrated in work where a deep learning network was simultaneously trained to learn a task while at the same time being completely agnostic about the protected feature. A simpler method was proposed in the context of word embeddings, and involves removing information that is correlated with the protected characteristic.
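The word-embedding method lends itself to a compact sketch: estimate a direction in the embedding space that encodes the protected characteristic, then project that component out of every vector. The following is a minimal illustration, with random placeholder vectors standing in for trained embeddings (the function names are invented for this sketch):

```python
import numpy as np

def protected_direction(pairs):
    # Estimate the protected-attribute direction from definitional pairs,
    # e.g. (he, she); real systems average over many such pairs.
    d = np.mean([a - b for a, b in pairs], axis=0)
    return d / np.linalg.norm(d)

def debias(vector, direction):
    # Remove the component of the embedding that lies along the
    # protected direction, leaving the rest of the vector intact.
    return vector - np.dot(vector, direction) * direction

rng = np.random.default_rng(0)
he, she, engineer = (rng.normal(size=50) for _ in range(3))

d = protected_direction([(he, she)])
print(np.dot(debias(engineer, d), d))  # ~0.0: no linear trace of the attribute remains
```

Linear projection of this kind removes only linearly encoded information; correlated signals can survive in nonlinear combinations of features, which is one reason audits remain necessary even after debiasing.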
483:
474:. It has also arisen in criminal justice, healthcare, and hiring, compounding existing racial, socioeconomic, and gender biases. The relative inability of facial recognition technology to accurately identify darker-skinned faces has been linked to multiple wrongful arrests of black men, an issue stemming from imbalanced datasets. Problems in understanding, researching, and discovering algorithmic bias persist due to the proprietary nature of algorithms, which are typically treated as trade secrets. Even when full transparency is provided, the complexity of certain algorithms poses a barrier to understanding their functioning. Furthermore, algorithms may change, or respond to input or output in ways that cannot be anticipated or easily reproduced for analysis. In many cases, even within a single website or application, there is no single "algorithm" to examine, but a network of many interrelated programs and data inputs, even between users of the same service.
1181:
incredibly diverse, fall within a large spectrum, and can be unique to each individual. People's identities can vary based on the specific types of disability they experience, how they use assistive technologies, and who they support. The high level of variability across people's experiences greatly personalizes how a disability can manifest. Overlapping identities and intersectional experiences are excluded from statistics and datasets, and are hence underrepresented or nonexistent in training data. Machine learning models are therefore trained inequitably, and artificial intelligence systems perpetuate more algorithmic bias. For example, if people with speech impairments are not included in training voice control features and smart AI assistants, they are unable to use those features, or the responses they receive from a Google Home or Alexa are extremely poor.
1085:
categories rather than specific subsets of categories. For example, posts denouncing "Muslims" would be blocked, while posts denouncing "Radical
Muslims" would be allowed. An unanticipated outcome of the algorithm is to allow hate speech against black children, because they denounce the "children" subset of blacks, rather than "all blacks", whereas "all white men" would trigger a block, because whites and males are not considered subsets. Facebook was also found to allow ad purchasers to target "Jew haters" as a category of users, which the company said was an inadvertent outcome of algorithms used in assessing and categorizing data. The company's design also allowed ad buyers to block African-Americans from seeing housing ads.
851:. The designers had access to legal expertise beyond the end users in immigration offices, whose understanding of both software and immigration law would likely have been unsophisticated. The agents administering the questions relied entirely on the software, which excluded alternative pathways to citizenship, and continued to use the software even after new case law and legal interpretations had rendered the algorithm outdated. As a result of designing an algorithm for users assumed to be legally savvy about immigration law, the software's algorithm indirectly led to bias in favor of applicants who fit a very narrow set of legal criteria set by the algorithm, rather than the broader criteria of British immigration law.
798:, which compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to flag non-native speakers of English than native speakers, as the latter group may be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms. Because the technical constraints of the software make it easier for native speakers to evade detection, this creates a scenario where Turnitin identifies non-native speakers of English as plagiarists while allowing more native speakers to evade detection.
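The mechanism behind this disparity can be illustrated with a simple n-gram overlap score (not Turnitin's actual, proprietary method): a submission is scored by the share of its word 5-grams found in a source text, so changing even a few individual words breaks long matches and sharply lowers the score.

```python
# Toy string-overlap plagiarism score, for illustration only.
def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    sub = ngrams(submission, n)
    return len(sub & ngrams(source, n)) / max(len(sub), 1)

source = "the quick brown fox jumps over the lazy dog near the quiet river bank"
copied = source
paraphrased = "the fast brown fox jumps over the idle dog near the quiet stream bank"

print(overlap_score(copied, source))       # 1.0: verbatim copy
print(overlap_score(paraphrased, source))  # far lower after changing three words
```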
718:
current large language models, as they are predominantly trained on
English-language data, often present Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.
56:
672:, which have co-created a working group named Fairness, Accountability, and Transparency in Machine Learning. Ideas from Google have included community groups that patrol the outcomes of algorithms and vote to control or restrict outputs they deem to have negative consequences. In recent years, the study of the Fairness, Accountability, and Transparency (FAT) of algorithms has emerged as its own interdisciplinary research area with an annual conference called FAccT. Critics have suggested that FAT initiatives cannot serve effectively as independent watchdogs when many are funded by corporations building the systems being studied.
700:
resources to help patients with complex health needs. This introduced bias because Black patients have lower costs, even when they are just as unhealthy as White patients.
Solutions to the "label choice bias" aim to match the actual target (what the algorithm is predicting) more closely to the ideal target (what researchers want the algorithm to predict), so for the prior example, instead of predicting cost, researchers would focus on the variable of healthcare needs which is rather more significant. Adjusting the target led to almost double the number of Black patients being selected for the program.
818:
more students were likely to request a residency alongside their partners. The process called for each applicant to provide a list of preferences for placement across the US, which was then sorted and assigned when a hospital and an applicant both agreed to a match. In the case of married couples where both sought residencies, the algorithm weighed the location choices of the higher-rated partner first. The result was a frequent assignment of highly preferred schools to the first partner and lower-preferred schools to the second partner, rather than sorting for compromises in placement preference.
1155:
this, some trans Uber drivers' accounts were suspended, which cost them fares and potentially their jobs, because the facial recognition software had difficulty recognizing the faces of drivers who were transitioning. Although including trans individuals in training sets for machine learning models would appear to solve the issue, an instance of trans YouTube videos collected for use as training data did not receive consent from the trans individuals included in the videos, creating a violation of privacy.
830:
effect would be almost identical to discrimination through the use of direct race or sexual orientation data. In other cases, the algorithm draws conclusions from correlations, without being able to understand those correlations. For example, one triage program gave lower priority to asthmatics who had pneumonia than to asthmatics who did not have pneumonia. The algorithm did this because it simply compared survival rates: asthmatics with pneumonia are actually at the highest risk, but because hospitals have historically given such patients the best and most immediate care for that very reason, their recorded survival rates are high, and the algorithm misread those rates as low risk.
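A short simulation with invented parameters shows how outcome data already shaped by treatment decisions can mislead a model trained only on observed survival:

```python
import numpy as np

# Illustrative simulation of the triage example: asthmatic pneumonia
# patients historically receive aggressive care, so their *observed*
# mortality is low, and a model trained on outcomes alone would rank
# them as low risk. All parameters are invented.
rng = np.random.default_rng(2)
n = 100_000
asthma = rng.random(n) < 0.2
base_death_risk = np.where(asthma, 0.30, 0.10)  # true underlying risk
aggressive_care = asthma                         # hospitals prioritize asthmatics
death_risk = base_death_risk * np.where(aggressive_care, 0.2, 1.0)
died = rng.random(n) < death_risk

# A "risk score" learned purely from observed survival rates inverts reality:
print("observed mortality, asthma   :", died[asthma].mean())   # ~0.06
print("observed mortality, no asthma:", died[~asthma].mean())  # ~0.10
```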
555:
credit scores). Meanwhile, recommendation engines that work by associating users with similar users, or that make use of inferred marketing traits, might rely on inaccurate associations that reflect broad ethnic, gender, socio-economic, or racial stereotypes. Another example comes from determining criteria for what is included and excluded from results. These criteria could present unanticipated outcomes for search results, such as with flight-recommendation software that omits flights that do not follow the sponsoring airline's flight paths. Algorithms may also display an
1279:, where a transparent algorithm might reveal tactics to manipulate search rankings. This makes it difficult for researchers to conduct interviews or analysis to discover how algorithms function. Critics suggest that such secrecy can also obscure possible unethical methods used in producing or processing algorithmic output. Other critics, such as lawyer and activist Katarzyna Szymielewicz, have suggested that the lack of transparency is often disguised as a result of algorithmic complexity, shielding companies from disclosing or investigating its own algorithmic processes.
1223:
result for all people, while fairness defined as "equality of treatment" might explicitly consider differences between individuals. As a result, fairness is sometimes described as being in conflict with the accuracy of a model, suggesting innate tensions between the priorities of social welfare and the priorities of the vendors designing these systems. In response to this tension, researchers have suggested more care to the design and use of systems that draw on potentially biased algorithms, with "fairness" defined for specific applications and contexts.
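A toy example of this tension, using invented base rates: a classifier that is perfectly accurate still fails fairness defined as equality of outcomes whenever the groups' underlying qualification rates differ.

```python
import numpy as np

# Different base rates by construction; the classifier below is perfect.
rng = np.random.default_rng(3)
n = 10_000
group_a = rng.random(n) < 0.5
qualified = np.where(group_a, rng.random(n) < 0.6, rng.random(n) < 0.3)

pred = qualified  # 100% accurate predictions
print("positive rate, group A:", pred[group_a].mean())   # ~0.6
print("positive rate, group B:", pred[~group_a].mean())  # ~0.3: unequal outcomes
```

Satisfying equal outcomes here would require either accepting unqualified applicants from one group or rejecting qualified applicants from the other, which is the accuracy trade-off described above.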
22:
577:
1102:
shown to be limited by the racial diversity of images in its training database; if the majority of photos belong to one race or gender, the software is better at recognizing other members of that race or gender. However, even audits of these image-recognition systems are ethically fraught, and some scholars have suggested the technology's context will always have a disproportionate impact on communities whose actions are over-surveilled. For example, a 2002 analysis of software used to identify individuals in
551:, for how a program assesses and sorts that data. This requires human decisions about how data is categorized, and which data is included or discarded. Some algorithms collect their own data based on human-selected criteria, which can also reflect the bias of human designers. Other algorithms may reinforce stereotypes and preferences as they process and display "relevant" data for human users, for example, by selecting information based on previous choices of a similar user or group of users.
758:
1243:, a process in which "scientific and technical work is made invisible by its own success. When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity. Thus, paradoxically, the more science and technology succeed, the more opaque and obscure they become." Others have critiqued the black box metaphor, suggesting that current algorithms are not one black box, but a network of interconnected ones.
689:. Such ideas may influence or create personal biases within individual designers or programmers. Such prejudices can be explicit and conscious, or implicit and unconscious. Poorly selected input data, or simply data from a biased source, will influence the outcomes created by machines. Encoding pre-existing bias into software can preserve social and institutional bias, and, without correction, could be replicated in all future uses of that algorithm.
1054:, judges were presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime. From 1920 to 1970, the nationality of a criminal's father was a consideration in those risk assessment scores. Today, these scores are shared with judges in Arizona, Colorado, Delaware, Kentucky, Louisiana, Oklahoma, Virginia, Washington, and Wisconsin. An independent investigation by
1185:
is a lack of explicit disability data available for algorithmic systems to interact with. People with disabilities face additional harms and risks with respect to their social support, cost of health insurance, workplace discrimination and other basic necessities upon disclosing their disability status. Algorithms are further exacerbating this gap by recreating the biases that already exist in societal systems and structures.
1107:
facial recognition software most likely accurately identified light-skinned (typically
European) males, with slightly lower accuracy rates for light-skinned females. Dark-skinned males and females were significantly less likely to be accurately identified by facial recognition software. These disparities are attributed to the under-representation of darker-skinned participants in data sets used to develop this software.
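Audits of this kind replace a single headline accuracy figure with disaggregated, per-subgroup rates. A sketch on synthetic data (the error rates are invented to mimic the pattern reported above):

```python
import numpy as np

# Per-subgroup accuracy audit on synthetic data; the point is the
# disaggregated evaluation, not the specific numbers.
rng = np.random.default_rng(4)
n = 8000
dark_skin = rng.random(n) < 0.25
female = rng.random(n) < 0.5
# Synthetic error rates that rise for under-represented subgroups:
err = 0.01 + 0.05 * dark_skin + 0.02 * female + 0.25 * (dark_skin & female)
correct = rng.random(n) > err

for d in (False, True):
    for f in (False, True):
        mask = (dark_skin == d) & (female == f)
        label = f"{'dark' if d else 'light'}-skinned {'female' if f else 'male'}"
        print(f"{label:22s} accuracy: {correct[mask].mean():.3f}")
```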
696:. The program accurately reflected the tenets of the law, which stated that "a man is the father of only his legitimate children, whereas a woman is the mother of all her children, legitimate or not." In its attempt to transfer a particular logic into an algorithmic process, the BNAP inscribed the logic of the British Nationality Act into its algorithm, which would perpetuate it even if the act was eventually repealed.
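The original program was written in Prolog; a hypothetical Python rendering of the quoted rule makes the inscription concrete, since the statute's distinction survives in the code's control flow regardless of the statute's later fate.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    name: str
    sex: str
    legitimate: bool = True
    children: list = field(default_factory=list)

def is_parent(person, child):
    # A direct transcription of the rule quoted above.
    if person.sex == "female":
        return child in person.children          # mother of all her children
    return child in person.children and child.legitimate  # father of legitimate only

mother = Person("M", "female")
father = Person("F", "male")
child = Person("C", "male", legitimate=False)
mother.children.append(child)
father.children.append(child)
print(is_parent(mother, child), is_parent(father, child))  # True False
```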
467:), and in some cases, reliance on algorithms can displace human responsibility for their outcomes. Bias can enter into algorithmic systems as a result of pre-existing cultural, social, or institutional expectations; by how features and labels are chosen; because of technical limitations of their design; or by being used in unanticipated contexts or by audiences who are not considered in the software's initial design.
1421:. However, this approach does not necessarily produce the intended effects. Companies and organizations can share all possible documentation and code, but this does not establish transparency if the audience does not understand the information given. Therefore, the role of an interested critical audience is worth exploring in relation to transparency. Algorithms cannot be held accountable without a critical audience.
1565:, which was intended to guide policymakers toward a critical assessment of algorithms. It recommended researchers to "design these systems so that their actions and decision-making are transparent and easily interpretable by humans, and thus can be examined for any bias they may contain, rather than just learning and repeating these biases". Intended only as guidance, the report did not create any legal precedent.
1474:
and power-shifting efforts in the design of human-centered AI solutions. An academic initiative in this regard is
Stanford University's Institute for Human-Centered Artificial Intelligence which aims to foster multidisciplinary collaboration. The mission of the institute is to advance artificial intelligence (AI) research, education, policy and practice to improve the human condition.
957:, racist views, child abuse and pornography, and other upsetting and offensive content. Other examples include the display of higher-paying jobs to male applicants on job search websites. Researchers have also identified that machine translation exhibits a strong tendency towards male defaults. In particular, this is observed in fields linked to unbalanced gender distribution, including
876:, a software that determines an individual's likelihood of becoming a criminal offender. The software is often criticized for being far more likely to label Black individuals as criminals than others; it then feeds that data back into itself as individuals become registered criminals, further reinforcing the bias created by the dataset the algorithm is acting on.
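The loop can be demonstrated in a few lines of simulation. In the sketch below, the two neighborhoods have identical true crime rates by construction; a small initial disparity in recorded crime still compounds, because patrols are allocated by past records and more patrols generate more records (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
true_rate = np.array([1.0, 1.0])    # two neighborhoods, identical true crime
recorded = np.array([10.0, 12.0])   # slight initial disparity in recorded crime

for _ in range(20):
    patrols = 10 * recorded / recorded.sum()      # allocate by recorded crime
    recorded += rng.poisson(true_rate * patrols)  # more patrols -> more records

print(recorded)  # the initial gap widens even though true rates are equal
```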
636:
their design. Decisions made by one designer, or team of designers, may be obscured among the many pieces of code created for a single program; over time these decisions and their collective impact on the program's output may be forgotten. In theory, these biases may create new patterns of behavior, or "scripts", in relationship to specific technologies as the code
1159:
straight men 81% of the time, and a correct distinction between gay and straight women 74% of the time. The study prompted a backlash from the LGBTQIA community, which feared the possible negative repercussions of this AI system putting members of the LGBTQIA community at risk of being "outed" against their will.
608:
solved. That means the code could incorporate the programmer's imagination of how the world works, including their biases and expectations. While a computer program can incorporate bias in this way, Weizenbaum also noted that any data fed to a machine additionally reflects "human decisionmaking processes" as data is being selected.
914:, the founders of the company had adopted a policy of transparency in search results regarding paid placement, arguing that "advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers." This bias would be an "invisible" manipulation of the user.
1235:, often exceeding the understanding of the people who use them. Large-scale operations may not be understood even by those involved in creating them. The methods and processes of contemporary programs are often obscured by the inability to know every permutation of a code's input or output. Social scientist
740:
highly grammatically gendered language, revealed that the models exhibited a significant predisposition towards the masculine grammatical gender when referring to occupation terms, even for female-dominated professions. This suggests the models amplified societal gender biases present in the training data.
1586:
fiduciary". It defines "any denial or withdrawal of a service, benefit or good resulting from an evaluative decision about the data principal" or "any discriminatory treatment" as a source of harm that could arise from improper use of data. It also makes special provisions for people of "Intersex status".
1576:
bill in the United States. The bill, which went into effect on
January 1, 2018, required "the creation of a task force that provides recommendations on how information on agency automated decision systems may be shared with the public, and how agencies may address instances where people are harmed by
1468:
and collaboration in the development of AI systems can play a critical role in tackling algorithmic bias. Integrating insights, expertise, and perspectives from disciplines outside of computer science can foster a better understanding of the impact data-driven solutions have on society. An example of this
1246:
An example of this complexity can be found in the range of inputs into customizing feedback. The social media site
Facebook factored in at least 100,000 data points to determine the layout of a user's social media feed in 2013. Furthermore, large teams of programmers may operate in relative isolation
1184:
Given the stereotypes and stigmas that still exist surrounding disabilities, the sensitive nature of revealing these identifying characteristics also carries vast privacy challenges. As disclosing disability information can be taboo and drive further discrimination against this population, there
1154:
Facial recognition technology has been seen to cause problems for transgender individuals. In 2018, there were reports of Uber drivers who were transgender or transitioning experiencing difficulty with the facial recognition software that Uber implements as a built-in security measure. As a result of
948:
was cited for gathering data points to infer when women customers were pregnant, even if they had not announced it, and then sharing that information with marketing partners. Because the data had been predicted, rather than directly observed or reported, the company had no legal obligation to protect
793:
Lastly, technical bias can be created by attempting to formalize decisions into concrete steps on the assumption that human behavior works in the same way. For example, software weighs data points to determine whether a defendant should accept a plea bargain, while ignoring the impact of emotion on a
765:
Technical bias emerges through limitations of a program, computational power, its design, or other constraint on the system. Such bias can also be a restraint of design, for example, a search engine that shows three results per screen can be understood to privilege the top three results slightly more
748:
Political bias refers to the tendency of algorithms to systematically favor certain political viewpoints, ideologies, or outcomes over others. Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate
739:
A recent focus in research has been on the complex interplay between the grammatical properties of a language and real-world biases that can become embedded in AI systems, potentially perpetuating harmful stereotypes and assumptions. The study on gender bias in language models trained on
Icelandic, a
623:
per year from 1982 to 1986, based on implementation of a new computer-guidance assessment system that denied entry to women and men with "foreign-sounding names" based on historical trends in admissions. While many schools at the time employed similar biases in their selection process, St. George was
462:
As algorithms expand their ability to organize society, politics, institutions, and behavior, sociologists have become concerned with the ways in which unanticipated output and manipulation of data can impact the physical world. Because algorithms are often considered to be neutral and unbiased, they
1158:
There has also been a study that was conducted at
Stanford University in 2017 that tested algorithms in a machine learning system that was said to be able to detect an individual's sexual orientation based on their facial images. The model in the study predicted a correct distinction between gay and
1009:
A study conducted by researchers at UC Berkeley in
November 2019 revealed that mortgage algorithms discriminated against Latino and African American borrowers on the basis of "creditworthiness", a standard rooted in U.S. fair-lending law, which allows lenders to use
997:
Biometric data about race may also be inferred, rather than observed. For example, a 2012 study showed that names commonly associated with blacks were more likely to yield search results implying arrest records, regardless of whether there is any police record of that individual's name. A 2015 study
993:
sets. Biometric data is drawn from aspects of the body, including racial features either observed or inferred, which can then be transferred into data points. Speech recognition technology can have different accuracies depending on the user's accent. This may be caused by a lack of training data
968:
turned off an AI system it developed to screen job applications when it realized the system was biased against women. The recruitment tool excluded applicants who attended all-women's colleges and resumes that included the word "women's". A similar problem emerged with music streaming services: in 2019, it
607:
are a sequence of rules created by humans for a computer to follow. By following those rules consistently, such programs "embody law", that is, enforce a specific way to solve problems. The rules a computer follows are based on the assumptions of a computer programmer for how these problems might be
1585:
On July 31, 2018, a draft of the Personal Data Bill was presented. The draft proposes standards for the storage, processing and transmission of data. While it does not use the term algorithm, it makes provisions for "harm resulting from any processing or any kind of processing undertaken by the
1473:
a proposed framework for facilitating collaboration when developing AI-driven solutions concerned with social impact. This framework identifies guiding principles for stakeholder participation when working on AI for Social Good (AI4SG) projects. PACT attempts to reify the importance of decolonizing
1451:
are attempting to create more inclusive spaces in the AI community and work against the often harmful desires of corporations that control the trajectory of AI research. Critiques of simple inclusivity efforts suggest that diversity programs cannot address overlapping forms of inequality, and have
1005:
favored white patients over sicker black patients. The algorithm predicts how much patients would cost the health-care system in the future. However, cost is not race-neutral, as black patients incurred about $1,800 less in medical costs per year than white patients with the same number of chronic
927:
of Facebook users showed a 20% increase (340,000 votes) among users who saw messages encouraging voting, as well as images of their friends who had voted. Legal scholar Jonathan Zittrain has warned that this could create a "digital gerrymandering" effect in elections, "the selective presentation of
867:
software (PredPol), deployed in Oakland, California, suggested an increased police presence in black neighborhoods based on crime data reported by the public. The simulation showed that the public reported crime based on the sight of police cars, regardless of what police were doing. The simulation
838:
Emergent bias can occur when an algorithm is used by unanticipated audiences. For example, machines may require that users can read, write, or understand numbers, or relate to an interface using metaphors that they do not understand. These exclusions can become compounded, as biased or exclusionary
829:
Unpredictable correlations can emerge when large data sets are compared to each other. For example, data collected about web-browsing patterns may align with signals marking sensitive data (such as race or sexual orientation). By selecting according to certain behavior or browsing patterns, the end
809:
bias is the result of the use and reliance on algorithms across new or unanticipated contexts. Algorithms may not have been adjusted to consider new forms of knowledge, such as new drugs or medical breakthroughs, new laws, business models, or shifting cultural norms. This may exclude groups through
635:
Though well-designed algorithms frequently determine outcomes that are as equitable as (or more equitable than) the decisions of human beings, cases of bias still occur and are difficult to predict and analyze. The complexity of analyzing algorithmic bias has grown alongside the complexity of programs and
1442:
Amid concerns that the design of AI systems is primarily the domain of white, male engineers, a number of scholars have suggested that algorithmic bias may be minimized by expanding inclusion in the ranks of those designing AI systems. For example, just 12% of machine learning engineers are women,
1167:
While the modalities of algorithmic fairness have been judged on the basis of different aspects of bias, such as gender, race, and socioeconomic status, disability is often left out of the list. The marginalization that people with disabilities currently face in society is being translated into AI systems
940:
was discovered to recommend male variations of women's names in response to search queries. The site did not make similar recommendations in searches for male names. For example, "Andrea" would bring up a prompt asking if users meant "Andrew", but queries for "Andrew" did not ask if users meant to
817:
In 1990, an example of emergent bias was identified in the software used to place US medical students into residencies, the National Residency Match Program (NRMP). The algorithm was designed at a time when few married couples would seek residencies together. As more women entered medical schools,
699:
Another source of bias, which has been called "label choice bias", arises when proxy measures used to train algorithms build in bias against certain groups. For example, a widely used algorithm predicted health care costs as a proxy for health care needs, and used predictions to allocate
651:
as "algorithmic authority". Shirky uses the term to describe "the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources", such as search results. This neutrality can also be misrepresented by the language used by experts and the media when
615:
if users are unclear about how to interpret the results. Weizenbaum warned against trusting decisions made by computer programs that a user does not understand, comparing such faith to a tourist who can find his way to a hotel room exclusively by turning left or right on a coin toss. Crucially, the
554:
Beyond assembling and processing data, bias can emerge as a result of design. For example, algorithms that determine the allocation of resources or scrutiny (such as determining school placements) may inadvertently discriminate against a category when determining risk based on similar users (as in
1310:
Some practitioners have tried to estimate and impute these missing sensitive categorizations in order to allow bias mitigation, for example by building systems to infer ethnicity from names; however, this can introduce other forms of bias if not undertaken with care. Machine learning researchers have
1106:
images found several examples of bias when run against criminal databases. The software was assessed as identifying men more frequently than women, older people more frequently than the young, and identified Asians, African-Americans and other races more often than whites. A 2018 study found that
1101:
Surveillance camera software may be considered inherently political because it requires algorithms to distinguish normal from abnormal behaviors, and to determine who belongs in certain locations at certain times. The ability of such algorithms to recognize faces across a racial spectrum has been
1084:
algorithm designed to remove online hate speech was found to advantage white men over black children when assessing objectionable content, according to internal Facebook documents. The algorithm, which is a combination of computer programs and human content reviewers, was created to protect broad
1061:
One study that set out to examine "Risk, Race, & Recidivism: Predictive Bias and Disparate Impact" alleges a two-fold (45 percent vs. 23 percent) adverse likelihood for black vs. Caucasian defendants to be misclassified as imposing a higher risk despite having objectively remained without any
1433:
calls for applying a human rights framework to harms caused by algorithmic bias. This includes legislating expectations of due diligence on behalf of designers of these algorithms, and creating accountability when private actors fail to protect the public interest, noting that such rights may be
1344:
A study of 84 policy guidelines on ethical AI found that fairness and "mitigation of unwanted bias" were common points of concern, addressed through a blend of technical solutions, transparency and monitoring, right to remedy and increased oversight, and diversity and inclusion efforts.
1222:
Literature on algorithmic bias has focused on the remedy of fairness, but definitions of fairness are often incompatible with each other and the realities of machine learning optimization. For example, defining fairness as an "equality of outcomes" may simply refer to a system producing the same
1180:
most recently, which establishes that disability is a result of the mismatch between people's interactions and barriers in their environment, rather than of impairments and health conditions. Disabilities can also be situational or temporary, and in a constant state of flux. Disabilities are
952:
Web search algorithms have also been accused of bias. Google's results may prioritize pornographic content in search terms related to sexuality, for example, "lesbian". This bias extends to the search engine showing popular but sexualized content in neutral searches. For example, "Top 25 Sexiest
1477:
Collaboration with outside experts and various stakeholders facilitates ethical, inclusive, and accountable development of intelligent systems. It incorporates ethical considerations, understands the social and cultural context, promotes human-centered design, leverages technical expertise, and
1359:
There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emergent fields focus on tools which are typically applied to the (training) data used by the program rather than the algorithm's internal processes. These methods may also
717:
Language bias refers to a type of statistical sampling bias tied to the language of a query that leads to "a systematic deviation in sampling information that prevents it from accurately representing the true coverage of topics and views available in their repository." Luo et al.'s work shows that
526:
algorithm may deny a loan without being unfair, if it is consistently weighing relevant financial criteria. If the algorithm recommends loans to one group of users, but denies loans to another set of nearly identical users based on unrelated criteria, and if this behavior can be repeated across
1171:
The shifting nature of disabilities and their subjective characterization make them more difficult to address computationally. The lack of historical depth in defining disabilities, collecting their incidence and prevalence in questionnaires, and establishing recognition adds to the controversy and
781:
uses unrelated information to sort results, for example, a flight-pricing algorithm that sorts results by alphabetical order would be biased in favor of American Airlines over United Airlines. The opposite may also apply, in which results are evaluated in contexts different from which they are
1530:
the controller should use appropriate mathematical or statistical procedures for the profiling, implement technical and organisational measures appropriate ... that prevents, inter alia, discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion,
977:
Algorithms have been criticized as a method for obscuring racial prejudices in decision-making. Because of how certain races and ethnic groups were treated in the past, data can often contain hidden biases. For example, black people are likely to receive longer sentences than white people who
922:
A series of studies about undecided voters in the US and in India found that search engine results were able to shift voting outcomes by about 20%. The researchers concluded that candidates have "no means of competing" if an algorithm, with or without intent, boosted page listings for a rival
1254:
and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics. These personal adjustments can confuse general attempts to understand algorithms. One unidentified streaming radio service reported that it used five unique music-selection
1397:
Ethics guidelines on AI point to the need for accountability, recommending that steps be taken to improve the interpretability of results. Such solutions include the consideration of the "right to understanding" in machine learning algorithms, and to resist deployment of machine learning in
1547:
that advised on the implementation of data protection law, its practical dimensions are unclear. It has been argued that the Data Protection Impact Assessments for high risk data profiling (alongside other pre-emptive measures within data protection) may be a better way to tackle issues of
726:
Gender bias refers to the tendency of these models to produce outputs that are unfairly prejudiced towards one gender over another. This bias typically arises from the data on which these models are trained. For example, large language models often assign roles and characteristics based on
1038:
claims that the average COMPAS-assigned recidivism risk level of black defendants is significantly higher than the average COMPAS-assigned risk level of white defendants, and that black defendants are twice as likely to be erroneously assigned the label "high-risk" as white defendants.
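The arithmetic at the center of this dispute can be made concrete: a risk score can be equally well calibrated for two groups and still yield different false-positive rates whenever the groups' base rates differ. A toy simulation with invented rates:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000

def false_positive_rate(base_rate, threshold=0.5):
    reoffend = rng.random(n) < base_rate
    signal = reoffend + rng.normal(0.0, 1.0, n)  # noisy evidence, same for both groups
    # Calibrated risk = P(reoffend | signal), via Bayes with the group's base rate.
    like1 = np.exp(-0.5 * (signal - 1) ** 2)
    like0 = np.exp(-0.5 * signal ** 2)
    risk = like1 * base_rate / (like1 * base_rate + like0 * (1 - base_rate))
    high = risk > threshold
    return high[~reoffend].mean()  # labeled high-risk but did not reoffend

print("FPR at base rate 0.5:", false_positive_rate(0.5))
print("FPR at base rate 0.3:", false_positive_rate(0.3))  # substantially lower
```

With the higher base rate, the calibrated posterior crosses the "high-risk" threshold on weaker evidence, so more people who do not reoffend are labeled high-risk; this is the sense in which calibration and equal error rates are mathematically incompatible under unequal base rates.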
437:
Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in
1247:
from one another, and be unaware of the cumulative effects of small decisions within connected, elaborate algorithms. Not all code is original, and may be borrowed from other libraries, creating a complicated set of relationships between data processing and data input systems.
735:
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.
660:
has critiqued algorithms as a new form of "generative power", in that they are a virtual means of generating actual ends. Where previously human behavior generated data to be collected and studied, powerful algorithms increasingly could shape and define human behaviors.
872:, which conducted the simulation, warned that in places where racial discrimination is a factor in arrests, such feedback loops could reinforce and perpetuate racial discrimination in policing. Another well-known example of an algorithm exhibiting such behavior is
1562:
580:
This card was used to load software into an old mainframe computer. Each byte (the letter 'A', for example) is entered by punching holes. Though contemporary computers are more complex, they reflect this human decision-making process in collecting and processing
1556:
The United States has no general legislation controlling algorithmic bias, approaching the problem through various state and federal laws that might vary by industry, sector, and by how an algorithm is used. Many policies are self-enforced or controlled by the
842:
Apart from exclusion, unanticipated uses may emerge from the end user relying on the software rather than their own knowledge. In one example, an unanticipated user group led to algorithmic bias in the UK, when the British Nationality Act Program was created as a
879:
Recommender systems such as those used to recommend online videos or news articles can create feedback loops. When users click on content that is suggested by algorithms, it influences the next set of suggestions. Over time this may lead to users entering a
450:
of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect "systematic and unfair" discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union's
708:
Machine learning bias refers to systematic and unfair disparities in the output of machine learning algorithms. These biases can manifest in various ways and are often a reflection of the data used to train these algorithms. Here are some key aspects:
1326:
Algorithmic bias does not only include protected categories, but can also concern characteristics less easily observable or codifiable, such as political viewpoints. In these cases, there is rarely an easily accessible or non-controversial
1303:. In other cases, the data controller may not wish to collect such data for reputational reasons, or because it represents a heightened liability and security risk. It may also be the case that, at least in relation to the European Union's
1255:
algorithms it selected for its users, based on their behavior. This creates different experiences of the same streaming services between different users, making it harder to understand what these algorithms do. Companies also run frequent
3383:
902:
created a flight-finding algorithm in the 1980s. The software presented a range of flights from various airlines to customers, but weighed factors that boosted its own flights, regardless of price or convenience. In testimony to the
6265:
Elliott, Marc N.; Morrison, Peter A.; Fremont, Allen; McCaffrey, Daniel F.; Pantoja, Philip; Lurie, Nicole (June 2009). "Using the Census Bureau's surname list to improve estimates of race/ethnicity and associated disparities".
1335:
can emerge from a lack of understanding of protected categories, for example, insurance rates based on historical data of car accidents which may overlap, strictly by coincidence, with residential clusters of ethnic minorities.
655:
Because of their convenience and authority, algorithms are theorized as a means of delegating responsibility away from humans. This can have the effect of reducing alternative options, compromises, or flexibility. Sociologist
1150:
In 2019, it was found that on Facebook, searches for "photos of my female friends" yielded suggestions such as "in bikinis" or "at the beach". In contrast, searches for "photos of my male friends" yielded no results.
517:
are concerned with algorithmic processes embedded into hardware and software applications because of their political and social impact, and question the underlying assumptions of an algorithm's neutrality. The term
1088:
While algorithms are used to track and block hate speech, some were found to be 1.5 times more likely to flag information posted by Black users and 2.2 times as likely to flag information as hate speech if written in
7079:
Floridi, Luciano; Cowls, Josh; Beltrametti, Monica; Chatila, Raja; Chazerand, Patrice; Dignum, Virginia; Luetge, Christoph; Madelin, Robert; Pagallo, Ugo; Rossi, Francesca; Schafer, Burkhard (December 1, 2018).
998:
also found that Black and Asian people are assumed to have lower lung function due to racial and occupational exposure data not being incorporated into the prediction algorithm's model of lung function.
897:
Corporate algorithms could be skewed to invisibly favor financial arrangements or agreements between companies, without the knowledge of a user who may mistake the algorithm as being impartial. For example,
585:
The earliest computer programs were designed to mimic human reasoning and deductions, and were deemed to be functioning when they successfully and consistently reproduced that human logic. In his 1976 book
1478:
addresses policy and legal considerations. Collaboration across disciplines is essential to effectively mitigate bias in AI systems and ensure that AI technologies are fair, transparent, and accountable.
1199:
Safiya Noble notes an example of the search for "black girls", which was reported to result in pornographic images. Google claimed it was unable to erase those pages unless they were considered unlawful.
652:
results are presented to the public. For example, a list of news items selected and presented as "trending" or "popular" may be created based on significantly wider criteria than just their popularity.
1010:
measures of identification to determine whether an individual is worthy of receiving loans. These particular algorithms were present in FinTech companies and were shown to discriminate against minorities.
4734:
6658:
Sen, S.; Dasgupta, D.; Gupta, K. D. (2020). "An Empirical Study on Algorithmic Bias". 2020 IEEE 44th Annual Computers, Software, and Applications Conference (COMPSAC). Madrid, Spain. pp. 1189-1194,
2025:
786:
software is used by surveillance cameras, but evaluated by remote staff in another country or region, or evaluated by non-human algorithms with no awareness of what takes place beyond the camera's
5817:
Castelnovo, Alessandro; Inverardi, Nicole; Nanino, Gabriele; Penco, Ilaria; Regoli, Daniele (2023). "Fair Enough? A map of the current limitations to the requirements to have "fair" algorithms".
5381:
Brinkman, Aurora H.; Rea-Sandin, Gianna; Lund, Emily M.; Fitzpatrick, Olivia M.; Gusman, Michaela S.; Boness, Cassandra L.; Scholars for Elevating Equity and Diversity (SEED) (October 20, 2022).
1410:, for example, also suggests that monitoring output means designing systems in such a way as to ensure that solitary components of the system can be isolated and shut down if they skew results.
1503:" in Article 22. These rules prohibit "solely" automated decisions which have a "significant" or "legal" effect on an individual, unless they are explicitly authorised by consent, contract, or
644:
that algorithms require. For example, if data shows a high number of arrests in a particular area, an algorithm may assign more police patrols to that area, which could lead to more arrests.
4171:
3681:
1141:
de-listed 57,000 books after an algorithmic change expanded its "adult content" blacklist to include any book addressing sexuality or gay themes, such as the critically acclaimed novel
531:. This bias may be intentional or unintentional (for example, it can come from biased data obtained from a worker who previously did the job the algorithm will now perform).
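One simple audit implied by the loan example above is a paired test: present the system with otherwise-identical applicants who differ only in an attribute that should be irrelevant, and measure how often the decision flips. A sketch, with an invented stand-in scoring function:

```python
import numpy as np

def consistency_rate(model, applicants, flip_column):
    # Toggle one attribute that should be irrelevant and compare decisions.
    flipped = applicants.copy()
    flipped[:, flip_column] = 1 - flipped[:, flip_column]
    return np.mean(model(applicants) == model(flipped))

# Illustrative model that (wrongly) keys on column 2, an irrelevant attribute:
model = lambda X: (0.6 * X[:, 0] + 0.4 * X[:, 1] - 0.5 * X[:, 2]) > 0.3

rng = np.random.default_rng(7)
applicants = rng.integers(0, 2, size=(1000, 3)).astype(float)
print("decisions unchanged for near-identical applicants:",
      consistency_rate(model, applicants, flip_column=2))  # well below 1.0
```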
969:
was discovered that the recommender system algorithm used by Spotify was biased against women artists. Spotify's song recommendations suggested more male artists than women artists.
4345:
Alexander, Rudolph; Gyamerah, Jacquelyn (September 1997). "Differential Punishing of African Americans and Whites Who Possess Drugs: A Just Policy or a Continuation of the Past?".
1124:
6610:
627:
In recent years, as more algorithms have begun to use machine learning methods on real-world data, algorithmic bias has been found more often, due to bias existing in the data.
563:
are available. This can skew algorithmic processes toward results that more closely correspond with larger samples, which may disregard data from underrepresented populations.
510:. By analyzing and processing data, algorithms are the backbone of search engines, social media websites, recommendation engines, online retail, online advertising, and more.
502:. Advances in computer hardware have led to an increased ability to process, store and transmit data. This has in turn boosted the design and adoption of technologies such as
6585:
1434:
obscured by the complexity of determining responsibility within a web of complex, intertwining processes. Others propose the need for clear liability insurance mechanisms.
1006:
conditions, which led to the algorithm scoring white patients as equally at risk of future health problems as black patients who suffered from significantly more diseases.
2747:
868:
interpreted police car sightings in modeling its predictions of crime, and would in turn assign an even larger increase of police presence within those neighborhoods. The
1456:
to the design of algorithms. Researchers at the University of Cambridge have argued that addressing racial diversity is hampered by the "whiteness" of the culture of AI.
6756:
1515:
of decisions reached. While these regulations are commonly considered to be new, nearly identical provisions have existed across Europe since 1995, in Article 15 of the
6453:
647:
The decisions of algorithmic programs can be seen as more authoritative than the decisions of the human beings they are meant to assist, a process described by author
7451:
Bondi, Elizabeth; Xu, Lily; Acosta-Navas, Diana; Killian, Jackson A. (July 21, 2021). "Envisioning Communities: A Participatory Approach Towards AI for Social Good".
7207:
6925:
6401:
Binns, Reuben; Veale, Michael; Kleek, Max Van; Shadbolt, Nigel (September 13, 2017). "Like Trainer, Like Bot? Inheritance of Bias in Algorithmic Content Moderation".
3865:
6036:
7707:
5134:
4635:
1371:
is being drafted that aims to specify methodologies which help creators of algorithms eliminate issues of bias and articulate transparency (i.e. to authorities or
1291:, are often not explicitly considered when collecting and processing data. In some cases, there is little opportunity to collect this data explicitly, such as in
953:
Women Athletes" articles displayed as first-page results in searches for "women athletes". In 2017, Google adjusted these results along with others that surfaced
4704:
482:
5662:
1072:
rights on the basis of race, since the algorithms are argued to be facially discriminatory, to result in disparate treatment, and to not be narrowly tailored.
1066:
907:, the president of the airline stated outright that the system was created with the intention of gaining competitive advantage through preferential treatment.
1548:
algorithmic discrimination, as it restricts the actions of those deploying algorithms, rather than requiring consumers to file complaints or request changes.
7365:
Bondi, Elizabeth; Xu, Lily; Acosta-Navas, Diana; Killian, Jackson A. (2021). "Envisioning Communities: A Participatory Approach Towards AI for Social Good".
1058:
found that the scores were inaccurate 80% of the time, and disproportionately skewed to suggest blacks to be at risk of relapse, 77% more often than whites.
315:
4726:
4435:
616:
tourist has no basis of understanding how or why he arrived at his destination, and a successful arrival does not mean the process is accurate or reliable.
2032:
1307:, such data falls under the 'special category' provisions (Article 9), and therefore comes with more restrictions on potential collection and processing.
863:, or recursion, if data collected for an algorithm results in real-world responses which are fed back into the algorithm. For example, simulations of the
761:
Facial recognition software used in conjunction with surveillance cameras was found to display bias in recognizing Asian and black faces over white faces.
539:
Bias can be introduced to an algorithm in several ways. During the assemblage of a dataset, data may be collected, digitized, adapted, and entered into a
1208:
Several problems impede the study of large-scale algorithmic bias, hindering the application of academically rigorous studies and public understanding.
989:
cameras were criticized when image-recognition algorithms consistently asked Asian users if they were blinking. Such examples are the product of bias in
7623:
Edwards, Lilian; Veale, Michael (May 23, 2017). "Slave to the Algorithm? Why a Right to an Explanation Is Probably Not the Remedy You Are Looking For".
5522:
3799:
Bond, Robert M.; Fariss, Christopher J.; Jones, Jason J.; Kramer, Adam D. I.; Marlow, Cameron; Settle, Jaime E.; Fowler, James H. (September 13, 2012).
3522:
Sun, Wenlong; Nasraoui, Olfa; Shafto, Patrick (2018). "Iterated Algorithmic Bias in the Interactive Machine Learning Process of Information Filtering".
6119:
4968:
6013:
Sandvig, Christian; Hamilton, Kevin; Karahalios, Karrie; Langbort, Cedric (2014). Gangadharan, Seeta Pena; Eubanks, Virginia; Barocas, Solon (eds.).
4078:
Prates, Marcelo O. R.; Avelar, Pedro H. C.; Lamb, Luis (2018). "Assessing Gender Bias in Machine Translation -- A Case Study with Google Translate".
3471:
958:
522:
describes systematic and repeatable errors that create unfair outcomes, such as privileging one arbitrary group of users over others. For example, a
406:
7034:
2698:
Introna, Lucas D. (December 21, 2006). "Maintaining the reversibility of foldings: Making the ethics (politics) of information technology visible".
814:(the samples "fed" to a machine, by which it models certain conclusions) do not align with contexts that an algorithm encounters in the real world.
4099:
Prates, Marcelo O. R.; Avelar, Pedro H.; Lamb, Luís C. (2019). "Assessing gender bias in machine translation: A case study with Google Translate".
1646:
692:
An example of this form of bias is the British Nationality Act Program, designed to automate the evaluation of new British citizens after the 1981
7760:
4850:"Automating Judicial Discretion: How Algorithmic Risk Assessments in Pretrial Adjudications Violate Equal Protection Rights on the Basis of Race"
3337:
1193:
While users generate results that are "completed" automatically, Google has failed to remove sexist and racist autocompletion text. For example,
6845:
6562:
1263:
can run up to ten million subtle variations of its service per day, creating different experiences of the service between each use and/or user.
790:. This could create an incomplete understanding of a crime scene, for example, potentially mistaking bystanders for those who commit the crime.
7933:
6611:
https://venturebeat.com/2018/05/31/pymetrics-open-sources-audit-ai-an-algorithm-bias-detection-tool/
5803:
5313:
4025:
1577:
agency automated decision systems." The task force is required to present findings and recommendations for further regulatory action in 2019.
1531:
religion or beliefs, trade union membership, genetic or health status or sexual orientation, or that result in measures having such an effect.
7680:
1448:
1287:
A significant barrier to understanding the tackling of bias in practice is that categories, such as demographics of individuals protected by
7787:
1418:
981:
In 2015, Google apologized when black users complained that an image-identification algorithm in its Photos application identified them as
1936:
5260:
5245:
1615:
774:
mechanism is not truly random, it can introduce bias, for example, by skewing selections toward items at the end or beginning of a list.
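A classic concrete case is the "naive shuffle", which swaps every position with a uniformly random index. It looks random but provably favors some orderings over others (the unbiased alternative draws the random index from the unswapped remainder, as in the Fisher-Yates shuffle):

```python
import random
from collections import Counter

def naive_shuffle(items):
    # Biased: the n**n equally likely swap sequences map onto n!
    # permutations, and n**n is not divisible by n! for n > 2.
    for i in range(len(items)):
        j = random.randrange(len(items))  # unbiased version: randrange(i, len(items))
        items[i], items[j] = items[j], items[i]
    return items

counts = Counter(tuple(naive_shuffle([0, 1, 2])) for _ in range(60_000))
for perm, c in sorted(counts.items()):
    print(perm, c)  # frequencies deviate systematically from the uniform 10,000
```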
202:
167:
7366:
4950:
4786:
810:
technology, without providing clear outlines to understand who is responsible for their exclusion. Similarly, problems may emerge when
3210:
Feng, Shangbin; Park, Chan Young; Liu, Yuhan; Tsvetkov, Yulia (July 2023). Rogers, Anna; Boyd-Graber, Jordan; Okazaki, Naoaki (eds.).
7534:"Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling"
5081:
7138:
6575:
3212:"From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models"
4887:
2304:
1047:
749:
responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.
624:
most notable for automating said bias through the use of an algorithm, thus gaining the attention of people on a much wider scale.
266:
244:
7654:
5472:"Mission β Disability is Diversity β Dear Entertainment Industry, THERE'S NO DIVERSITY, EQUITY & INCLUSION WITHOUT DISABILITY"
5439:
4052:
3906:
1522:
The GDPR addresses algorithmic bias in profiling systems, as well as the statistical approaches possible to clean it, directly in
7735:
6873:
5890:
5160:
1610:
1319:
to propose methods whereby algorithmic bias can be assessed or mitigated without these data ever being available to modellers in
1134:
180:
5987:
Dwork, Cynthia; Hardt, Moritz; Pitassi, Toniann; Reingold, Omer; Zemel, Rich (November 28, 2011). "Fairness Through Awareness".
5916:
2755:
1962:
7060:
4487:
1123:'s recommendation algorithm was linking Grindr to applications designed to find sex offenders, which critics said inaccurately
104:
6787:
3031:
2622:
) about the function and possible effects of their algorithms. The project was approved in February 2017 and is sponsored by the
7233:
6967:
6428:
6095:
5026:
Raji, Inioluwa Deborah; Gebru, Timnit; Mitchell, Margaret; Buolamwini, Joy; Lee, Joonseok; Denton, Emily (February 7, 2020).
4298:
3524:
Proceedings of the 10th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management
3151:
3114:
2483:
2447:
1794:
1727:
1690:
399:
325:
279:
234:
229:
6150:
4145:
3412:
727:
traditional gender norms; it might associate nurses or secretaries predominantly with women and engineers or CEOs with men.
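At the corpus level, the association such models absorb can be seen with nothing more than co-occurrence counts. A toy sketch over an invented corpus:

```python
from collections import Counter

# Count which pronouns co-occur with occupation words; a model trained on
# such text inherits the skew. The corpus below is invented for illustration.
corpus = [
    "the nurse said she would help",
    "the engineer said he fixed it",
    "the secretary said she typed it",
    "the ceo said he approved it",
    "the nurse said she was tired",
]
assoc = Counter()
for sentence in corpus:
    words = sentence.split()
    for occupation in ("nurse", "engineer", "secretary", "ceo"):
        if occupation in words:
            for pronoun in ("he", "she"):
                if pronoun in words:
                    assoc[(occupation, pronoun)] += 1
print(assoc)  # nurse/secretary skew toward "she"; engineer/ceo toward "he"
```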
ambiguity in its quantification and calculations. The definition of disability has been long debated, shifting from a medical model to a social model of disability, which further asserts that disability is a result of the mismatch between people's interactions and barriers in their environment, rather than impairments and health conditions.
Machine translation exhibits a strong tendency towards male defaults, in particular for fields linked to an unbalanced gender distribution of occupations. In fact, current machine translation systems fail to reproduce the real-world distribution of female workers.
Algorithms are difficult to define, but may be generally understood as lists of instructions that determine how programs read, collect, process, and analyze data to generate output. For a rigorous technical introduction, see Algorithm.
Algorithmic bias describes systematic and repeatable errors in a computer system that create "unfair" outcomes, such as "privileging" one category over another in ways different from the intended function of the algorithm.
Without context, slurs and epithets were flagged even when used by communities which have re-appropriated them.
Concerns over the impact of algorithms on society have led to the creation of working groups in organizations such as Google and Microsoft, which have co-created a working group, Fairness, Accountability, and Transparency in Machine Learning.
information by an intermediary to meet its agenda, rather than to serve its users", if intentionally manipulated.
In his 1976 book Computer Power and Human Reason, Joseph Weizenbaum suggested that bias could arise both from the data used in a program and from the way a program is coded.
Automated results can inaccurately project greater authority than human expertise (in part due to the psychological phenomenon of automation bias).
committed the same crime. This could potentially mean that a system amplifies the original biases in the data.
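A hypothetical simulation sketches how such amplification can work: two districts share the same underlying rate, but because enforcement follows past records, an arbitrary initial imbalance never corrects itself:

```python
import random

random.seed(0)
true_rate = 0.1          # identical underlying rate in both districts
recorded = [30, 10]      # historical records: district 0 starts over-patrolled

for _ in range(50):
    total = sum(recorded)
    for d in (0, 1):
        patrols = int(100 * recorded[d] / total)  # patrols follow past records
        # More patrols observe more of the same underlying behavior.
        recorded[d] += sum(random.random() < true_rate for _ in range(patrols))

print(recorded)  # the 3:1 starting gap persists; the absolute gap widens
```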
An early example of algorithmic bias resulted in as many as 60 women and ethnic minorities denied entry to St. George's Hospital Medical School per year from 1982 to 1986, based on implementation of a new computer-guidance assessment system that denied entry to women and men with "foreign-sounding names" based on historical trends in admissions.
In the pretrial detention context, a law review article argues that algorithmic risk assessments violate 14th Amendment Equal Protection rights on the basis of race, since the algorithms are argued to be facially discriminatory, to result in disparate treatment, and to not be narrowly tailored.
In 2016, the professional networking site LinkedIn was discovered to recommend male variations of women's names in response to search queries. For example, a search for "Andrea" would ask users if they meant "Andrew", but a search for "Andrew" did not ask if users meant to find "Andrea". The company said this was the result of an analysis of users' interactions with the site.
A 1969 diagram for how a simple computer program makes decisions, illustrating a very simple algorithm
situations where the decisions could not be explained or reviewed. Toward this end, a movement for "Explainable AI" is already underway within organizations such as DARPA, for reasons that go beyond the remedy of bias.
Another unintended result of this form of bias was found in the plagiarism-detection software Turnitin, which compares student-written texts to information found online and returns a probability score that the student's work is copied. Because the software compares long strings of text, it is more likely to identify non-native speakers of English than native speakers, as the latter group might be better able to change individual words, break up strings of plagiarized text, or obscure copied passages through synonyms.
The original automated decision rules and safeguards are found in French law since the late 1970s.
and removing the bias from such a system is more difficult. Furthermore, false and accidental correlations can emerge.
Facebook users who saw messages related to voting were more likely to vote. A 2010 randomized trial of 61 million Facebook users found that messages showing which friends had voted produced a measurable increase in turnout.
analyze a program's output and its usefulness and therefore may involve the analysis of its confusion matrix.
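As a minimal sketch with invented labels and predictions, computing a confusion matrix per demographic group makes such disparities visible; here the two groups share identical true labels but receive very different false positive rates:

```python
from collections import Counter

def confusion(y_true, y_pred):
    # Counts of (actual, predicted) pairs: TP, FP, FN, TN.
    c = Counter(zip(y_true, y_pred))
    return {"tp": c[(1, 1)], "fp": c[(0, 1)], "fn": c[(1, 0)], "tn": c[(0, 0)]}

# Invented labels/predictions for two demographic groups.
groups = {
    "group_a": ([0, 0, 1, 1, 0, 1], [0, 1, 1, 1, 1, 1]),
    "group_b": ([0, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1]),
}
for name, (y_true, y_pred) in groups.items():
    m = confusion(y_true, y_pred)
    fpr = m["fp"] / (m["fp"] + m["tn"])  # false positive rate per group
    print(name, m, f"FPR={fpr:.2f}")
```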
with other elements of society. Biases may also impact how society shapes itself around the data points that algorithms require.
Pre-existing bias in an algorithm is a consequence of underlying social and institutional ideologies.
Algorithmic bias has been cited in cases ranging from election outcomes to the spread of online hate speech.
This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity.
Where they are permitted, there must be safeguards in place, such as a right to a human-in-the-loop, and a non-binding right to an explanation of decisions reached.
Algorithms already have numerous applications in legal systems. An example of this is COMPAS, a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist.
with black AI leaders pointing to a "diversity crisis" in the field. Groups like Black in AI and Queer in AI are attempting to create more inclusive spaces in the AI community and work against the often harmful desires of corporations that control the trajectory of AI research.
Companies also run frequent A/B tests to fine-tune algorithms based on user response. For example, the search engine Bing can run up to ten million subtle variations of its service per day, creating different experiences of the service between each use and/or user.
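One common mechanism behind this kind of per-user variation (a generic industry pattern, not a description of Bing's internals) is deterministic hash bucketing, which silently assigns each user a stable experiment arm:

```python
import hashlib

def variant(user_id: str, experiment: str, n_arms: int = 2) -> int:
    # Hash of (experiment, user) picks a stable arm: the same user always
    # sees the same variant, while different users may see different ones.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_arms

print(variant("alice", "ranking-tweak-42"), variant("bob", "ranking-tweak-42"))
```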
Software code can be looked into and improvements can be proposed through source-code-hosting facilities.
a proof-of-concept created by computer scientists and immigration lawyers to evaluate suitability for British citizenship.
Finally, he noted that machines might also transfer good information with unintended consequences.
In 2019, a research study revealed that a healthcare algorithm sold by Optum favored white patients over sicker black patients.
Commercial algorithms are proprietary, and may be treated as trade secrets. Treating algorithms as trade secrets protects companies, such as search engines, where a transparent algorithm might reveal tactics to manipulate search rankings.
An example of this in AI research is PACT, or Participatory Approach to enable Capabilities in communiTies, a proposed framework for facilitating collaboration when designing AI-driven solutions concerned with social impact.
documented recidivism over a two-year period of observation.
Like the non-binding right to an explanation in recital 71, the problem is the non-binding nature of recitals. While it has been treated as a requirement by the Article 29 Working Party that advised on the implementation of data protection law, its practical dimensions are unclear.
multiple occurrences, an algorithm can be described as
An initial approach towards transparency included the open-sourcing of algorithms.
technology is more deeply integrated into society.
and being unaware of important or useful content.
Missing or sparse data about particular groups can produce uncertainty bias, with models offering more confident assessments when larger data sets are available.
Programs read, collect, and process data according to human-designed criteria. Next, programmers assign priorities, or hierarchies, for how a program assesses and sorts that data.
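A hypothetical sketch of such human-assigned priorities: the ranking below is entirely a product of hand-chosen weights, so shifting a single weight changes which record surfaces first:

```python
# Hand-chosen weights: a human decision about what the program should value.
WEIGHTS = {"test_score": 0.5, "experience": 0.3, "referral": 0.2}

def priority(record: dict) -> float:
    return sum(WEIGHTS[k] * record.get(k, 0.0) for k in WEIGHTS)

records = [
    {"name": "A", "test_score": 0.9, "experience": 0.2, "referral": 0.0},
    {"name": "B", "test_score": 0.6, "experience": 0.9, "referral": 1.0},
]
for r in sorted(records, key=priority, reverse=True):
    print(r["name"], round(priority(r), 2))  # B outranks A under these weights
```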
In 2016, the Obama administration released the National Artificial Intelligence Research and Development Strategic Plan, which was intended to guide policymakers toward a critical assessment of algorithms.
Researchers have called for applying a more deliberate lens of intersectionality when assessing how algorithmic systems affect marginalized groups.
1386:
7292:
7264:Journal of Science and Technology of the Arts
6982:
4754:
4752:
3068:
2995:
and algorithms, creating even more exclusion
A flow chart showing the decisions made by a recommendation engine.
One example is the use of risk assessments in criminal sentencing in the United States and parole hearings, where judges were presented with an algorithmically generated score intended to reflect the risk that a prisoner will repeat a crime.
In 2012, the department store franchise Target was cited for gathering data points to infer when women customers were pregnant, even if they had not announced it, and then sharing that information with marketing partners. Because the data had been predicted, rather than directly observed or reported, the company had no legal obligation to protect the privacy of those customers.
Additional complexity occurs through machine learning and the personalization of algorithms based on user interactions such as clicks, time spent on site, and other metrics.
Additional emergent biases include correlations, unanticipated uses, and feedback loops.
From a regulatory perspective, the Toronto Declaration calls for applying a human rights framework to harms caused by algorithmic bias.
Emergent bias may also create a feedback loop, or recursion, if data collected for an algorithm results in real-world responses which are fed back into the algorithm. For example, simulations run by the Human Rights Data Analysis Group of the predictive policing software PredPol, deployed in Oakland, California, suggested that the software would repeatedly recommend increased policing in minority neighborhoods, regardless of the true crime rate.
Bias has also prompted regulation, such as the European Union's General Data Protection Regulation (proposed 2018) and the Artificial Intelligence Act (proposed 2021, approved 2024).
In 2009, online retailer Amazon de-listed 57,000 books after an algorithmic change expanded its "adult content" blacklist to include any book addressing sexuality or gay themes, such as the critically acclaimed novel Brokeback Mountain.
In a 1998 paper describing Google, the founders of the company had adopted a policy of transparency in search results regarding paid placement, arguing that "advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."
Algorithmic processes are complex, often exceeding the understanding of the people who use them.
Price Waterhouse Coopers, for example, also suggests that monitoring output means designing systems in such a way as to ensure that solitary components of the system can be isolated and shut down if they skew results.
For example, American Airlines created a flight-finding algorithm in the 1980s. The software presented a range of flights from various airlines to customers, but weighed factors that boosted its own flights, regardless of price or convenience. In testimony to the United States Congress, the president of the airline stated outright that the system was created with the intention of gaining competitive advantage through preferential treatment.
See also
Algorithmic wage discrimination
Ethics of artificial intelligence
Fairness (machine learning)
Hallucination (artificial intelligence)
Misaligned goals in artificial intelligence
Predictive policing

Further reading
Baer, Tobias (2019). Understand, Manage, and Prevent Algorithmic Bias: A Guide for Business Users and Data Scientists. New York: Apress. ISBN 9781484248843.
Noble, Safiya Umoja (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press. ISBN 9781479837243.
3457:
3452:
3448:
3444:
3440:
3433:
3418:
3414:
3407:
3405:
3389:
3385:
3378:
3376:
3360:
3359:
3354:
3347:
3339:
3333:
3318:
3314:
3310:
3306:
3302:
3298:
3294:
3290:
3286:
3282:
3278:
3271:
3256:
3252:
3245:
3236:
3231:
3226:
3221:
3217:
3213:
3206:
3198:
3194:
3187:
3179:
3174:
3170:
3163:
3155:
3149:
3145:
3141:
3137:
3133:
3126:
3118:
3112:
3108:
3104:
3100:
3096:
3089:
3081:
3076:
3072:
3065:
3063:
3061:
3045:
3041:
3037:
3033:
3026:
3011:
3007:
3001:
2999:
2983:
2979:
2975:
2971:
2967:
2963:
2956:
2949:
2934:
2933:The Intercept
2930:
2923:
2908:
2904:
2898:
2890:
2886:
2882:
2878:
2874:
2870:
2863:
2861:
2852:
2848:
2844:
2840:
2836:
2832:
2825:
2817:
2813:
2809:
2805:
2800:
2795:
2791:
2787:
2783:
2776:
2774:
2757:
2753:
2749:
2742:
2740:
2731:
2727:
2723:
2719:
2714:
2709:
2705:
2701:
2694:
2692:
2690:
2688:
2671:
2667:
2663:
2656:
2654:
2652:
2650:
2648:
2646:
2644:
2628:
2624:
2617:
2609:
2605:
2601:
2597:
2593:
2589:
2582:
2580:
2578:
2562:
2558:
2551:
2536:
2532:
2527:
2522:
2518:
2514:
2510:
2506:
2502:
2495:
2487:
2481:
2477:
2472:
2471:
2462:
2460:
2451:
2445:
2440:
2439:
2430:
2428:
2426:
2424:
2422:
2420:
2418:
2409:
2405:
2401:
2397:
2392:
2387:
2383:
2379:
2378:
2370:
2368:
2359:
2355:
2351:
2347:
2343:
2339:
2332:
2325:
2310:
2309:towcenter.org
2306:
2299:
2297:
2288:
2286:9780262525374
2282:
2278:
2271:
2269:
2267:
2265:
2263:
2261:
2245:
2241:
2237:
2233:
2229:
2225:
2218:
2211:
2209:
2207:
2205:
2203:
2201:
2199:
2197:
2195:
2193:
2191:
2189:
2187:
2185:
2183:
2181:
2179:
2170:
2166:
2159:
2144:
2140:
2136:
2132:
2128:
2124:
2120:
2116:
2112:
2105:
2103:
2101:
2092:
2088:
2084:
2080:
2076:
2072:
2065:
2058:
2056:
2054:
2034:
2027:
2020:
2018:
2016:
2014:
2012:
2010:
1994:
1990:
1983:
1968:
1964:
1957:
1942:
1938:
1934:
1933:Angwin, Julia
1928:
1913:
1909:
1902:
1887:
1886:
1881:
1874:
1859:
1853:
1845:
1841:
1837:
1833:
1829:
1825:
1818:
1811:
1809:
1807:
1798:
1792:
1788:
1783:
1782:
1773:
1758:
1754:
1747:
1739:
1735:
1731:
1725:
1721:
1717:
1713:
1709:
1702:
1694:
1688:
1684:
1680:
1676:
1675:
1667:
1652:
1648:
1641:
1637:
1627:
1624:
1622:
1619:
1617:
1614:
1612:
1609:
1607:
1604:
1602:
1599:
1597:
1594:
1593:
1587:
1578:
1575:
1571:
1570:New York City
1566:
1564:
1560:
1552:United States
1549:
1546:
1542:
1538:
1532:
1527:
1525:
1520:
1518:
1514:
1510:
1506:
1502:
1498:
1494:
1479:
1475:
1472:
1467:
1457:
1455:
1450:
1446:
1435:
1432:
1422:
1420:
1416:
1411:
1409:
1405:
1401:
1394:
1384:
1382:
1378:
1374:
1370:
1369:IEEE standard
1365:
1363:
1356:
1346:
1337:
1334:
1330:
1324:
1322:
1318:
1314:
1308:
1306:
1302:
1298:
1294:
1290:
1280:
1278:
1274:
1273:trade secrets
1264:
1262:
1258:
1253:
1248:
1244:
1242:
1238:
1234:
1224:
1219:
1209:
1201:
1198:
1197:
1189:Google Search
1186:
1182:
1179:
1175:
1174:medical model
1169:
1160:
1156:
1152:
1148:
1146:
1145:
1140:
1136:
1132:
1131:
1126:
1122:
1121:Android store
1118:
1108:
1105:
1094:
1092:
1086:
1083:
1073:
1071:
1068:
1063:
1059:
1057:
1053:
1049:
1045:
1040:
1037:
1033:
1029:
1025:
1021:
1011:
1007:
1004:
999:
995:
992:
988:
984:
979:
970:
967:
962:
960:
956:
950:
947:
942:
939:
929:
926:
915:
913:
908:
906:
901:
885:
883:
882:filter bubble
877:
875:
871:
866:
862:
861:feedback loop
852:
850:
846:
840:
831:
822:
819:
815:
813:
812:training data
808:
799:
797:
791:
789:
785:
780:
775:
773:
769:
759:
750:
741:
737:
728:
719:
713:Language bias
710:
701:
697:
695:
690:
688:
673:
671:
667:
662:
659:
653:
650:
645:
643:
639:
628:
625:
622:
617:
614:
609:
606:
601:
599:
595:
591:
590:
578:
564:
562:
558:
552:
550:
546:
542:
532:
530:
525:
521:
516:
513:Contemporary
511:
509:
505:
501:
497:
493:
484:
475:
473:
468:
466:
460:
458:
454:
449:
448:social biases
445:
441:
435:
433:
430:that create "
429:
425:
421:
410:
405:
403:
398:
396:
391:
390:
388:
387:
380:
377:
376:
370:
369:
362:
359:
357:
354:
352:
349:
347:
344:
343:
340:
335:
334:
327:
324:
322:
319:
317:
314:
312:
309:
307:
303:
300:
298:
295:
293:
290:
288:
285:
284:
281:
276:
275:
268:
265:
263:
260:
258:
255:
253:
250:
246:
245:Mental health
243:
242:
241:
238:
236:
233:
231:
228:
224:
221:
219:
216:
214:
211:
210:
209:
208:Generative AI
206:
204:
201:
199:
196:
194:
191:
189:
186:
185:
182:
177:
176:
169:
166:
164:
161:
159:
156:
154:
151:
149:
148:Deep learning
146:
144:
141:
139:
136:
135:
129:
128:
121:
118:
116:
113:
111:
108:
106:
103:
101:
98:
96:
93:
91:
88:
86:
83:
81:
78:
76:
73:
72:
69:
64:
63:
57:
53:
52:
49:
46:
45:
41:
40:
28:
23:
19:
7890:
7869:
7846:. Retrieved
7833:
7823:February 26,
7821:. Retrieved
7816:
7807:
7795:. Retrieved
7791:
7781:
7769:. Retrieved
7764:
7755:
7743:. Retrieved
7739:
7729:
7719:November 26,
7717:. Retrieved
7708:
7701:
7691:November 26,
7689:. Retrieved
7684:
7674:
7664:November 26,
7662:. Retrieved
7658:
7628:
7624:
7618:
7594:(2): 76β99.
7591:
7587:
7577:
7544:
7540:
7510:(1): 17β24.
7507:
7503:
7497:
7452:
7446:
7434:. Retrieved
7430:
7420:
7408:. Retrieved
7367:
7360:
7333:
7329:
7319:
7300:
7294:
7267:
7263:
7253:
7241:. Retrieved
7237:
7227:
7217:February 11,
7215:. Retrieved
7211:
7201:
7191:February 11,
7189:. Retrieved
7177:
7168:
7158:February 11,
7156:. Retrieved
7142:
7132:
7089:
7085:
7074:
7061:
7055:
7045:February 11,
7043:. Retrieved
7038:
7029:
6992:
6988:
6978:
6951:
6945:
6935:February 11,
6933:. Retrieved
6929:
6919:
6909:February 11,
6907:. Retrieved
6903:
6894:
6884:February 11,
6882:. Retrieved
6877:
6855:February 11,
6853:. Retrieved
6849:
6839:
6827:. Retrieved
6823:the original
6816:
6807:
6795:. Retrieved
6788:the original
6770:(2): 31β32.
6767:
6763:
6750:
6741:
6735:
6726:
6720:
6685:
6681:
6671:
6654:
6642:
6630:
6618:
6606:
6594:
6579:
6570:
6558:
6537:
6525:
6490:
6486:
6483:Vayena, Effy
6463:November 25,
6461:. Retrieved
6457:
6447:
6402:
6396:
6369:
6359:
6314:
6304:
6274:(2): 69β83.
6271:
6267:
6260:
6233:
6229:
6219:
6209:February 11,
6207:. Retrieved
6203:
6193:
6183:November 18,
6181:. Retrieved
6161:
6157:
6129:November 18,
6127:. Retrieved
6123:
6113:
6103:November 19,
6101:. Retrieved
6090:. Springer.
6086:
6079:
6062:
6056:
6046:November 19,
6044:. Retrieved
6041:The Atlantic
6040:
6030:
6021:
5982:
5964:Chen, Yiling
5957:
5936:
5924:. Retrieved
5920:
5910:
5898:. Retrieved
5894:
5884:
5867:11384/136444
5847:
5843:
5833:
5812:
5774:. New York.
5771:
5765:
5725:(6): 35β37.
5722:
5718:
5708:
5696:. Retrieved
5692:
5682:
5670:. Retrieved
5666:
5657:
5627:(125): 3:1.
5624:
5614:
5602:. Retrieved
5598:
5589:
5556:
5552:
5542:
5530:. Retrieved
5526:
5516:
5504:. Retrieved
5500:
5491:
5479:. Retrieved
5475:
5466:
5454:. Retrieved
5447:the original
5433:
5393:(1): 50β62.
5390:
5386:
5376:
5351:
5341:
5331:December 12,
5329:. Retrieved
5318:The Guardian
5317:
5307:
5290:
5280:
5270:December 12,
5268:. Retrieved
5264:
5254:
5240:
5230:February 26,
5228:. Retrieved
5216:
5206:
5196:November 22,
5194:. Retrieved
5190:
5180:
5170:November 22,
5168:. Retrieved
5164:
5154:
5144:November 18,
5142:. Retrieved
5139:The Atlantic
5138:
5128:
5119:
5115:
5102:
5090:. Retrieved
5085:
5076:
5031:
5021:
5002:
4998:
4988:
4978:February 19,
4976:. Retrieved
4972:
4962:
4946:
4933:
4923:November 20,
4921:. Retrieved
4917:
4907:
4897:November 20,
4895:. Retrieved
4891:
4859:
4853:
4843:
4827:
4794:
4790:
4780:
4770:November 18,
4768:. Retrieved
4764:
4739:. Retrieved
4730:
4721:
4709:. Retrieved
4700:
4690:
4665:
4655:
4643:. Retrieved
4639:
4629:
4584:
4580:
4570:
4535:
4531:
4521:
4500:
4491:
4482:
4472:November 18,
4470:. Retrieved
4466:
4456:
4446:November 18,
4444:. Retrieved
4439:
4396:(1): 15β34.
4393:
4389:
4383:
4350:
4346:
4340:
4323:
4319:
4309:
4290:
4262:
4256:
4244:. Retrieved
4239:
4235:
4225:
4213:. Retrieved
4203:
4194:
4184:
4175:
4165:
4153:. Retrieved
4150:The Register
4149:
4139:
4104:
4100:
4094:
4073:
4063:November 17,
4061:. Retrieved
4056:
4046:
4036:November 19,
4034:. Retrieved
4029:
4019:
4010:
4004:
3988:
3978:November 18,
3976:. Retrieved
3972:
3962:
3952:November 18,
3950:. Retrieved
3945:
3941:
3918:November 25,
3916:. Retrieved
3910:
3900:
3890:November 19,
3888:. Retrieved
3881:the original
3876:
3872:
3859:
3808:
3804:
3794:
3749:
3745:
3735:
3725:November 18,
3723:. Retrieved
3719:the original
3714:
3704:
3694:November 18,
3692:. Retrieved
3688:
3658:February 26,
3656:. Retrieved
3652:
3643:
3633:February 26,
3631:. Retrieved
3627:
3617:
3569:(1): 11909.
3566:
3562:
3552:
3523:
3517:
3507:November 25,
3505:. Retrieved
3501:
3491:
3481:November 25,
3479:. Retrieved
3475:
3465:
3449:(5): 14β19.
3446:
3443:Significance
3442:
3432:
3422:November 25,
3420:. Retrieved
3416:
3393:November 25,
3391:. Retrieved
3387:
3364:November 26,
3362:. Retrieved
3356:
3346:
3332:cite journal
3322:November 18,
3320:. Retrieved
3284:
3280:
3270:
3260:November 20,
3258:. Retrieved
3254:
3244:
3215:
3205:
3196:
3186:
3168:
3162:
3135:
3125:
3098:
3088:
3070:
3047:. Retrieved
3035:
3025:
3013:. Retrieved
3009:
2987:November 18,
2985:. Retrieved
2965:
2961:
2948:
2938:February 11,
2936:. Retrieved
2932:
2922:
2912:November 14,
2910:. Retrieved
2906:
2897:
2872:
2868:
2837:(3): 55β78.
2834:
2830:
2824:
2789:
2785:
2762:November 20,
2760:. Retrieved
2756:the original
2751:
2706:(1): 11β25.
2703:
2699:
2676:November 19,
2674:. Retrieved
2669:
2665:
2632:November 19,
2630:. Retrieved
2627:The Atlantic
2626:
2616:
2591:
2587:
2564:. Retrieved
2560:
2550:
2540:November 17,
2538:. Retrieved
2508:
2504:
2494:
2469:
2437:
2381:
2375:
2324:
2314:November 19,
2312:. Retrieved
2308:
2276:
2247:. Retrieved
2227:
2223:
2168:
2158:
2148:November 19,
2146:. Retrieved
2118:
2114:
2074:
2070:
2042:November 18,
2040:. Retrieved
2033:the original
1998:November 19,
1996:. Retrieved
1993:The Guardian
1992:
1982:
1972:November 19,
1970:. Retrieved
1966:
1956:
1946:November 19,
1944:. Retrieved
1940:
1927:
1917:November 19,
1915:. Retrieved
1911:
1901:
1891:November 19,
1889:. Retrieved
1883:
1873:
1863:November 19,
1861:. Retrieved
1852:
1830:(1): 14β29.
1827:
1823:
1780:
1772:
1762:November 20,
1760:. Retrieved
1756:
1746:
1711:
1701:
1673:
1666:
1654:. Retrieved
1650:
1640:
1584:
1567:
1555:
1534:
1529:
1521:
1505:member state
1495:(GDPR), the
1490:
1476:
1470:
1464:Integrating
1463:
1441:
1428:
1412:
1396:
1366:
1358:
1343:
1333:correlations
1329:ground truth
1325:
1309:
1286:
1270:
1249:
1245:
1237:Bruno Latour
1230:
1221:
1207:
1194:
1192:
1183:
1170:
1166:
1157:
1153:
1149:
1142:
1130:The Atlantic
1128:
1114:
1100:
1097:Surveillance
1087:
1079:
1064:
1060:
1041:
1017:
1008:
1000:
996:
980:
976:
963:
951:
943:
935:
921:
909:
896:
878:
858:
841:
837:
828:
825:Correlations
820:
816:
805:
792:
778:
776:
764:
747:
738:
734:
731:Stereotyping
725:
716:
707:
698:
691:
684:
681:Pre-existing
663:
654:
646:
634:
626:
618:
610:
602:
587:
584:
556:
553:
538:
528:
524:credit score
519:
512:
489:
469:
461:
436:
419:
418:
292:Chinese room
181:Applications
18:
6829:December 3,
6599:open source
5698:December 2,
5672:December 2,
5667:www.psu.edu
5604:December 2,
5532:December 2,
5506:December 2,
5481:December 2,
5456:December 2,
4645:October 28,
4442:. USA Today
4326:(3): 1β15.
4032:. USA Today
4013:(4): 37β41.
3948:(1): 93β128
3199:: 7596β7610
2792:(1): 3β16.
2377:AI Magazine
1449:Queer in AI
1445:Black in AI
1241:blackboxing
1030:becoming a
1024:U.S. courts
985:. In 2010,
955:hate groups
722:Gender bias
649:Clay Shirky
642:data points
549:hierarchies
478:Definitions
321:Turing test
297:Friendly AI
68:Major goals
34: 2001
7913:Categories
7740:ProPublica
7462:2105.01774
7378:2105.01774
7270:(2): 3β8.
7092:(4): 703.
6549:1610.02413
6500:1906.11668
6412:1707.01477
6379:1806.03281
6324:1811.11154
5973:1807.01134
5962:Hu, Lily;
5948:1609.07236
5824:2311.12435
5732:1908.08939
5191:AllThingsD
5165:AllThingsD
5092:August 24,
5041:2001.00964
4918:ProPublica
4892:ProPublica
4797:(4): 237.
4765:ProPublica
4701:ProPublica
4114:1809.02208
4085:1809.02208
3225:2305.08283
3178:2305.18189
3080:2303.16281
2391:1606.08813
1941:ProPublica
1632:References
1482:Regulation
1227:Complexity
1080:In 2017 a
1056:ProPublica
1036:ProPublica
1032:recidivist
966:Amazon.com
768:randomness
687:ideologies
658:Scott Lash
545:cataloging
500:Algorithms
326:Regulation
280:Philosophy
235:Healthcare
230:Government
132:Approaches
7848:April 29,
7631:: 18β84.
7610:2044-3994
7489:233740121
7405:233740121
7352:2210-5441
7286:2183-0088
7186:1059-1028
7153:0362-4331
7106:1572-8641
7021:1369-118X
6797:August 1,
6784:0278-0097
6517:201827642
6288:1387-3741
5994:1104.3913
5876:259678387
5798:cite book
5790:987591529
5757:201645229
5749:0001-0782
5649:211723415
5641:1558-2337
5581:252959399
5573:0968-7599
5407:1939-0025
5368:147322669
5326:0261-3077
5225:1059-1028
5068:209862419
4682:242410791
4603:1465-4644
4544:1205-9838
4512:1301.6822
4440:USA TODAY
4418:146588630
4410:0011-1287
4375:152043501
4367:0021-9347
4215:August 7,
4195:The Verge
4155:April 28,
4030:USA TODAY
3833:0028-0836
3591:2045-2322
3044:0099-9660
3010:Brookings
2889:151595343
2851:145639801
2816:148023125
2808:0162-2439
2708:CiteSeerX
2672:: 177β198
2608:145190381
2384:(3): 50.
2350:166742927
2249:March 10,
2244:207195759
2135:1530-7131
1738:235436386
1651:Espacenet
1626:SenseTime
1568:In 2017,
1501:profiling
1373:end users
1349:Technical
1340:Solutions
1321:cleartext
1257:A/B tests
1028:defendant
964:In 2015,
753:Technical
670:Microsoft
638:interacts
561:data sets
356:AI winter
257:Military
120:AI safety
7797:July 28,
7771:July 28,
7745:July 28,
7436:April 6,
7410:April 6,
7243:June 21,
7124:30930541
6712:30018439
6586:Archived
6351:58006233
6296:43293144
6178:16306443
5926:July 23,
5900:July 23,
5425:36265035
4951:Archived
4811:53611813
4741:June 19,
4735:Archived
4711:June 19,
4705:Archived
4621:31742353
4562:26566381
4131:52179151
3996:(2012).
3851:22972300
3786:26243876
3609:30093660
3317:23259274
3049:June 27,
3015:June 27,
2730:17355392
2566:July 31,
2143:55749077
2091:19119278
1844:13798875
1590:See also
1541:recitals
1315:such as
1299:and the
1082:Facebook
983:gorillas
938:LinkedIn
807:Emergent
802:Emergent
796:Turnitin
605:programs
596:pioneer
541:database
379:Glossary
373:Glossary
351:Progress
346:Timeline
306:Takeover
267:Projects
240:Industry
203:Finance
193:Deepfake
143:Symbolic
115:Robotics
90:Planning
7637:2972855
7569:3071679
7561:4797884
7115:6404626
6690:Bibcode
6439:2814848
6384:Bibcode
6252:3060763
5416:9951269
4837:2687339
4819:1677654
4612:7868043
4553:4631137
4176:Reuters
3842:3834737
3813:Bibcode
3777:4547273
3754:Bibcode
3653:Fortune
3600:6085300
3571:Bibcode
3309:2274783
3289:Bibcode
3281:Science
2982:5665107
2535:3128356
2526:2545288
2408:7373959
2358:1736283
1656:July 4,
1524:recital
1233:complex
1135:gay men
567:History
535:Methods
361:AI boom
339:History
262:Physics
7898:
7877:
7635:
7608:
7567:
7559:
7487:
7477:
7403:
7393:
7350:
7307:
7284:
7184:
7151:
7122:
7112:
7104:
7019:
6966:
6782:
6710:
6682:Nature
6581:Quartz
6515:
6437:
6427:
6349:
6339:
6294:
6286:
6250:
6204:Medium
6176:
6094:
5874:
5788:
5778:
5755:
5747:
5647:
5639:
5579:
5571:
5527:Forbes
5423:
5413:
5405:
5366:
5324:
5223:
5066:
5056:
4835:
4817:
4809:
4680:
4619:
4609:
4601:
4560:
4550:
4542:
4416:
4408:
4373:
4365:
4297:
4269:
4129:
3849:
3839:
3831:
3805:Nature
3784:
3774:
3628:Quartz
3607:
3597:
3589:
3540:
3315:
3307:
3150:
3113:
3042:
2980:
2887:
2849:
2814:
2806:
2728:
2710:
2606:
2533:
2523:
2482:
2446:
2406:
2356:
2348:
2283:
2242:
2141:
2133:
2089:
1842:
1793:
1736:
1726:
1689:
1487:Europe
1139:Amazon
1117:Grindr
1020:COMPAS
946:Target
912:Google
888:Impact
874:COMPAS
666:Google
529:biased
432:unfair
424:errors
311:Ethics
7842:(PDF)
7713:(PDF)
7557:S2CID
7537:(PDF)
7485:S2CID
7457:arXiv
7401:S2CID
7373:arXiv
7178:Wired
7066:(PDF)
6791:(PDF)
6760:(PDF)
6544:arXiv
6513:S2CID
6495:arXiv
6435:S2CID
6407:arXiv
6374:arXiv
6347:S2CID
6319:arXiv
6292:S2CID
6174:S2CID
6154:(PDF)
6018:(PDF)
5989:arXiv
5968:arXiv
5943:arXiv
5872:S2CID
5819:arXiv
5753:S2CID
5727:arXiv
5645:S2CID
5577:S2CID
5450:(PDF)
5443:(PDF)
5364:S2CID
5217:Wired
5112:(PDF)
5064:S2CID
5036:arXiv
4954:(PDF)
4943:(PDF)
4807:S2CID
4678:S2CID
4507:arXiv
4414:S2CID
4371:S2CID
4127:S2CID
4109:arXiv
4080:arXiv
4006:Bitch
4001:(PDF)
3884:(PDF)
3869:(PDF)
3685:(PDF)
3502:HRDAG
3313:S2CID
3220:arXiv
3173:arXiv
3075:arXiv
2978:S2CID
2958:(PDF)
2885:S2CID
2847:S2CID
2812:S2CID
2726:S2CID
2604:S2CID
2478:β20.
2404:S2CID
2386:arXiv
2346:S2CID
2334:(PDF)
2240:S2CID
2220:(PDF)
2139:S2CID
2087:S2CID
2067:(PDF)
2036:(PDF)
2029:(PDF)
1912:Wired
1840:S2CID
1820:(PDF)
1734:S2CID
1581:India
1404:DARPA
1176:to a
1003:Optum
987:Nikon
676:Types
581:data.
426:in a
223:Music
218:Audio
7944:Bias
7896:ISBN
7875:ISBN
7850:2022
7825:2019
7799:2018
7773:2018
7747:2018
7721:2017
7693:2017
7666:2017
7633:SSRN
7606:ISSN
7565:SSRN
7475:ISBN
7438:2023
7412:2023
7391:ISBN
7348:ISSN
7305:ISBN
7282:ISSN
7245:2021
7219:2020
7193:2020
7182:ISSN
7160:2020
7149:ISSN
7120:PMID
7102:ISSN
7047:2020
7017:ISSN
6964:ISBN
6937:2020
6911:2020
6886:2020
6857:2020
6831:2018
6818:IEEE
6799:2019
6780:ISSN
6708:PMID
6465:2017
6425:ISBN
6337:ISBN
6284:ISSN
6248:SSRN
6211:2020
6185:2017
6131:2017
6105:2017
6092:ISBN
6048:2017
5928:2024
5902:2024
5804:link
5786:OCLC
5776:ISBN
5745:ISSN
5700:2022
5674:2022
5637:ISSN
5606:2022
5569:ISSN
5534:2022
5508:2022
5483:2022
5458:2022
5421:PMID
5403:ISSN
5333:2019
5322:ISSN
5272:2019
5232:2019
5221:ISSN
5198:2017
5172:2017
5146:2017
5094:2023
5054:ISBN
4980:2020
4925:2017
4899:2017
4833:SSRN
4815:SSRN
4772:2017
4743:2020
4713:2020
4647:2019
4617:PMID
4599:ISSN
4558:PMID
4540:ISSN
4474:2017
4467:Time
4448:2017
4406:ISSN
4363:ISSN
4295:ISBN
4267:ISBN
4248:2020
4217:2020
4157:2022
4065:2017
4038:2017
3980:2017
3954:2017
3920:2017
3892:2017
3847:PMID
3829:ISSN
3782:PMID
3727:2017
3696:2017
3660:2019
3635:2019
3605:PMID
3587:ISSN
3538:ISBN
3509:2017
3483:2017
3424:2017
3395:2017
3366:2017
3338:link
3324:2017
3305:PMID
3262:2017
3148:ISBN
3111:ISBN
3051:2023
3040:ISSN
3017:2023
2989:2017
2940:2020
2914:2021
2804:ISSN
2764:2017
2678:2017
2634:2017
2568:2018
2542:2017
2531:PMID
2480:ISBN
2444:ISBN
2354:SSRN
2316:2017
2281:ISBN
2251:2019
2150:2017
2131:ISSN
2044:2017
2000:2017
1974:2017
1948:2017
1919:2017
1893:2017
1885:Time
1865:2017
1791:ISBN
1764:2017
1724:ISBN
1687:ISBN
1658:2018
1491:The
1447:and
1261:Bing
1104:CCTV
1050:and
959:STEM
668:and
506:and
496:data
442:and
7596:doi
7549:doi
7512:doi
7467:doi
7383:doi
7338:doi
7272:doi
7110:PMC
7094:doi
7007:hdl
6997:doi
6956:doi
6930:PwC
6772:doi
6698:doi
6686:559
6660:doi
6505:doi
6417:doi
6329:doi
6276:doi
6238:doi
6166:doi
5895:Vox
5862:hdl
5852:doi
5737:doi
5629:doi
5561:doi
5411:PMC
5395:doi
5356:doi
5295:doi
5291:OSF
5265:Vox
5046:doi
5007:doi
4973:Vox
4864:doi
4799:doi
4731:CNA
4670:doi
4607:PMC
4589:doi
4548:PMC
4398:doi
4355:doi
4328:doi
4119:doi
3877:127
3837:PMC
3821:doi
3809:489
3772:PMC
3762:doi
3750:112
3595:PMC
3579:doi
3528:doi
3476:Mic
3451:doi
3297:doi
3285:250
3230:doi
3140:doi
3103:doi
2970:doi
2877:doi
2839:doi
2794:doi
2718:doi
2596:doi
2521:PMC
2513:doi
2509:296
2396:doi
2338:doi
2232:doi
2123:doi
2079:doi
1832:doi
1716:doi
1679:doi
1046:in
213:Art
7915::
7815:.
7790:.
7763:.
7738:.
7683:.
7657:.
7645:^
7629:16
7627:.
7604:.
7590:.
7586:.
7563:.
7555:.
7545:34
7543:.
7539:.
7524:^
7508:17
7506:.
7483:.
7473:.
7465:.
7429:.
7399:.
7389:.
7381:.
7346:.
7334:33
7332:.
7328:.
7280:.
7268:11
7266:.
7262:.
7236:.
7210:.
7180:.
7176:.
7147:.
7141:.
7118:.
7108:.
7100:.
7090:28
7088:.
7084:.
7037:.
7015:.
7005:.
6993:22
6991:.
6987:.
6962:.
6928:.
6902:.
6876:.
6865:^
6848:.
6815:.
6778:.
6768:36
6766:.
6762:.
6706:.
6696:.
6684:.
6680:.
6578:.
6511:.
6503:.
6489:.
6473:^
6456:.
6433:.
6423:.
6415:.
6382:.
6368:.
6345:.
6335:.
6327:.
6313:.
6290:.
6282:.
6270:.
6246:.
6232:.
6228:.
6202:.
6172:.
6162:26
6160:.
6156:.
6139:^
6122:.
6069::
6065:.
6039:.
6020:.
6003:^
5919:.
5893:.
5870:.
5860:.
5848:37
5846:.
5842:.
5800:}}
5796:{{
5784:.
5751:.
5743:.
5735:.
5723:63
5721:.
5717:.
5691:.
5665:.
5643:.
5635:.
5623:.
5597:.
5575:.
5567:.
5557:38
5555:.
5551:.
5525:.
5499:.
5474:.
5419:.
5409:.
5401:.
5391:93
5389:.
5385:.
5362:.
5350:.
5320:.
5316:.
5293:.
5289:.
5263:.
5219:.
5215:.
5189:.
5163:.
5137:.
5120:81
5118:.
5114:.
5084:.
5062:.
5052:.
5044:.
5030:.
5003:26
5001:.
4997:.
4971:.
4945:.
4916:.
4890:.
4878:^
4860:40
4858:.
4852:.
4813:.
4805:.
4795:27
4793:.
4789:.
4763:.
4751:^
4729:.
4703:.
4699:.
4676:.
4664:.
4638:.
4615:.
4605:.
4597:.
4585:21
4583:.
4579:.
4556:.
4546:.
4536:51
4534:.
4530:.
4490:.
4465:.
4438:.
4426:^
4412:.
4404:.
4394:31
4392:.
4369:.
4361:.
4351:28
4349:.
4324:30
4322:.
4318:.
4281:^
4240:81
4238:.
4234:.
4193:.
4174:.
4148:.
4125:.
4117:.
4105:32
4103:.
4055:.
4028:.
4011:12
4009:.
4003:.
3971:.
3946:55
3944:.
3940:.
3928:^
3909:.
3875:.
3871:.
3845:.
3835:.
3827:.
3819:.
3807:.
3803:.
3780:.
3770:.
3760:.
3748:.
3744:.
3713:.
3687:.
3668:^
3651:.
3626:.
3603:.
3593:.
3585:.
3577:.
3565:.
3561:.
3536:.
3500:.
3474:.
3447:13
3445:.
3441:.
3415:.
3403:^
3386:.
3374:^
3355:.
3334:}}
3330:{{
3311:.
3303:.
3295:.
3283:.
3279:.
3253:.
3228:.
3214:.
3195:,
3171:,
3146:.
3134:.
3109:.
3097:.
3073:,
3059:^
3038:.
3034:.
3008:.
2997:^
2976:.
2966:29
2964:.
2960:.
2931:.
2905:.
2883:.
2873:33
2871:.
2859:^
2845:.
2835:24
2833:.
2810:.
2802:.
2790:41
2788:.
2784:.
2772:^
2750:.
2738:^
2724:.
2716:.
2702:.
2686:^
2668:.
2664:.
2642:^
2625:.
2602:.
2592:28
2590:.
2576:^
2559:.
2529:.
2519:.
2507:.
2503:.
2476:15
2458:^
2416:^
2402:.
2394:.
2382:38
2380:.
2366:^
2352:.
2344:.
2307:.
2295:^
2259:^
2238:.
2228:14
2226:.
2222:.
2177:^
2167:.
2137:.
2129:.
2119:16
2117:.
2113:.
2099:^
2085:.
2075:29
2069:.
2052:^
2008:^
1991:.
1965:.
1939:.
1910:.
1882:.
1838:.
1828:20
1826:.
1822:.
1805:^
1789:.
1755:.
1732:.
1722:.
1710:.
1685:.
1649:.
1323:.
1295:,
1147:.
1034:.
777:A
592:,
31:c.
29:,
7904:.
7883:.
7852:.
7827:.
7801:.
7775:.
7749:.
7723:.
7695:.
7668:.
7639:.
7612:.
7598::
7592:7
7571:.
7551::
7518:.
7514::
7491:.
7469::
7459::
7440:.
7385::
7375::
7354:.
7340::
7313:.
7288:.
7274::
7247:.
7221:.
7195:.
7162:.
7126:.
7096::
7049:.
7023:.
7009::
6999::
6972:.
6958::
6939:.
6913:.
6888:.
6859:.
6833:.
6801:.
6774::
6714:.
6700::
6692::
6666:.
6662::
6552:.
6546::
6519:.
6507::
6497::
6491:1
6467:.
6441:.
6419::
6409::
6390:.
6386::
6376::
6353:.
6331::
6321::
6298:.
6278::
6272:9
6254:.
6240::
6234:4
6213:.
6187:.
6168::
6133:.
6107:.
6073:.
6050:.
6024:.
5997:.
5991::
5976:.
5970::
5951:.
5945::
5930:.
5904:.
5878:.
5864::
5854::
5827:.
5821::
5806:)
5792:.
5759:.
5739::
5729::
5702:.
5676:.
5651:.
5631::
5608:.
5583:.
5563::
5536:.
5510:.
5485:.
5460:.
5427:.
5397::
5358::
5335:.
5301:.
5297::
5274:.
5234:.
5200:.
5174:.
5148:.
5096:.
5070:.
5048::
5038::
5015:.
5009::
4982:.
4927:.
4901:.
4872:.
4866::
4821:.
4801::
4774:.
4745:.
4715:.
4684:.
4672::
4649:.
4623:.
4591::
4564:.
4515:.
4509::
4494:.
4476:.
4450:.
4420:.
4400::
4377:.
4357::
4334:.
4330::
4303:.
4275:.
4250:.
4219:.
4197:.
4178:.
4159:.
4133:.
4121::
4111::
4088:.
4082::
4067:.
4040:.
3982:.
3956:.
3922:.
3894:.
3853:.
3823::
3815::
3788:.
3764::
3756::
3729:.
3698:.
3662:.
3637:.
3611:.
3581::
3573::
3567:8
3546:.
3530::
3511:.
3485:.
3459:.
3453::
3426:.
3397:.
3368:.
3340:)
3326:.
3299::
3291::
3264:.
3238:.
3232::
3222::
3175::
3156:.
3142::
3119:.
3105::
3077::
3053:.
3019:.
2991:.
2972::
2942:.
2916:.
2891:.
2879::
2853:.
2841::
2818:.
2796::
2766:.
2732:.
2720::
2704:9
2680:.
2670:2
2636:.
2610:.
2598::
2570:.
2544:.
2515::
2488:.
2452:.
2410:.
2398::
2388::
2360:.
2340::
2318:.
2289:.
2253:.
2234::
2171:.
2152:.
2125::
2093:.
2081::
2046:.
2002:.
1976:.
1950:.
1921:.
1895:.
1867:.
1846:.
1834::
1799:.
1787:5
1766:.
1740:.
1718::
1695:.
1681::
1660:.
408:e
401:t
394:v
304:/
Text is available under the Creative Commons Attribution-ShareAlike License. Additional terms may apply.