What the eye doesn’t see and the mind doesn’t know, doesn’t exist.
D. H. Lawrence
But what if the coders working on artificial intelligence (AI) aren’t conscious of their own biases, or of how their code will sail off into areas of bias? Coders are steeped in a culture, and a belief, that bias cannot exist in their algorithms because the logic is sound.
Writing thousands of lines of code can be mind-numbing, and it favors a kind of incipient bias burnout. What we are seeing is AI’s imperfection. The thing to keep in mind is that AI is dumb; we give it an “intelligence” of a sort. We do that, but the programs also teach themselves, without our governance.
If coders never stop to consider the subtle implications of what they are teaching machines to do, what to recognize, and where to make decisions, what happens? Human fallibility is coming up against mighty machines, and the machines may be smarter than we are.
We have managed to create our own modern-day Frankenstein monster. Mary Shelley provided a cautionary tale of technology without the needed restraints. The beast, the little girl, and the pond, a scene made famous by the 1931 film, were warnings.
The Four Challenges of Bias
“Bias in the machine learning model is about the model making predictions which tend to place certain privileged groups at the systematic advantage and certain unprivileged groups at the systematic disadvantage.”
AI, with its inherent, hidden data biases, has already affected careers, interview prospects, mortgage applications, and criminal cases. It has disrupted lives. What are the hidden biases found so far?
Four key areas where AI has shown bias are the following:
1. Bias built into data — if the number of arrests is used to decide on sentencing recommendations, race plays a role, because race heavily skews those arrest records.
Employment history is used to determine creditworthiness, and race, again, may have a significant impact here, too. In the area of employment, the role of sexism in corporate compensation is evident: salary histories are requested, and gender plays a seminal role in salary determinations. The history is biased, yet the algorithm picks it up as unbiased data.
2. AI-induced bias — Algorithms learn and change, and the biases initially present in their data are then further amplified through the other decisions these programs make. The mix becomes large and sophisticated and may be more complicated than anything intended initially. The programs decide to modify themselves on their own, with no human input (a toy simulation after this list shows how the effect compounds).
3. Teaching AI social rules — Algorithms trained in one context with one data set may be transferred and merged with a different data set. If there is no change in the “understanding” of which kinds of decisions are acceptable, a problem may arise.
Factors such as gender, age, and, as mentioned, employment history can all carry over into the new context. Integrating the sets without analyzing the result may leave the problems of the first data set hidden.
4. Suspected cases of AI bias — Bias may not be easily parsed out of the programs in order to detect which underlying data sets are flawed. The need for additional programs that can perform such an objective audit is evident (a sketch of one such audit appears below).
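To see how the amplification described in item 2 can happen, consider a toy simulation. The Python sketch below is purely illustrative: the figures and district names are invented, and it models no real deployed system. Two districts behave identically, but the historical record slightly overstates one of them. Because the program allocates attention in proportion to recorded incidents, and attention in turn generates more records, the initial skew compounds round after round.

# A toy feedback loop: a small skew in historical data compounds as the
# model's own decisions generate the next round of training data.
# All figures and names below are invented for illustration.

# Two districts with identical true behavior, but the record slightly
# overstates district_a at the start.
recorded = {"district_a": 0.12, "district_b": 0.10}

for round_num in range(1, 6):
    total = sum(recorded.values())
    # The "model" allocates attention in proportion to recorded incidents.
    attention = {d: r / total for d, r in recorded.items()}
    # More attention yields more recorded incidents, which become the
    # training data for the next round. No human reviews the loop.
    recorded = {d: recorded[d] * (1 + attention[d]) for d in recorded}
    ratio = recorded["district_a"] / recorded["district_b"]
    print(f"round {round_num}: recorded-incident ratio a/b = {ratio:.2f}")

Nothing about district_a ever changes; only the data about it does, and the gap widens anyway.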
“While building models, product managers (business analysts) and data scientists do take steps to ensure that correct/generic data…have been used to build (train/test) the model, the unintentional exclusion of some of the important features or data sets could result in bias.”
Perhaps everyone in AI needs to remember that old computer credo: Garbage in, garbage out.
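One way to check whether garbage has gone in is to measure what comes out. The Python sketch below is a minimal, hypothetical audit of the kind item 4 calls for: the applicant records and group names are invented, and a real audit would run over a model’s actual decisions. It computes the selection rate for each group and applies the “four-fifths rule,” a common screening heuristic under which a ratio below 0.8 flags possible disparate impact.

# A minimal outcome audit: compare selection rates across groups.
# The records below are hypothetical; a real audit uses production data.

hypothetical_decisions = [
    # (applicant_group, hired)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rate(records, group):
    # Fraction of applicants in `group` who received a positive outcome.
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = selection_rate(hypothetical_decisions, "group_a")
rate_b = selection_rate(hypothetical_decisions, "group_b")

# The four-fifths rule: a ratio under 0.8 is a signal to dig deeper,
# not proof of bias on its own.
ratio = rate_b / rate_a
print(f"selection rates: a={rate_a:.2f}, b={rate_b:.2f}, ratio={ratio:.2f}")
if ratio < 0.8:
    print("Possible disparate impact; the underlying data needs review.")

Such a check is crude, but it turns “suspected” bias into a number that can be tracked.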
A Question of “Artificial” Ethics?
A new term, robot ethics or roboethics, deals with the benefits and the harms that may result from these AI machines. Once we have ethics in place for AI, do we also need to consider their “rights” to exist and perform their mission? The Institute for the Future is engaged with these once inconceivable matters; one of its tasks is the design of an ethical operating system.
Dr. Jane McGonigal, Director of Game Research and Development at the Institute for the Future, said, “Silicon Valley risks falling into a long period of ‘Post-Traumatic Innovation’ in which our imagination is limited to solving the problems of the past instead of preventing the problems of the future.”
Another chestnut comes to mind: “Act in haste, regret at leisure” would be an apt reference to the AI puzzle.
AI’s Future
The industries identified as most likely to be affected by AI bias include banking, insurance, employment, housing, fraud detection, government, education, and finance. Shouldn’t science also be included? Selection for research protocols, as well as clinical-trial analysis, may not be free of bias. Permitting AI to select subjects and analyze findings may itself introduce problems into research results.
All of the above are areas where individuals may suffer personal, financial, or career loss because of the bias in the data. Incorrect identification can be one result: one man whose name was similar to a terrorist’s was refused passage on an airplane because the AI identified him as that terrorist.
Individuals have been denied jobs and mortgages, charged higher insurance premiums, and refused government benefits because of AI bias.
Even school admissions can be skewed toward rejection rather than acceptance by a model’s bias. The damage is inestimable.
Elon Musk believes that AI will need careful consideration to avoid an unintended scenario. In the documentary Lo and Behold: Reveries of the Connected World, Musk provided an example: if a hedge fund manager directed an AI to optimize the return on a portfolio, the AI could short specific funds, “go long defense stocks and start a war.” Without our watchfulness, it would operate absent any ethical constraints.
The future is here, and it is not crystal clear. The task outlined here is one we must prove ourselves equal to.