Risks and Pitfalls: Why Artificial Intelligence Is Not a Universal Good
Artificial intelligence is more than a chat toy that passes entrance exams or answers questions. It is not just about delegating routine tasks, or even about safety. I am writing this column just after everyone was eagerly asking ChatGPT what it knows about them. At first, such requests surprised me: one could instead ask about the history of an architectural monument, cultural differences between nations, or simply interesting facts. But then I remembered that intimate details of the Kardashian family's life excite the masses far more than the achievements of last year's Nobel laureates.
So we are dealing with a common human weakness, and by itself it does not hinder humanity's evolution. Rights for women and for people of different races, beliefs, and personal preferences: these are all healthy achievements of that evolution. The unlimited capabilities of artificial intelligence (AI), on the other hand, can cause real harm. What kind of harm exactly? Let's delve into it.
Gender bias from Amazon
In 2018, the Reuters news agency published a report revealing that Amazon had scrapped an experimental AI recruiting tool. Trained on ten years of résumés, most of which came from men, the system taught itself that male candidates were preferable and downgraded applications that mentioned the word "women's," as in "women's chess club captain."
Gender bias wasn't the only problem: sometimes the system proposed unqualified candidates for all kinds of positions. Amazon shut the project down, yet the daily news agenda keeps selling us the idea that AI is reliable. Unfortunately, it can make serious mistakes.
Such a system learns from a vast amount of historical information, and that history includes limited rights for women and people of other races, as well as nations that faced intolerance and even extermination. If humans do not correct for these biases, we face a regression of civilization. But who sets the tasks for the machine? What criteria are applied? And why should those criteria be considered fair?
Exclusive access to transportation
Since 2014, China has been experimenting with a social credit rating program. Drawing on vast amounts of personal data and facial recognition systems, the Chinese government can punish people for bad behavior and encourage "correct" actions.
The experiments most highlighted in the media may even seem noble. Individuals with unpaid bills or parking fines, or those engaging in socially unacceptable behavior, will find themselves cut off from certain services: most famously, millions of people with low ratings have been barred from buying plane and high-speed train tickets.
The list of undesirable behaviors includes playing loud music in public places, smoking in undesignated areas, purchasing an excessive number of video games (who determines what is "excessive" is a separate question), and posting fake news, particularly related to terrorism and airport security. It also encompasses eating on public transportation and violating traffic rules.
Furthermore, some students have reportedly been denied university enrollment, or even expelled, because their parents were on the debtor blacklist.
Obedient citizens also receive their rewards: better loan terms, discounts on utilities, deposit-free rentals, and faster access to government services.
Such a system scares me, because expelling a student from a university over suspicions about his parents' affairs is absurd. How exactly the algorithm works, what criteria it uses, and how it weighs them remains undisclosed. The next step could be punishing individuals for frivolous purchases, "wrong" social media posts, and who knows what else. And the worst part is that all of it is determined by a program that cannot foresee every circumstance and lacks intuition and empathy.
The court with a human face
Perhaps it was blind trust in algorithms that led the authorities of the Houston school district to a courtroom defeat against schoolteachers, a mistake that cost the budget $237,000. The desire to automate teacher evaluation led the experimenters to an AI-based program with, of course, a secret algorithm. For four years, the system rated teachers by their students' test scores, and those with low ratings risked dismissal.
Among those deemed ineffective were teachers praised by parents, loved by students, and recognized by professional communities. Eventually, the teachers formed an advocacy group and found a way to sue the local officials over these inadequate decisions. The judge accepted the affected teachers' arguments and ordered compensation to be paid.
Or take another case from the media, where the judge acted justly rather than by the book. A 96-year-old man was cited for speeding in a school zone; when he explained that he had been driving his disabled son to a medical appointment, the judge dismissed the ticket.
That is how humans are able to look deeper, and that is why the final word should be left to them. No system can account for every possibility, because humans have free will: to want or not to want, to do or not to do. That is their right as well. Of course, the decision to adopt one system or another also remains with humans. However, the experience of the recent pandemic, which at times resembled a social experiment, shows that people can be offered a choice without actually having one.
Legislative restrictions on AI training
The wave of enthusiasm for "the future that has already arrived" has subsided, yet naive assumptions about cutting costs on communicators still circulate. "The model," I keep hearing, "will write all our texts for us," as if editors and copywriters were already obsolete.
We will need to train ourselves to distinguish human text from robotic text. That may sound funny, because the imperfection and certain artificiality of the system's writing are still apparent. But let's not forget the passion with which the mass audience consumes the most obvious fakes and eagerly falls for the most primitive manipulations, to say nothing of magical thinking and the habit of shifting responsibility onto someone else.
We need laws to ensure that someone's experiments or profits do not turn emotional masses into a weapon of mass destruction. At the very least, we need rules for how these programs are trained, what motivates their creators, and what ethical constraints apply, so that we don't end up with another Hitler or a new wave of racial discrimination. What are we improving this tool for? What tasks do we plan to solve with this immensely powerful machine? And again the question arises: how will the general public safely interact with a system that is many times more intelligent than they are?
That's precisely what the open letter from the Future of Life Institute, signed by Elon Musk, Steve Wozniak, and thousands of other researchers and entrepreneurs, called for in the spring of 2023: a pause on training systems more powerful than GPT-4 until shared safety protocols are in place.
It's a sound and valid argument. In a world where people don't read agreements, the proliferation of powerful machines is a real danger. On the other hand, intelligent algorithmic systems can help humans, at the very least, by bringing order to their minds.
I resonate with a joke from the design community: "To use AI, the client needs to clearly formulate the request. So, designers will have plenty of work for many, many years to come." After all, even the hype-driven "toy" ChatGPT is interesting as an alternative information search engine. It can indeed perform simple tasks. But you need to clearly articulate what exactly you want to get as a result.
People can be emotional, inattentive, superficial, and tired. That's natural, and it shouldn't be used against them. That's why it is so important for the algorithms that make life easier to be transparent and open to scrutiny and discussion. Currently, this is not the case. It is therefore entirely justified to establish a legislative framework for such experiments, and to work on developing human intelligence, not just the artificial kind.