Artificial Intelligence vs Humans: discussing ethics and the digital "paradise"

By Marianna Konina

Risks and Pitfalls: why artificial intelligence is not a universal good


Artificial intelligence is not just a chat toy that passes entrance exams or answers questions. It is not only about delegating routine tasks, or even about safety. I am writing this column after everyone eagerly asked ChatGPT what it knows about them. At first, such requests surprised me: one could instead ask about the history of an architectural monument, cultural differences between nations, or simply interesting facts. But then I remembered that intimate details of the Kardashian family's life excite the masses far more than the achievements of last year's Nobel laureates.

So we are dealing with a common phenomenon, and one that does not hinder humanity's evolution. Women's rights, the rights of people of different races, beliefs, and personal preferences — these are all healthy achievements. The unlimited capabilities of artificial intelligence (AI), on the other hand, can cause harm. What kind of harm exactly? Let's delve into it.

Gender bias at Amazon

In 2018, Reuters reported that Amazon's AI-based recruiting tool discriminated against female candidates. The project had started back in 2014 with the goal of creating a tool that would rank candidates the way hotels are ranked — from one star to five. The system taught itself to prefer male candidates: it penalized CVs that contained the word "women's" — for example, "captain of a women's chess club" — and lowered the rating of graduates of women's colleges.

Gender bias was not the only problem: the system sometimes proposed unqualified candidates for all kinds of positions. Amazon shut the project down, yet the daily news agenda keeps selling us the idea that AI is reliable. Unfortunately, it can make serious mistakes.

Such a system learns from a vast amount of historical information, and that information reflects eras of limited rights for women and people of different races, as well as nations facing intolerance and even extermination. If humans do not correct for these distortions, we face a regression of civilization. But who sets the tasks for the machine? What criteria are applied? And why should those criteria be considered fair?
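To make the mechanism concrete, here is a minimal sketch in Python (with entirely made-up data, not Amazon's actual system) of how a screening model can "learn" bias simply by imitating past decisions:

```python
# A minimal sketch, using hypothetical data, of how a screening model
# trained on biased historical decisions reproduces that bias.

from collections import defaultdict

# Toy "historical" hiring records: (resume text, was the candidate hired?).
# The past decisions were biased: resumes mentioning "women's" were rejected.
history = [
    ("captain of chess club", True),
    ("software engineer, chess club", True),
    ("captain of women's chess club", False),
    ("graduate of women's college, software engineer", False),
    ("software engineer", True),
    ("women's debate team, software engineer", False),
]

# "Training": for every word, estimate the share of past hires among
# resumes containing that word. Nobody programs sexism in explicitly.
hired_count = defaultdict(int)
total_count = defaultdict(int)
for text, hired in history:
    for word in set(text.split()):
        total_count[word] += 1
        hired_count[word] += int(hired)

def score(resume: str) -> float:
    """Average historical hire rate of the resume's known words."""
    words = [w for w in resume.split() if w in total_count]
    if not words:
        return 0.5  # no signal either way
    return sum(hired_count[w] / total_count[w] for w in words) / len(words)

# Two candidates with identical skills; one resume contains "women's".
print(score("captain of chess club"))          # higher score
print(score("captain of women's chess club"))  # lower score, same skills
```

The low score for the second resume comes entirely from the biased labels in the training history, which is exactly the trap the Amazon project fell into.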

Restricted access to transportation

Since 2014, China has been experimenting with a social credit system. Drawing on vast amounts of personal data and facial recognition, the Chinese government can punish people for "bad" behavior and encourage "correct" actions.

The experiments most highlighted in the media may seem noble. People with unpaid bills or parking fines, or who engage in socially unacceptable behavior, get slower internet speeds, are barred from purchasing airline tickets, and face limited access to public transportation.

The list of undesirable behaviors includes playing loud music in public places, smoking in undesignated areas, purchasing an excessive number of video games (who determines what is "excessive" is a separate question), and posting fake news, particularly related to terrorism and airport security. It also encompasses eating on public transportation and violating traffic rules.

Furthermore, the monitoring system has denied access to higher education to some students who refused military service, and likewise to children of parents with low social credit ratings. One student was even expelled because his father did not pay his debts on time.

Obedient citizens receive their bonuses, too: discounts on utilities, car rentals or hotel bookings without a deposit, more favorable interest rates from banks and, unexpectedly, promotion of their profiles on dating websites.

Such a system scares me, because expelling a student from a university over suspicions about his parent's affairs is absurd. How exactly the algorithm works, what criteria it uses, and how it evaluates people remains undisclosed. The next step could be punishing individuals for frivolous purchases, "wrong" social media posts, and who knows what else. Worst of all, this is decided by a program that cannot foresee all circumstances and lacks intuition and empathy.

A court with a human face

Perhaps because of blind trust in algorithms, the authorities of one Houston school district suffered a defeat in court against school teachers. The officials' mistake cost the budget $237,000. The desire to automate teacher evaluation led the experimenters to an AI-based program with, of course, a secret algorithm. For four years, the system determined which teachers should be rewarded and which dismissed. People did not know how their scores were calculated and had no right or opportunity to challenge their dismissals or unfair evaluations.

Among those deemed ineffective were teachers praised by parents, loved by students, and recognized by professional communities. Eventually, the teachers formed an advocacy group and found a way to bring complaints against local officials over the flawed decisions. The judge accepted the affected teachers' arguments and ordered compensation to be paid.

Or take another case from the media, where a judge acted justly rather than strictly by the rules. A 96-year-old man was driving his son, who had cancer, to the hospital for procedures. Not noticing the signs, he exceeded the speed limit in a school zone. He was fined for the violation, since the offense had formally occurred, and an AI would have ruled the same way. At the hearing, however, the judge took the circumstances into account and dismissed the fine.

That is how humans are capable of looking deeper, and why the final word should be left to them. It is impossible to account for every possibility, because humans have the free will to want or not want, to do or not do. That, too, is their right. Of course, the decision to adopt one system or another remains with humans. However, the experience of the recent pandemic, which at times resembled a social experiment, shows that people can be offered a choice without actually having one.

Legislative restrictions on AI training

The wave of enthusiasm over "the future that has already arrived" has subsided, yet naive assumptions about cutting costs on communicators persist. "The model," I read, "can be applied to generate content for social media, enabling businesses to quickly and efficiently create high-quality content."

We will need to train ourselves to distinguish human text from machine text. That sounds funny, because the imperfection and a certain artificiality of the system's writing are still apparent. However, let's not forget the passion with which the mass audience consumes the most obvious fakes and eagerly falls for the most primitive manipulations. And that is before we even mention magical thinking and the habit of shifting responsibility onto someone else.

We need laws to ensure that such a system does not turn emotional masses into a weapon of mass destruction for the sake of someone's experiments or gains. At the very least, we need to establish rules for how these programs are trained, what motivates their development, and what ethics they embed — so that we do not end up with another Hitler or another wave of racial discrimination. What are we improving the tool for? What tasks do we plan to solve with this immensely powerful machine? And again the question arises: how will the general public manage to interact safely with a system that is tens of times more intelligent than they are?

That is precisely what the open letter from the Future of Life Institute, signed by leading AI researchers and Elon Musk, is about. "Should we let machines flood our information channels with propaganda and untruth?" the text asks. "Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

It is a sound and valid argument. In a world where people do not read user agreements, the proliferation of powerful machines is a real danger. On the other hand, intelligent algorithmic systems can help humans, at the very least by bringing some order to their minds.

A joke from the design community resonates with me: "To use AI, the client needs to formulate the request clearly. So designers will have plenty of work for many, many years to come." After all, even the hype-driven "toy" ChatGPT is interesting as an alternative way to search for information. It can indeed perform simple tasks, but you need to articulate clearly what exactly you want to get as a result.

People can be emotional, inattentive, superficial, and tired. That is natural, and it should not be used against them. This is why it is so important for the algorithms that make life easier to be transparent and open to scrutiny and discussion. Currently, this is not the case. It is therefore entirely justified to establish a legislative framework for such experiments and to work on developing human intelligence, not just the artificial kind.

Source: OBOZREVATEL
