DOI: https://doi.org/10.34069/AI/2023.72.12.2
How to Cite:
Stovpets, O., Borinshtein, Y., Yershova-Babenko, I., Kozobrodova, D., Madi, H., & Honcharova, O. (2023). Digital technologies
and human rights: challenges and opportunities. Amazonia Investiga, 12(72), 17-30. https://doi.org/10.34069/AI/2023.72.12.2
Digital technologies and human rights: challenges and opportunities
Received: November 8, 2023 Accepted: December 29, 2023
Written by:
Oleksandr Stovpets1
https://orcid.org/0000-0001-8001-4223
Yevhen Borinshtein2
https://orcid.org/0000-0002-0323-4457
Irina Yershova-Babenko3
https://orcid.org/0000-0002-2365-5080
Dina Kozobrodova4
https://orcid.org/0000-0001-8882-2364
Halyna Madi5
https://orcid.org/0000-0003-4817-4635
Olha Honcharova6
https://orcid.org/0000-0003-1025-375X
Abstract
Digitalization has revolutionized modern life, but it also presents complex challenges to human rights. The objective of this study is to consider the phenomenon of digital technologies in its complexity. In our research we relied mainly on the dialectical method, the systematic approach, the comparative-historical method, and the axiological approach. Our research highlights both a range of benefits and a variety of risks associated with the deployment of digital technologies into everyday life. Key concerns include data safety, manipulation of human consciousness, cyber-security threats, the 'digital divide', algorithmic biases, and authoritarian misuse of technology. Despite these challenges, digitalization offers opportunities for human rights advancement. We can also envision comprehensive social inclusion
1 Doctor Hab. in Philosophical Sciences, Professor of the Social & Humanitarian Studies department, Odessa National Maritime University, Ukraine. WoS Researcher ID: AAK-5150-2020
2 Doctor Hab. in Philosophical Sciences, Professor, Head of the department of Philosophy, Sociology and Management of sociocultural activities, The state institution “South Ukrainian National Pedagogical University named after K.D. Ushynsky”, Ukraine. WoS Researcher ID: HTR-3070-2023
3 Doctor Hab. in Philosophical Sciences, Professor of the Philosophy department, M.P. Drahomanov National Pedagogical University, Ukraine. WoS Researcher ID: HJG-3558-2022
4 PhD in Philosophical Sciences, doctoral student of the department of Philosophy, M.P. Drahomanov National Pedagogical University, Ukraine. WoS Researcher ID: HJC-8000-2022
5 PhD in Philosophical Sciences, Assistant Professor of the Social & Humanitarian Studies department, Odessa National Maritime University, Ukraine. WoS Researcher ID: HFE-7865-2022
6 PhD in Technical Sciences, Associate Professor, doctoral student of the department of Philosophy, Sociology and Management of sociocultural activities, The state institution “South Ukrainian National Pedagogical University named after K.D. Ushynsky”, Ukraine. WoS Researcher ID: B-5561-2019
in cyberspace, the promotion of digital literacy, further technological innovation, and robust ethical and legal frameworks safeguarding digital rights. Mindful AI deployment can enhance living standards, improve education and healthcare, and even extend longevity. Contemporary political systems must comprehend and regulate the power of digital technology, ensuring responsible governance: without safety protocols and reasonable limitations for AI-powered tools and technologies, human rights and freedoms remain at risk.
Keywords: digital technologies deployment,
algorithms, neural networks, ethics, laws, rights,
technology misuse.
Introduction
When we talk about human life, individual rights, and information rights today, we can no longer discuss them in the abstract, outside the context of the development of the latest digital technologies, including artificial intelligence. Artificial intelligence (algorithms, chatbots, neural networks) is becoming more powerful not every year, but every month. This rapid digitalization and the implementation of comprehensive cyber-technologies are directly connected with human rights discourse.
The relevance of this research is determined by the ambivalence of technology, especially in the digital field. Studying the latest digital technologies from the standpoint of social philosophy is important for several reasons; in particular, it has broader implications for society and its ethical, moral, and philosophical foundations. As philosophy often addresses ethical questions related to the impact of technology on individuals and society, researching the interaction of human beings with digital technologies allows us to assess their ethical implications, such as privacy concerns, surveillance, data ownership, and the responsible use of AI (artificial intelligence). Social philosophy often debates the idea of technological determinism, which posits that technology drives societal change. Researching digital technologies can contribute to this discourse by examining the extent to which technology shapes our values, culture, and social structures.
The object of this study is the phenomenon of
digital technologies, taken in its complexity,
variability, and multidimensional nature.
The purpose of this study is to analyze the range
of possible advantages, hazards, and challenges
of introducing the latest digital technologies into
modern life, taking into account the contradictory
nature of the innovative process.
The importance of this research, we hope, lies in its contribution to a better understanding of how the latest digital technologies may impact society, including issues like social inequality, unemployment, the cyber-gap, digital economy divides, democratic values and processes, public opinion manipulation, transparency problems, consumerism and new ecological thinking, and the potential for social improvement. It allows us to consider how technology influences power dynamics, social structures, and human relationships. We believe that researching digital technologies helps us examine how these technologies can empower or disempower individuals, affecting their choices and actions.
Literature review
The current knowledge of the topic is presented in a number of publications and contemporary social discussions. Within the array of publications pertinent to gaining a comprehensive grasp of the studied subject, it is important to acknowledge the works that center their focus on related
issues such as: machine learning, which underlies computational systems that are biologically inspired, statistically driven, agent-based networked entities that program themselves (Audry & Bengio, 2021); the foundations and concepts of deep learning (Bishop & Bishop, s.f.); deep-learning architectures and methods (Goodfellow et al., 2017); image processing and synthesis using deep learning (Ganin et al., 2019); deep neural networks for natural language processing (Lin & Bengio, 2019); organizational decision-making structures in the age of AI (Shrestha et al., 2019); human-centered artificial intelligence (Shneiderman, 2020); measuring neurophysiologic responses when people choose to trust algorithms (Alexander et al., 2018); the link between information processing capability and decision-making effectiveness (Cao et al., 2019); the intersection of AI, decision-making, and educational leadership (Wang, 2021); the use of big data and AI-embedded systems in various industries (Plantec et al., 2023; Svyrydenko & Stovpets, 2020); the role of trust in human-AI collaboration in managerial decision-making processes (Tuncer & Ramirez, 2022); opinions from AI architects on what the path toward human-level machine intelligence may be (Ford, 2018); and generative pre-trained neural networks and AI-human collaboration issues (Fui-Hoon Nah et al., 2023).
Among the aforementioned studies, there are a number of works from 2019-2023 that we consider impactful, as they raised issues that are further developed in our study. In particular, the study by G. Cao, Y. Duan, & T. Cadden investigates IT-enabled capabilities and the relationship between competitive advantage and the value, rarity, inimitability, and non-substitutability of information processing in the business realm. Using data collected from 633 UK companies, their study shows a positive relationship between the value, rarity, and inimitability characteristics of information processing and competitive advantage, which is partially mediated by decision-making effectiveness. Another study, by F. Fui-Hoon Nah, R. Zheng, J. Cai, K. Siau, & L. Chen, aimed to categorize generative AI challenges, which can be attributed to ethics, technology, regulations and policy, and economy. As the authors claim, many of these challenges arise from the lack of HCAI (human-centered AI); to be successful, generative AI needs to be human-centered, taking into account empathy and human needs, transparency and explainability, ethics and governance, and transformation through AI literacy. A recent book by S. Audry & Y. Bengio offers insightful questions and discussions on the progress of ML-based AI; this work may be of interest to engineers and computer scientists who examine the deep potential of ML (machine learning) in the realm of the arts. In this book, Y. Bengio suggests that human artists and ML tools, working in synergy, may be able to enter 'mental territories' where neither could easily go alone.
In addition to the sources we address in our literature review, in this research we try to raise some unexplored questions. Among them, we emphasize how the 'digital divide' may exacerbate socioeconomic disparities, and how 'emotionally calibrated' neural networks could be weaponized by non-democratic regimes to manipulate public opinion, adding another dimension to the threat landscape. To some extent, we develop the consideration of the 'algorithmic bias' problem, which may lead to discrimination in employment, education, housing, and service access (this discourse was seriously elevated by T. Baer in 2019, and we enrich it with some fresh examples).
Methodology

The methodology of this research is based on the dialectical method, the systematic approach, the comparative-historical method, and the axiological approach. Applied in a comprehensive manner, they contribute to the mental modelling of different future scenarios, which have the potential for fulfillment depending on the combination of certain technological and social factors.
Applying the dialectical method to the study of digital technologies can offer valuable insights and contribute to a deeper understanding of this rapidly evolving phenomenon. Using this method, we identify the following dialectical contradictions within the digital space:

- privacy vs. publicity (the tension between privacy concerns and increased public involvement becomes a significant dialectical aspect of the digital realm);
- inclusion vs. exclusion (digital technologies can both include and exclude individuals or groups; the dialectical method helps in examining the contradictory nature of inclusivity and exclusivity, shedding light on how technologies can simultaneously empower some while marginalizing others);
- centralization vs. decentralization (digital technologies, like blockchain, introduce a growing conflict between centralization and decentralization; the dialectical method can aid in examining how these opposing forces interact, what trends arise, and how they manifest in various digital contexts);
- order vs. chaos;
- freedom vs. total control;
- development vs. disruption (while digital technologies contribute to progress and innovation, they may also disrupt traditional industries and job markets; the dialectical method helps to identify and analyze these contradictions, allowing us to understand how they might be resolved).
A systematic approach to studying the phenomenon of digital technologies contributes to this research by providing a structured and organized framework for exploring such concepts as cybersecurity, communication, creativity, the social impact of AI, and its possible economic implications. A systematic approach ensures coverage of various aspects of digital technologies, including technological specifications, user behaviors, market trends, regulatory frameworks, and social consequences. It also encourages interdisciplinary exploration, ensuring that the research considers not only technological features but also related issues from sociology, anthropology, economics, ethics, and law.
The use of the comparative-historical method emphasizes the historical context of digital technologies' impact on human rights and studies their evolutionary path. Applying it to the study of digital technologies involves tracing the historical development of technologies, understanding the crucial contradictions at different stages of the history of civilization, and examining how they have shaped societal structures and norms.
The axiological approach used in this research makes it possible to look at the specific values embedded in, and associated with, digital technologies. Axiology focuses on ethical values and principles. In the context of digital technologies and human rights, this approach helps to evaluate the ethical dimension of the 'digital revolution'. It addresses questions related to privacy, security, transparency, and the responsible use of AI. To some extent, digital technologies reflect (and can even shape) cultural and societal values. The axiological approach allows us to explore how digital tools and platforms align with or challenge prevailing cultural norms. For example, social media platforms may influence communication styles and societal expectations, and axiology helps to assess these impacts.
Results and discussion
We frame our research, moving forward, around the positive aspects of Artificial Intelligence and around AI as a present threat. As many futurologists rightly say, there are two essential things to know about AI. Firstly, it is the first technology in history that can make decisions by itself. Secondly, it is also the first technology in history that can generate ideas by itself. Some developers try to calm us down by comparing it to previous technologies, where initial concerns faded away over time. However, AI is unlike anything we have seen before. Whether it was a stone axe or an atomic bomb, all previous tools empowered humans, because it was humans who had to decide how to use them (Bigman & Gray, 2018). But AI can make decisions by itself, so it can potentially take power away from humankind.
Additionally, previous information technology could only reproduce or spread human ideas: the printing press could print the Bible, but it could not write the Bible, nor could it provide a commentary on it. In contrast, systems like GPT can create entirely new commentaries on the Bible or any other topic. In the future, they might even create new holy texts for future religions. The irony is that humans have always fantasized about receiving holy scriptures from a superhuman intelligence, and now it is becoming possible (not from God above, but from a neural network). While there are many positive applications for this kind of power, there are also many negative ones (Xu et al., 2022; Fui-Hoon Nah et al., 2023). It is fundamentally different from anything we have encountered before (Agrawal et al., 2019). We shall try to draw attention to how the newest information technologies may show up in different spheres of life.
The first example is the election process. Tools derived from large language models can be used for propaganda, misinformation, and personalized trolling that could manipulate voters' decisions. Presently one may use, for example, such generative neural nets as "Midjourney" or "Craiyon", or the Google service named "Dream A.I"; any such instrument might be applied
for creating fake images aimed simply at discrediting political rivals and deceiving voters. That is an obvious hazard for democracy.
The second threat, which may arise a few years from now, concerns overcoming the lag between the current level of technological development in AI and human intelligence. If we build machines that are at least as intelligent as us, they will have inherent advantages due to their access to vast amounts of data and their digital communication bandwidth. This will enable them to acquire and share information much faster than humans. Eventually, this will have an impact on the dynamics of the decision-making process. Decision-making effectiveness mediates the link between the value, rarity, inimitability, and non-substitutability of information processing capability and competitive advantage (Cao et al., 2019: 124).
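To make the quoted mediation claim concrete, here is a minimal sketch, on synthetic data, of the regression logic behind such a finding: the total effect of information-processing capability (X) on competitive advantage (Y) splits into a direct path and an indirect path through decision-making effectiveness (M). The variable names, coefficients, and the ordinary-least-squares setup are illustrative assumptions; Cao et al. (2019) worked with survey measures and structural modelling, not this toy decomposition.

```python
# A minimal sketch (synthetic data) of mediation: X -> M -> Y, where a direct
# effect c' smaller than the total effect c indicates partial mediation.
import numpy as np

rng = np.random.default_rng(0)
n = 633  # matches the UK sample size; the data here is simulated

X = rng.normal(size=n)                                 # information processing capability
M = 0.6 * X + rng.normal(scale=0.8, size=n)            # decision-making effectiveness
Y = 0.3 * X + 0.5 * M + rng.normal(scale=0.8, size=n)  # competitive advantage

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(design, y, rcond=None)[0][1]

c = slope(Y, X)   # total effect of X on Y
a = slope(M, X)   # path X -> M
design = np.column_stack([np.ones(n), X, M])
_, c_prime, b = np.linalg.lstsq(design, Y, rcond=None)[0]  # direct effect c' and path M -> Y

print(f"total effect     c   = {c:.2f}")
print(f"indirect effect  a*b = {a * b:.2f}")
print(f"direct effect    c'  = {c_prime:.2f}  (c' < c, i.e. partial mediation)")
```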
This makes researchers think that even if we only uncover the same principles that give us our own intelligence, AI will surpass us in certain ways. We already observe this with technologies like ChatGPT, which in some ways is already smarter than us. Of course, neural nets possess more knowledge and also exhibit limitations, but this is just the beginning. Creating a 'new species' that is smarter than us would not bode well for us.
What matters in this technological transition is the timeline. If all the changes mentioned arrive over decades, we may have a chance to adapt society to AI. If they come within five years, it seems hopeless to prepare. Human societies are extremely adaptable; we are good at it, but it takes time. For instance, looking at the major technological transition, the Industrial Revolution, from the early 19th century until today, it took us many generations to figure out how to design relatively prosperous industrialized societies. Along the way, while building industrial societies, we had some terrible failed experiments, such as Nazism, Soviet communism, and Maoism, which resulted in the deaths of millions of people (Pokorny, 1993). These experiments were attempts to build functioning industrial societies, but they ultimately failed.
Now we are facing something even more powerful than trains, radio, television, and all the other inventions of the Industrial Revolution: the advent of AI. We all want to believe there is a new chance to organize safety and prosperity with AI, but it will require time and caution. We must ensure that we do not make the same mistakes as in the past, because with this kind of technology there will be no second chance for us. In fact, in the 20th century we managed to survive the failed experiments of the 'Second Industrial Revolution' only because the technology was not powerful enough to destroy us. Therefore, we must be extremely careful and take things more slowly when dealing with the potential consequences of AI. In addressing these immense issues, there needs to be a corporate and societal response, as well as a government response. Ultimately, it is the responsibility of governments to regulate this very dangerous development.
The problem is that the incentive system we have built works reasonably well for both industrial societies and liberal democracies. It is based on competition, and companies would not survive if they did not play that game, because another company would take their place. But there are also individuals in those companies who may think that ethics, human rights, and social values are important (Schrempf-Stirling et al., 2022), so humans can temper that profit-maximization incentive a bit; still, it is a very strong one. That is why it is hard to restrain the rapid evolution of AI technologies, especially in populous and centralized countries such as China.
What is the principal difference between the understanding of human rights in China and in the West? In most liberal democratic societies, issues of human rights, fundamental freedoms, democracy, and the rule of law are considered universal in nature and do not belong exclusively to the internal affairs of any particular country. This state of affairs derives from J. Locke's concept of justice, largely due to his ideas about the ethical unity of people. According to Locke, this unity is explained by the equality of all human beings, by virtue of belonging to the human race; therefore, each individual is guided by a single 'natural law' (Borinshtein et al., 2021: 260).
The Chinese government believes that each country has the sovereign right to set its own human rights standards within its state jurisdiction, as well as to interpret the degree of compliance with human rights standards in its country; and no one has the right to criticize anyone in relation to human and civil rights, because this, as they claim in China, would be 'interference into internal affairs' (Stovpets, 2020: 69). According to the Chinese government, countries should build mutually beneficial economic policies, cooperate on security, and respond to global threats, rather than teach each other about democracy and human rights,
because every nation has its own standard of human rights. This is important to keep in mind in order to understand the starting points from which human rights are evaluated and interpreted in different cultures and civilizations.
One could assemble many arguments and counter-arguments about the Chinese so-called "social rating system" (also known as the "system of social credit"), and there would be a variety of opinions: from the view that such comprehensive cyber technologies are the only possible instrument for keeping such a huge population as China's in order (Jinghua, 2019), to the view that contemporary China is turning into a true "cyber-prison" due to the specific features of Chinese cyber-security and data laws (Parasol, 2022). And even in the 'liberal world', what we have seen in recent years is that the political discussion is simply absent. If you look at the main issues that politicians are concerned with, that their parties are talking about, they are not discussing AI seriously, although it should be one of the top issues in every election campaign. This is not only about abstract existential dangers down the line; it is also about the immediate concerns of everyday life. It is about our jobs, about who makes the decisions influencing our lives. You apply to a bank for a loan, and increasingly it is an AI making the decision about your loan. You try to enter a university, or you apply to an employer for a job, and increasingly it is an AI making the decision. And if they rejected you, you do not even understand why. How were you evaluated, and who made the final decision? Maybe the AI was wrong?
Thus, AI should be regulated more. And when we talk about regulation, we need to differentiate between regulating the development of AI in a controlled realm, its research in laboratories, and regulating the deployment of AI products into the public sphere. Right now we need strict control over public deployment. There are some very simple rules we need to make; for instance, an AI must not counterfeit humans, meaning that if we are talking with someone, we need to be aware of whether it is a real human or an artificial intelligence. If we are not, public dialogue will fail and democracy will become impossible. Trying to convince a human to change their worldview and beliefs, and trying to do the same to an AI bot, are two different things: the latter would obviously be pointless. If you are having a discussion about the elections with somebody, and you cannot tell whether it is an AI or a real human, that is the end of democracy. For a human being, it makes no sense to waste time trying to change the mind of a bot, as it does not have a mind. But for the bot, every minute it spends talking with us, it gets to know us better. It even builds a kind of trust with a real person, and then it is easier for the bot to change that person's views.
We have known for a couple of years that there is a battle for attention going on in social media. Now, with the new generation of AI, this battlefront is shifting from attention to trust. If we do not regulate it, we are likely to end up in a situation where millions of hunting AI agents try to gain our sincerity and trust, because that is the easiest way to convince us to buy a product, vote for a politician, or whatever. And if we allow this to happen, it will lead to a new kind of manipulation. Just as you cannot release powerful new medicines or vehicles into the public sphere without going through safety checks and getting approval, the same should apply to AI. Yes, we do have legal acts about data, laws about communication, and legislation on information and personal data protection. But they were not designed to deal with some of the newest problems produced by advanced AI. Science and technology move on, the market changes, and we need a lot of agility from governments.
Another interesting question is how democratic systems will use AI technologies, and how they will be used by authoritarian or totalitarian systems. It is plausible that totalitarian systems will be much worse than democracies at regulating AI and keeping it under control. The traditional problem of totalitarian regimes is that they tend to believe in their own infallibility. They are convinced they never make mistakes, and they have no strong self-correcting mechanisms for identifying and correcting their own errors. And for a totalitarian regime, or some kind of super-powerful 'world government', the temptation to give too much power to AI, and then be unable to regulate it, will be almost irresistible.
And once a totalitarian regime gives power to an AI, there will be no self-correcting mechanism to point out the mistakes the system will inevitably make. It should be very clear that AI is not infallible. It has a lot of power and can process a lot of information, but information is not always truth (Handley-Miner et al., 2023). These two notions do not necessarily coincide. There is a long way from information to truth, and to wisdom. And if we give too much power to AI, it is bound to make mistakes.
Only democracies have the kind of checks and balances that allow them to try something and, if it does not work, to identify the mistake and correct it.
We obviously need to focus society's attention on all these problems. This is not about being alarmist. Rather, it is important to acknowledge that, aside from the long-term existential risk, many of our most immediate problems in the economy and society can significantly worsen due to AI. In particular, the job market should be a central concern for everyone.
Artificial intelligence will certainly not destroy all jobs, but it will eliminate some presently existing jobs while creating new ones. However, the transition and retraining of individuals will be challenging. It is important to remember that historical events, such as Hitler's rise to power, were influenced by prolonged periods of high unemployment, when for around three years up to 25% of people in Germany were unemployed. And even if we anticipate that in 20 years (though maybe we do not have these 20 years) the situation in the labour market will be better, we cannot ignore the immediate consequences of 20% unemployment during the transition period. As humans have legal and moral responsibilities over the design of machines, including robots (Shneiderman, 2020: 113), we must thoroughly calculate the risks to the labour market.
Regarding the job issue, the various camps make very different claims. Among them are people who suggest that a large fraction of jobs will be modified; a recent study from "OpenAI" and some academics indicates as much (Eloundou et al., 2023). This may lead to increased productivity, meaning we will either have fewer people doing those jobs, or we will do more with the same number of workers. So two options may arise: either a reduction in jobs, or their preservation alongside rising productivity. It is rather hard to predict these things precisely.
One of the arguments we have heard on the side of not worrying is that societies change slowly. Even when the technology for something exists, it might take years or sometimes decades for people to fully integrate it into society and for it to have a significant impact on the job market. We can only suppose that, once there is a system that essentially does the work better, such as the ability to manipulate language through email, social media, databases, and other tasks, those kinds of jobs could be done better fairly quickly in many sectors. Whether companies will adopt these changes abruptly or gradually is not easy to foresee. But if they do, we could potentially face all these transition problems.
Psychologically, it is hard to accept the thought that a bot or an AI may be coming for your job. Though it immediately grabs people's attention, the idea that there will no longer be any jobs for humans is simplistic. There will be many new jobs, but the transition is always difficult. How do we retrain people, especially taking global considerations into account? The AI revolution is being led by a very small number of countries that are likely to become extremely rich and powerful because of it, whereas it could destroy the economies of less developed countries. Take something like the textile industry: what happens to the economies of some populous countries when it becomes cheaper to produce textiles in Canada or the USA than in Brazil, Mexico, or even India or Bangladesh?
Will these countries really be able to retrain millions of textile workers to become web designers or digital developers? And who will pay for the retraining? Perhaps, in the advanced developed countries, the gains from the AI revolution will enable governments to cushion the blow for the people who lose their jobs and to help them retrain. But it will not happen in the same manner in heavily populated developing countries. Eventually, it may be like the Industrial Revolution in the 19th century, which led to a very few countries essentially conquering and dominating the whole world. This could happen again within a very short time, due to the automation revolution and the AI revolution.
And again, it is not just the economy; it is also the type of political control you can gain from harvesting and analyzing all the world's data. Previously, to control a country, you needed to send in soldiers or set up a military base there. Now, increasingly, you just need to take out the data. What happens to a country when all its personal records (medical records, tax codes, real estate documents, bank accounts, files with other sensitive information, phones and emails, the personal data of every politician, entrepreneur, journalist, judge, policeman, and military officer of that country) are held by somebody in, for instance, Silicon Valley or China? Is it still an independent country, or has it become a kind of data colony? So these are the immediate dangers
that should be clear to any citizen, no matter what
their views are on the long-term existential risks
of AI.
Of course, progress brings not only dangers but also benefits. Current technological progress is inseparable from the solving of socio-economic and ecological problems. It is commonly said that information is a powerful resource that can be transformed into knowledge and experience, into competitive advantage. And this was true for most of history, when there was very little information, and monopolists (whether shamans, magicians, high priests, or later state censorship) acted by withholding information and blocking its flow. But now we live in a very different era, in which we are bombarded with an enormous amount of information. We have too much of it, and sometimes we do not know how to make sense of it. Censorship works differently now, distracting people with too much information, irrelevant information, and misinformation. In this age, clarity is more important than ever, because we need to know what to focus on. Attention, sincerity, and trust are perhaps becoming the scarcest resources among all those associated with the human mind.
Let us recall two famous dystopian novels: Orwell's "1984" and Huxley's "Brave New World". If we look at the way information is treated in these two novels, we see that in Orwell's dystopia, information is constantly and brutally fabricated, rewritten, clipped. In Huxley's dystopia, the manipulation of information is more subtle: people are programmed from birth, their minds are filled with different information from the beginning, and each of the five castes is part of a single plan. The system described works in a manner that creates the impression that it understands you and appeals to your own passions and emotions. It works in such a way as to make you feel that it is "on your side". It is not an old-style structure (like the Gestapo, the KGB, or the Stasi), because in many cases the system gives the lasting impression that it is benevolent.
If we talk about today's smart technologies, about artificial intelligence, in many cases these systems actually understand us better than many people do, and can improve our lives in many ways. And that is where the temptation lies. In some cases, this becomes especially obvious if we take the health care system as an example. Even today, advanced technologies are capable of handling large amounts of data, recognizing photos, interpreting medical device readings, summarizing the information obtained, and evaluating symptoms; such systems already surpass the professional skill of a single doctor and are comparable in effectiveness to a whole council of doctors. Such systems make very accurate diagnoses with minimal error, thanks to causal machine learning (Richens et al., 2020).
Now let us imagine a technology that continuously monitors what is going on inside your body, and knows it better and more accurately than your conscious mind. Today, if people have a serious disease spreading through their body, they very often find out about it only when it is already a big problem, when they suddenly start feeling pain without knowing what it is. So they go to the doctor, get examined and tested, and then the doctors discover a serious disease at an already advanced stage, when it may be very difficult, rather painful, and extremely expensive to cure.
The alternative is a system that constantly monitors what is going on in your body and is able to detect that a serious disease is starting to spread in some part of it. While the disease is still in its early stages, the person does not feel anything, but the biometric sensors are already capturing the first clear signs that a problem is emerging, when it is still easy, cheap, and painless to get rid of.
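To illustrate the principle in code, here is a minimal, purely hypothetical sketch of such an early-warning check: each reading is compared against the bearer's own rolling baseline, and only a sustained deviation triggers an alert, well before any symptom is felt. The signal, window size, and thresholds are invented for illustration; real diagnostic systems rely on far richer models, such as the causal machine learning mentioned above (Richens et al., 2020).

```python
# Hypothetical early-warning sketch: flag sustained deviations of a biometric
# reading from the wearer's own rolling baseline (all parameters invented).
from collections import deque
from statistics import mean, stdev

def monitor(readings, window=30, z_limit=3.0, run_length=5):
    """Yield indices where a reading deviates from the personal baseline
    for `run_length` consecutive samples."""
    baseline = deque(maxlen=window)
    run = 0
    for i, x in enumerate(readings):
        if len(baseline) == window:
            mu, sigma = mean(baseline), stdev(baseline)
            z = (x - mu) / sigma if sigma > 0 else 0.0
            run = run + 1 if abs(z) > z_limit else 0
            if run == run_length:
                yield i  # sustained anomaly: prompt an early check-up
        baseline.append(x)

# Simulated resting heart rate: stable around 62 bpm, then a slow upward
# drift that the person would not consciously notice.
readings = [62.0 + (0.1 * (i - 60) if i > 60 else 0.0) for i in range(120)]
print(list(monitor(readings)))  # the alert fires a few samples into the drift
```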
It all looks great, so why would we want to block this kind of development? After all, the same logic could hypothetically be applied to quite different cases, wherever many life decisions have to be made, from routine matters to more serious questions: when to enter and exit a deal on the stock exchange, where to invest your money, which university to choose, what to study there. Sometimes, with our own minds, especially when we are young, we make very bad choices. And if artificial intelligence could have helped us do the right things in the past, it might have saved us a whole decade of our lives. So the problem with AI and the deployment of all these technologies is that there is a huge temptation for people to hand decision-making rights over to AI. The big question arises: how can we take advantage of AI, extracting its possible benefits, without suffering harmful consequences?
One more problem is that we now live in a situation where simply disconnecting ourselves completely from all devices would mean losing the competition to those who continue to use them. Refusing to use technologies, on either an individual or a social level, will not work, because we would then miss out on so many positive developments and prospects.
To trace this dialectic of progress, we may simulate two scenarios. For example, right now someone is wearing a ring, a smart bracelet, or another gadget that is actually a biometric device, measuring the bearer's heart rate, saturation level, blood pressure, various sleep states, blood glucose levels, and other parameters. The person considers this gadget a kind of advantage: by processing the data, an athlete can optimize their training, and an elderly person can maintain their health. This is an example of the convergence of information technology and biotechnology, which can affect the stability of our health, our quality of life, and our longevity.
But there is another crucial aspect: all this statistical information is just a part of the big data accumulated and stored on some server. Does the user of the gadget know who exactly is receiving the information this biometric device collects about them, and what will be done with it? If, for instance, this kind of information is collected by a large corporation or a government, and we have no idea what they are going to do with it, in some cases the consequences could be the darkest.
Here is one of the gloomy scenarios. Let us imagine that the action takes place not in an open democratic society but in a totalitarian state, where these rings or bracelets are mass-produced and every citizen is forced to wear such a gadget constantly, transmitting all information to a central database. Let us assume that, while collecting all these indicators, the smart gadget is able to interpret all changes in a person's physiological parameters: pulse, blood pressure, pupil dilation or constriction, hormonal surges, including levels of dopamine, serotonin, endorphins, noradrenaline, and cortisol; in other words, everything that, taken together, reveals an individual's emotional state. A person enters a room and sees the dictator's portrait on the wall, and the gadget on the bearer's wrist registers signs of anger, hatred, dissatisfaction, dislike towards the leader... The next stop is the Gulag, or an asylum, or prison... This is a pattern of anticipating the "enemies of the state" before any action is taken; not only before anything is committed, but even before any real action is contemplated. It is a classical 'mindcrime', or 'thoughtcrime' (Orwell, 1941), constituted simply by a spontaneous emotion.
If such a state observes enough of its citizens for an extended period, it can easily build a typical profile of a rebel or dissident, and begin to "fix" their minds while they are still in kindergarten. Such a state does not need to wait for them to grow up and pose any danger or inconvenience to the system. So if we want to dive into dystopia, technology gives us plenty of options. We can only imagine what Stalin, Mao, or Pol Pot would have done with such biometric technology. Artificial intelligence solves a problem faced by many dictators of the past: keeping citizens under surveillance used to be very difficult and expensive, because it required a large staff of wardens, secret police, and other security services. Now neural networks can do this, quickly and relatively cheaply. And this is despite the fact that the current stage of the technology is, according to futurists, still in its early years. If the dystopian example described above begins to unfold, and such societies turn out to be more technologically and militarily powerful, it will mean the end of humanism as we understand it today.
In any case, the introduction of the newest technologies will bring a series of transformations (Harari, 2018). Market forces, the political landscape, orientation in the world, the path to career success, earning a living: all of this, apparently, will be changing. There are many issues that philosophers, psychologists, sociologists, lawyers, and economists will have to explore, including the questions of irrelevance and uselessness arising from the technological revolution. If in the 20th century the main struggle was against exploitation, in the 21st century the main struggle may be against the "irrelevance of humans". People may find themselves simply unnecessary, and the struggle against irrelevance will be much harder than the struggle against exploitation.
We have no idea yet what human life will look like when algorithms make more and more decisions on our behalf. For thousands of years, religious, political, and artistic traditions have described life as a drama of decision-making. Whether it is a play by Shakespeare, a novel by Goethe, Dostoevsky, or Márquez, a Hollywood comedy, or a book on theology, they all tell of life as a kind of journey in which
we make decisions at crossroads. We make simple and complex choices in our daily lives: how to structure our day, what to eat for lunch, whom to vote for, whom to marry, what career to choose, whom to fight against, and so on. All this drama revolves around making the right decision. Throughout all previous history, this has been the monopoly and prerogative of humans. It was our burden and our privilege at the same time.
At the same time, we are now entering an era in which automation is developing at a crazy pace. There is already a lot of talk about jobs and, probably in the near future, about lost jobs, but the problem is multilayered: it is socio-economic, geopolitical, psychological, and demographic. Take ChatGPT or its analogues: this technology is expected to shut down many current jobs, leaving out of work many people whose intellectual work can be automated.
But we should look for examples in recent history. Do we recall what profession was very common and widespread in the pre-electric era? It was the job of torchmen and lamplighters, the people responsible for street lighting in the cities of Europe, and later of America. And it was a huge business: sticks, tow (hemp or flax fiber, also called oakum), tinderboxes for making fire, oil, tar, and later kerosene and kerosene containers. More than three hundred thousand people were employed in this torch-lighting industry: those in charge of manually lighting and extinguishing street torches, lamps, and lanterns in the evening and in the morning, and of course those involved in producing all the components. So there was a torch-lighting industry, and separately a huge candle industry (Frederic Fournier's candle factory in Marseilles was, in 1836, the largest in the world).
But at a certain moment, in the late 19th century, Edison appeared in the United States with his improved incandescent light bulb. The invention of this type of electric lamp dealt a devastating blow both to the torch-lighting business and to the candle industry. Within a few years, the majority of torchmen had lost their jobs, as their services were not needed for the maintenance of electric lanterns. But in place of the three hundred thousand people employed in the torch-lighting business, there emerged an enormous electrical industry, which provided nearly ten million new jobs at the time. It was not just the production of light bulbs, but also electric wires, the building of power plants, transformer units, and power lines; not only electricians and installers, but also engineers, scientists, factory workers, and the university professors who taught the sciences related to electricity.
The second example is even more illustrative. As we all know, about 150 years ago the most important means of transportation was horse-drawn transport. It, too, was a tremendous business: horses, carriages, chariots, phaetons and cabs, forges, horse fodder, stables and taverns, coachmen and cabmen, and so on. But horse-drawn journeys took too much time, and evolution could not tolerate such slow movement through space. Then 15% of the smartest coachmen realized that, with the coming of the internal combustion engine, they should become car drivers. And so there was a kind of phase transition, leading to changes in related areas.
But perhaps it will not be like that now, because the new technological transition will happen too quickly and will be many times more extensive. It would not be quite correct to extrapolate directly from previous experience to the near future, because technological progress is not linear. There is little we can say for certain today about the labor market in 2050 and how it will affect future generations. Of course, the most simplistic scenario is that robots will soon come, take all the jobs, and leave us with nothing to do except live on a so-called "unconditional basic income". For some people in some countries, that may be the case. In many countries today, the economy depends primarily on cheap manual labor from people working in workshops and in factories and plants with a low level of automation. The economies of these countries may collapse or be seriously disrupted. At the same time, in other countries, such as the United States or South Korea, many jobs will disappear, but many new ones will arise.
The possible scenario of textile manufacturing moving from Turkey, Mexico, or Bangladesh back to technologically developed countries has been brought up before. Now you may have 3D printers and robots whose labor is much cheaper than that of people in many developing countries. A country like Bangladesh may then find itself in big trouble, while in the U.S. you might even get more jobs (though not in handmade textiles). These new jobs might emerge in data processing or code writing, because the critical resource in the textile industry of the near future will be the personal data of customers, on the one hand, and the software code used in manufacturing, on the other.
After the total automation of the textile industry, producers will need a lot of data about their customers and about what they want, i.e., their biometric parameters, their individual aesthetic preferences, their 'consumer portrait'. Producers will then be able to create a shirt specifically for each particular customer's torso. You do not have to rely on mass production as in the days of industrial society, and you can "print" that individual piece of clothing somewhere in the United States, not far from your client, without shipping it from Asia in containers. But you really need well-trained people who deal with data, both personal data and big data. So there may be new jobs in the most technologically advanced countries, while the most serious socioeconomic problems, caused by the loss of industries due to relocation, will most likely occur in countries like Bangladesh. These are the places most vulnerable to automation. And the profits from automation will go not to Dhaka, but to California, or Texas, or Vancouver!
The truth is that these jobs in developed countries will also gradually face automation and will change very quickly. The situation in the global labor market will be extremely tense and unsteady. The automation revolution is unlikely to be a single major turning point; it will come in waves, leading to the disappearance of many old types of jobs and the emergence of many new ones. We will probably see a few years of turbulence, and then everything will come to a new equilibrium. And every 10 years or so, it will happen again, because artificial intelligence, as futurologists say, is not even close to unlocking its full potential. A recently conducted survey finds that scientists are concerned, as well as excited, by the increasing use of artificial-intelligence tools in their work (Van Noorden & Perkel, 2023).
So every 10 years or so we might lose our jobs, or our jobs will be completely transformed by a new wave of advances in machine learning. If we want to stay in the game, we will have to reinvent ourselves repeatedly. As medicine advances and life expectancy increases, people will be retiring at an ever older age. Thus, we may have to reinvent ourselves several times in the course of our lives. The idea of having not only the same job for life, but even one profession for life, is in most cases losing its former relevance.
Here we discover the relationship between the human right to work, to engage in productive employment, and to achieve self-realization, on the one hand, and the technological imperatives of the information civilization, on the other, which demand far more sophisticated skills of adaptability, retraining, and qualification improvement than before. Some possible measures for educational improvement were examined earlier (Borinshtein et al., 2022: 152).
Developmental psychology tells us that professional adaptability decreases with age. People of different generations do not have similar abilities to adapt to technological change. Qualities such as endurance, adaptability, and emotional intelligence are becoming decisive in ways we have never seen before, which calls our entire educational system into question. More and more often, across the globe, it is admitted that the education system has seriously eroded and is not adapted to the realities of the 21st century. But we do not yet have a full-fledged alternative model; in fact, we need a solution applicable worldwide.
Increasingly, there is the following temptation: if we cannot improve education, let us take it out of human hands and put it in the hands of algorithms. But then we will have completely new problems. Some of them are still a legacy of human thinking, because people very often develop algorithms with their own human biases and prejudices embedded inside, without even realizing it.
There is a concept named "algorithmic bias" (Baer, 2019), which may exist even when the algorithm's developer has no intention of discriminating. Nevertheless, by carefully using extensive statistical data on the purchase of different kinds of services and goods by certain groups of users, an algorithm may end up recommending a particular product or service to a very homogeneous group of consumers and not recommending it to other groups (for example, recommending expensive colleges to potential white students, but not offering such a product to black students, because statistically they are not considered solvent enough to buy a service like an expensive college education). This unintentional discrimination stems from the analysis of big data, the effective processing of which makes it possible to increase the average purchase value, sales volume, and conversion rate through personalized offers created on the basis of knowledge about users.
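A minimal sketch can make this mechanism concrete. In the hypothetical snippet below, a recommender 'trained' on nothing but historical per-group purchase frequencies ends up showing an expensive service to one group and hiding it from the other, although no discriminatory intent is coded anywhere; the data, group labels, and threshold are all invented for illustration.

```python
# Illustrative sketch (hypothetical data): how a recommender built on
# historical purchase statistics alone can discriminate without intent.
import random

random.seed(42)

# Synthetic history: one group historically bought the expensive service
# more often (e.g., because of past income disparities).
def make_history(group, n, rate):
    return [{"group": group, "bought": random.random() < rate} for _ in range(n)]

history = make_history("A", 1000, 0.30) + make_history("B", 1000, 0.05)

def purchase_rate(group):
    rows = [r for r in history if r["group"] == group]
    return sum(r["bought"] for r in rows) / len(rows)

# 'Neutral' rule: show the offer only where expected conversion exceeds a
# profitability threshold, a proxy for solvency rather than any judgment
# about individuals.
THRESHOLD = 0.10
def recommend(group):
    return purchase_rate(group) >= THRESHOLD

for g in ("A", "B"):
    print(f"group {g}: historical rate = {purchase_rate(g):.2f}, "
          f"sees the offer: {recommend(g)}")
# The disparity lives entirely in the data: group B never sees the offer,
# which is exactly the unintentional 'algorithmic bias' described above.
```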
In addition to so-called "algorithmic bias", among the new types of problems is the fact that the decision-making process is becoming completely non-transparent to humans. More and more power will be concentrated in non-human hands, in the hands of these algorithms. We are already seeing this happen, for instance, in the global financial markets, where a great many transactions are made by algorithmic trading robots, i.e., artificial intelligence. And very often even the best human experts cannot explain what is going on and why the algorithm offers one particular solution and not another.
We have considered only a fraction of the challenges that humanity will face in the course of ongoing digitalization. But they are among the most explicit, and illustrative enough to help us realize the scale of the possible problems and the prospects for mitigating them. These issues should become the subject of further research.
Conclusions
The 'digital age' has brought about numerous advancements, yet it has also introduced complex challenges into modern life. Contemporary challenges to human rights in the context of digitalization are mostly connected with data protection and security, privacy concerns, the 'digital divide' and inequality, algorithmic bias and discrimination, unpredictable changes in the labour market, and the misuse of technological capabilities by authoritarian regimes.
Among the main hazards that digital technologies may bring, several seem to us the most probable. In particular, the collection, storage, and use of personal data by governments and corporations have raised reasonable concerns about the right to privacy. Technologies like facial recognition, biometric data collection, and pervasive surveillance threaten individual privacy rights. Balancing the right to freedom of expression with the need to regulate harmful content online poses another significant challenge. Cyber-security threats, data breaches, and the commoditization of personal data create vulnerabilities that can lead to violations of individuals' rights. Emotionally pre-trained neural networks could be used by non-democratic regimes to manipulate individual consciousness and public opinion. The 'digital divide' finds its expression in unequal access to the newest technologies, which exacerbates existing socioeconomic disparities between countries and within them. Automated decision-making systems and algorithms can entrench biases, leading to new types of discrimination in areas such as employment, education, housing, and access to services. We still have no comprehension of what will happen when the decision-making process is overwhelmingly shaped by artificial intelligence algorithms, according to how those algorithms understand the world. There is a risk of losing control over AI after a certain point; the danger lies in the lack of transparency.
To mitigate the hazards mentioned above, several priority measures could be taken at the government level:
- countries, corporations, and international organizations should collaborate to establish common standards and regulations for digital technologies; this can facilitate a more consistent and effective global approach to reducing the hazards associated with the rapid evolution of such technologies;
- governments should enact and enforce robust data protection laws that regulate the collection, storage, and use of personal data by both public and private entities;
- countries should invest in cyber-security measures to protect against cyber threats, data breaches, and unauthorized access; this includes regular security audits, encryption standards, and incident response plans to minimize the impact of cyberattacks (for now, it is not AI itself but rather 'bad human actors' that are the menace to cyber-security);
- ethical guidelines and standards for the development and deployment of artificial intelligence must be established, and governments should encourage responsible AI use in the public and private sectors;
- independent oversight and regulatory bodies must be established and made responsible for monitoring the implementation of digital technology policies; these bodies should have the authority to investigate complaints, enforce regulations, and adapt policies to emerging challenges;
- algorithmic accountability must be ensured; society needs to introduce mechanisms that hold organizations accountable for the algorithms they use, including transparency requirements, audits to identify biases (see the sketch after this list), and mechanisms for individuals to challenge decisions made by automated systems;
- governments have to allocate resources for research on the societal impacts of digital technologies, and invest in educational programs to increase public awareness; well-informed citizens are better equipped to
understand the risks and benefits of
technology, and participate in shaping
regulatory frameworks;
- policies to reduce the 'digital divide' should be implemented, ensuring equitable access to technology and cyberspace; this may involve infrastructure development, subsidies for underprivileged communities, and initiatives to promote digital literacy.
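To make the notion of an algorithmic audit less abstract, the following minimal sketch in Python (our illustration, not a measure prescribed by the list above; the decision log, the group labels, and the 0.1 tolerance are all hypothetical) shows how an auditor might quantify one widely used fairness criterion, the demographic parity gap, from the logged outputs of an automated system:

    # Minimal, purely illustrative sketch of an algorithmic bias audit.
    # Assumption: an automated system's decisions are logged as
    # (group, favourable_outcome) pairs; the data and the 0.1 tolerance
    # below are hypothetical, not drawn from this study.
    from collections import defaultdict

    def demographic_parity_gap(decisions):
        # Count total decisions and favourable outcomes per group.
        totals, positives = defaultdict(int), defaultdict(int)
        for group, favourable in decisions:
            totals[group] += 1
            positives[group] += int(favourable)
        # Favourable-outcome rate per group, and the widest gap
        # between any two groups.
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical decision log of an automated hiring tool.
    log = [("group_a", True), ("group_a", True), ("group_a", False),
           ("group_b", True), ("group_b", False), ("group_b", False)]

    gap, rates = demographic_parity_gap(log)
    print(rates)                     # group_a: ~0.67, group_b: ~0.33
    print("audit flag:", gap > 0.1)  # True: gap exceeds the chosen tolerance

An audit of this kind merely flags a statistical disparity; judging whether such a gap amounts to discrimination, and what remedy it requires, remains a legal and ethical question of the sort discussed throughout this paper.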
Among the undoubted benefits of digitalization for human rights, we expect comprehensive digital inclusion, the protection of online freedoms, the promotion of digital literacy, the fostering of ethical and responsible technological innovation, and the establishment of robust legal frameworks that safeguard individuals' rights in the digital sphere. A mindful approach to AI deployment could provide better living standards, education, healthcare, and even greater longevity.
The aim of every contemporary political system is to comprehend and regulate the explosive potential of technology and to provide its responsible governance. It makes little sense to speak of human rights and freedoms unless safety protocols and reasonable limitations are designed for AI-powered tools and technologies, at least until these are studied well enough.
The implications of rapid digitalization for human rights, and in particular the impact of artificial intelligence on decision-making, need to become the object of a separate detailed study, which will continue our research on the phenomenon of digitalization.