Positive expectations associated with scientific and technological progress are combined with clearly perceived threats of the approaching future, which are described and analyzed by representatives of various discourses – from the mass media to academic and political circles. The attention of the academic community to this range of problems is evidenced by the active discussion that took place at the panel discussion “Malicious use of artificial intelligence and international psychological security” of the UNESCO Conference in Khanty-Mansiysk and continued at the eponymous research seminar at the Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation. The Association of Studies, Research and Internationalization in Eurasia and Africa – ASRIE (Rome) supported both events in partnership with other organisations and institutions.
The II International Conference “Tangible and Intangible Impact of Information and Communication in the Digital Age” was held within the framework of the UNESCO Intergovernmental Information for All Programme (IFAP) and XI International IT Forum with the participation of BRICS and SCO countries in Khanty-Mansiysk on June 9-12, 2019.
Several academic institutions provided support for the event: the International Center for Social and Political Studies and Consulting (ICSPSC), the European-Russian Communication Management Network (EU-RU-CM Network) and the Russian – Latin American Strategic Studies Association (RLASSA).
The conference was supported also by the Institute for Political, Social and Economic Studies – EURISPES (Rome), the Geopolitics of the East Association (Bucharest), the International Association “Eurocontinent” (Brussels) and the International Institute for Scientific Research – IIRS (Marrakech).
The Governor of Ugra Natalia Komarova took part in the conference. Opening the panel discussion “Malicious use of artificial intelligence and international information and psychological security”, she stressed that, with the mass involvement of people in the global communication space, the subtle ideological orientation of society takes on particular importance. According to Natalia Komarova, there is a need to ensure psychological security.
With the academic support of the EU-RU-CM Network, the conference was attended by its coordinators and network members: Darya Bazarkina (Russia), Evgeny Pashentsev (Russia), Olga Polunina (Russia), Marco Ricceri (Italy), Gregory Simons (Latvia/New Zealand/Sweden), Pierre-Emmanuel Thomann (Belgium) and Marius Vacarelu (Romania).
At the opening of the conference Evgeny Pashentsev – Leading Researcher, Diplomatic Academy of the Ministry of Foreign Affairs of the Russian Federation; Director, International Centre for Social and Political Studies and Consulting (Moscow, Russia); Coordinator of the European-Russian Communication Management Network (EU-RU-CM Network); Senior Researcher, St. Petersburg State University – presented a paper on Artificial Intelligence: Current and Promising Threats to International Psychological Security. He noted that international security today is under threat due to destructive processes in the economic, social, military and other spheres of public life. Negative processes are developing at the national, regional and global levels, and it is essential that all sectors of society have an adequate understanding of the existing problems.
International psychological security (IPS) means protecting the system of international relations from negative information and psychological influences associated with various factors of international development. The task today is to repel threats from real and constantly developing “weak” artificial intelligence, which is a threat not in itself but because of the actions of antisocial external and internal actors who turn it into a threat to international security. In the not-so-distant future, problems may also arise in connection with “strong” AI, whose emergence within the coming decades is forecast by a growing number of researchers.
Darya Bazarkina, Professor, Russian Presidential Academy of National Economy and Public Administration; Senior Researcher, Saint Petersburg State University (Moscow, Russia), presented her vision on the Artificial Intelligence as a Terrorist Weapon (Information and Psychological Consequences of Future Terrorist Attacks and Ways to Minimize Them). In the field of working with information, AI capabilities are very wide. The analysis of big data based on the contents of social media was already used by North African militants in the attack on the Tunisian city of Ben Gardane in March 2016. Available evidence, including effective ways of killing key members of the security service, showed that the terrorists had studied the habits and schedules of the victims in advance. This case shows that with the development of social media and their monitoring mechanisms (the processing of “big data”, which AI enhances), the possibilities of open-source intelligence are becoming more accessible to all sorts of non-state actors. It is only a matter of time before less technically advanced extremist groups adopt these mechanisms. For example, the far right in Europe exchange information about possible targets for attacks on sites such as “Redwatch”, created in Poland on the British model (the site contains photos of activists of the left movement, collected by far-right activists). Analysis of AI capabilities already suggests that machine learning will facilitate both the collection of data on potential victims and the selection of priority targets for cyber-attacks.
Fatima Roumate, Associate Professor, Mohamed V University; President, Institut International de la Recherche Scientifique (Marrakech, Morocco), analyzed the issue of Malicious Use of Artificial Intelligence as New Challenges for International Relations and International Psychological Security. Nowadays, she said, AI offers new opportunities for international and bilateral cooperation, and facilitates the inclusion of all actors within global governance. However, the malicious use of AI represents a threat to the international psychological security whether we are speaking about social, economic or military activities.
The future of international psychological security is conditioned by the state’s response to the challenges imposed by the cyber era. The growing investment in AI for commercial and military purposes will expand the challenges and threats to international psychological security. These challenges are significant because AI is growing rapidly while the development and updating of international mechanisms is very slow. This leads to another challenge: striking the right balance, first between commercial and military funding dedicated to AI, and second between investment in AI and the protection of human rights in peace and in war.
Malicious use of AI invites all actors (states, international institutions, NGOs, transnational corporations and individuals) to collaborate and mount a concerted riposte at the political, juridical and institutional levels. The goal is to ensure international psychological security. The challenges imposed by the malicious use of AI are pushing international society towards a new global order with fundamental changes of players and rules in the international game.
Frederic Labarre, analyst and education management consultant at the Royal Military College of Canada, co-chair of the Regional Stability in the South Caucasus Study Group (Partnership for Peace Consortium) (Canada), in his paper The Mechanics of Social Media and AI-aided Radicalization: Impact on Human Psychology (A digest from “Mapping Social-Media Enabled Radicalization: A Research Note” by P. Jolicoeur and F. Labarre, 2017) stressed that technology expands individuals’ horizons and seemingly provides direct and instantaneous access to the political system. The problem comes when algorithms begin “feeding” individuals with “expected” support, thereby reinforcing pre-existing biases within individuals. Social media provide a wealth of information on individual habits, allowing virtual communities to supply individuals with messages and images that are soothing and apparently give meaning and structure to what is seemingly a raw and chaotic world.
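The reinforcing dynamic Labarre describes can be illustrated with a deliberately simplified sketch (the topics, starting weights and update rule below are illustrative assumptions, not taken from the paper): an engagement-maximizing recommender serves whatever the user already prefers, and each exposure strengthens that preference until a mild initial bias hardens into near-total dominance of a single topic.

```python
# Toy model of the feedback loop: the recommender serves the topic
# the user has engaged with most, and each exposure reinforces it.

preferences = {"politics": 0.4, "sports": 0.3, "science": 0.3}

def recommend(prefs):
    """Naive engagement-maximizing recommender: always serve the
    topic with the highest preference weight so far."""
    return max(prefs, key=prefs.get)

def consume(prefs, topic, boost=0.05):
    """Engaging with a topic strengthens the preference for it;
    all weights are then renormalized to sum to 1."""
    prefs[topic] += boost
    total = sum(prefs.values())
    for t in prefs:
        prefs[t] /= total

for _ in range(50):
    consume(preferences, recommend(preferences))

# The initial mild bias (0.4 vs. 0.3) has hardened: one topic now
# dominates, and the other two have nearly vanished.
print({t: round(p, 2) for t, p in preferences.items()})
```

Because the recommender never shows a competing topic, the loop converges toward a single preference – a crude analogue of the “filter bubble” effect the paper attributes to AI-curated feeds.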
Under pressure, social media platforms like Facebook have resorted to artificial intelligence (AI) to reduce the incidence of fake, hateful, or radical content on their sites. Very soon, it will be difficult to distinguish genuine human-generated content from machine-generated content. Only the socio-political outcome is different. Without time to reflect, without reasoned contact with competing or contrary opinion, and even with assurances of perfectly clean data and statistics on a problem, individuals will always side with their preferred biases. The aim of the state is to avoid unnecessary bloodshed or upheavals. But technology provides other states and groups with the power to cause mayhem elsewhere. Technological advances in communications are not merely a double-edged sword; they are a blade with no handle, sure to slip from the bloody hand that wields it.
Aleksandr Raikov, Leading Researcher, Institute of Control Sciences, Russian Academy of Sciences (Moscow, Russia) presented the paper on the Strong Artificial Intelligence (ASI), Its Features and Ethical Principles of Safe Development.
AI is a technology that enhances a person’s creative possibilities and assists them in their work. AI makes it possible to understand and use the power of the human mind, to get closer to the mystery of the human spirit. However, features of the next generation of AI – Artificial Super-Intellect (ASI) – are beginning to appear:
“Intellect, which is much smarter than the best human mind in almost all areas, including scientific creativity, wisdom and social skills”. With the advent of ASI, its danger to society is not excluded.
Scientists, government and public figures, professionals and experts are developing ethical principles that should be followed in order for the development of AI to proceed in a safe and moral manner. For this purpose, in particular, the well-known Asilomar AI Principles have already been formulated. To support these principles, and taking into account the public and state significance of the possible risks of ASI development, which may increase in an unpredictable and abrupt manner, we consider it rational to offer the government authorities and the scientific and expert-analytical community the following principles, among others, for the development of ASI:
- ASI should be absolutely, 100% safe for humans, including environmental and moral cleanliness, regardless of where it is used: government, robotics, advertising, manufacturing, intelligent assistants, etc.
- A human should always have the right of responsible choice: to make a decision independently or to entrust it to the ASI system, and any such system should be designed taking into account that the human has the opportunity to intervene in the decision process implemented by this system.
- Absolutely all the risks of the ASI development should be controlled and preempted by appropriate organizational, scientific and engineering techniques.
- ASI systems should be under especially strict human control.
Pierre-Emmanuel Thomann, President/Founder, Eurocontinent (Brussels, Belgium) posed the question of Artificial Intelligence and Geopolitics: What Role for Europe?. The issue of AI and big data mastery is related to the question of historical memory, identity, education and, ultimately, control over populations (minds and behaviour) and states that will come under the pressure of extraterritorial influence and malicious geopolitical and transnational strategies.
The analysis of big data based on the accumulation of information on citizens from contents of social media (Twitter, Facebook, Linkedin…), facial recognition systems but also digital libraries (digitalization of books and film archives), historical and diplomatic archives, and satellite imagery and Geographical information systems (GIS) will be enhanced by the use of AI. It will add the space-time dimension in the geopolitical arena.
There is therefore a risk of colonization of the minds of citizens in those states unable to master big data and AI programmes to perform data mining and new analysis and research in an autonomous capacity. This will reinforce these citizens’ inability to think independently, and they will be more subject to manipulation. As a result, they could easily shift their political loyalty toward external geopolitical and ideological visions of the world imposed by poles of power possessing full-spectrum dominance, to which their state or nation will be subordinated.
Facing the risk of strengthening geopolitical imbalances due to unequal access to AI, it is necessary to seek through international cooperation for a more balanced distribution of AI research results with common international platforms.
Erik Vlaeminck, Researcher, University of Edinburgh; Research Associate, International Cultural Relations Ltd (London, UK), in his paper Culture in the New Technological Paradigm: From Weaponization to Valorization stressed that future advancements in the field of AI might considerably worsen existing threats, as state and non-state actors with bad intentions might turn them against society in the pursuit of political interests. In order to counter these potential threats, it will be important to conduct more research and to advocate for international cooperation.
He scrutinised the use of AI-driven technologies and their relation to identity-based conflicts. As future developments in the area might worsen these processes, it is important to take into account the following potential threats: the incitement of a global culture war, the manipulation and rewriting of cultural memory, cognitive framing through cultural products, and cultural (social) engineering on a mass scale.
At the same time, it will be important to consider how culture (and the arts) can protect us against these same threats and contribute to building a sustainable future. Culture and the concept of cultural relations could take a more central role in strategic efforts to counter propaganda and psychological operations.
Discussion of the problems of malicious use of AI continued on June 14 at the research seminar “Artificial Intelligence and Challenges to International Psychological Security”. The seminar was organized by the Centre for Euro Atlantic Studies and International Security at the Diplomatic Academy of the MFA of Russia and International Centre for Social and Political Studies and Consulting with the academic support of the European-Russian Communication Management Network and the Department of International Security and Foreign Policy of Russia, Russian Presidential Academy of National Economy and Public Administration.
The participants of the seminar adopted a final document aimed at explaining to the authorities and civil society institutions the threats associated with AI tools falling into the hands of criminal actors.
Darya Bazarkina DSc, Professor at the Chair of the International Security and Foreign Policy of Russia, RANEPA; Research Coordinator on Communication Management and Strategic Communication, International Centre for Social and Political Studies and Consulting
Mark Smirnov, Research Intern, International Centre for Social and Political Studies and Consulting