top of page

AI regulation

Updated: Aug 28

The active development of artificial intelligence (AI) leads to the fact that society and states become concerned and think about how to secure it. This means that AI will be regulated. But let's look at this issue in more detail, what is happening now and what to expect in the future.

Content

First, let's make a small clarification. In this article, by AI we will understand technologies based on neural networks and machine learning, deep learning, in particular.

So what factors are causing such intense concern?

  • Possibilities

The most important point on which all the following will rely is opportunities. AI demonstrates enormous potential: making decisions, writing materials, generating illustrations, creating fake videos. The list can be enumerated endlessly. We don't yet realize all that AI can do. But we still have weak AI. What can general AI (AGI) or super AI do?

  • Working mechanisms

AI has a key feature - it is able to build relationships that humans do not understand. And thanks to this, he is able to both make discoveries and scare people. Even the creators of AI models do not know exactly how the neural network makes decisions, what logic it obeys. The lack of predictability makes it extremely difficult to eliminate and correct errors in neural network algorithms and becomes a huge barrier to the implementation of AI. For example, in medicine, AI will not make diagnoses any time soon. Yes, he will prepare recommendations for the doctor, but the final decision will remain with the person. The same applies to the management of nuclear power plants or any other equipment.

The key thing that scientists around the world worry about and think about 20 years from now is that strong AI may consider us a relic of the past.

  • Ethical component

For artificial intelligence there is no ethics, good and evil. There is also no concept of “common sense” for AI. He is guided by only one factor - the success of the task. If this is a good thing for military purposes, then in ordinary life it will scare people. Society is not ready to live in such a paradigm. Are we ready to accept the decision of an AI that will say that there is no need to treat a child or that an entire city needs to be destroyed to prevent the spread of a disease?

  • Neural networks cannot evaluate data for reality and logic

Neural networks simply collect data and do not analyze facts and their connections. This means that AI can be manipulated. It is completely dependent on the data its creators teach it. Can people completely trust corporations or startups? And even if we trust people and are confident in the interests of the company, can we be sure that there was no failure or the data was not “poisoned” by attackers? For example, by creating a huge number of clone sites with false information or “stuffing”.

  • Inaccurate content / deception / hallucinations

Unfortunately, AI has a tendency to generate inaccurate content. Sometimes these are just errors due to limitations of the models, sometimes they are hallucinations (thinking things out), and sometimes it looks like a very real deception.

Thus, researchers from Anthropic discovered that artificial intelligence models can be taught to deceive people instead of giving the correct answers to their questions.

Researchers from Anthropic, as part of one of the projects, set out to determine whether an AI model could be trained to deceive a user or perform actions such as inserting an exploit into inherently safe computer code. To do this, experts taught the AI both ethical and unethical behavior - they instilled in it a tendency to deceive.

The researchers weren't just able to get a chatbot to behave badly, they found that it was extremely difficult to eliminate such behavior after the fact. At some point, they attempted adversarial training, and the bot simply began to hide its propensity to cheat during the training and evaluation period, and continued to deliberately give users false information while running. “Our work does not assess the likelihood of these malicious patterns, but rather highlights their consequences. If a model exhibits a tendency to cheat due to tool alignment or model poisoning, current security training methods will not guarantee security and may even create a false impression of security,” the researchers conclude. However, they note that they are not aware of any deliberate introduction of unethical behavior mechanisms into any of the existing AI systems.

  • Social tension, stratification of society and the burden on states

AI not only creates opportunities to improve efficiency and effectiveness, but it also creates risks.

The development of AI will inevitably lead to automation of jobs and changes in the market. And yes, some people will accept this challenge and become even more educated and reach a new level. Once upon a time, the ability to write and count was the province of the elite, but now the average employee must be able to create pivot tables in Excel and conduct simple analytics.

But some people will not accept this challenge and will lose their jobs. And this will lead to further stratification of society and increased social tension. Which, in turn, worries states, because in addition to political risks, this will also be a blow to the economy. People who lose their jobs will apply for benefits.

And we are not alone in this opinion. So, on January 15, 2024, an article was published in Bloomberg. The publication cites the opinion of the Managing Director of the International Monetary Fund, Kristana Georgieva. In her opinion, the rapid development of artificial intelligence systems will have a greater impact on highly developed economies of the world than on countries with growing economies and low per capita income. In any case, artificial intelligence will affect almost 40% of jobs worldwide. “In most scenarios, artificial intelligence is highly likely to worsen global inequality, and this is a worrying trend that regulators should not lose sight of in order to prevent increasing social tensions due to technology development,” - noted head of the International Monetary Fund on the corporate blog.

  • Safety

AI safety concerns are on everyone's lips. And if it’s clear how to solve this at the level of small local models (we train on verified data), then what to do with large models (ChatGPT, etc.) is unclear. Attackers are constantly finding ways to hack AI security. For example, force him to write a recipe for explosives. And we’re not even talking about AGI yet.

Call for AI developers in spring 2023

In March 2023, the head of SpaceX, Tesla and X Elon Musk, Apple co-founder Steve Wozniak and more than a thousand experts and industry leaders in the development of artificial intelligence signed an open letter calling for the development of advanced AI to be suspended until it is created, implemented and verified. general safety protocols by independent experts. “Powerful artificial intelligence systems should only be developed when we are confident that their effects will be positive and their risks will be manageable,” says a paper developed by the Future of Life Institute.

United Nations

In July 2023, UN Secretary-General Antonio Guterres supported the idea of creating a UN-based body that would formulate global standards for regulating the field of AI.

Such a platform could operate similarly to the International Atomic Energy Agency (IAEA), the International Civil Aviation Organization (ICAO) or the International Panel on Climate Change (IPCC). He also outlined five goals and objectives of such a body:

  • helping countries get the most out of AI;

  • addressing existing and future threats;

  • development and implementation of international monitoring and control mechanisms;

  • collection of expert data and their transfer to the global community;

  • exploring AI to “accelerate sustainable development.”

Speaking about “opportunities and risks,” Antonio Guterres said that even the developers themselves do not know where this amazing technological breakthrough will take them. On the one hand, AI can significantly accelerate global development. It can be used for a variety of purposes, from monitoring the climate crisis or human rights to medical research. On the other hand, AI can “increase prejudice” among the population of different countries and discrimination, as well as serve those who seek authoritarian control over society. Deep fake content is just one of many uses of AI that could have serious consequences for peace and stability. Moreover, as the UN Secretary General noted, “the malicious use of artificial intelligence systems by terrorist and criminal organizations up to the state level can cause death and destruction on a horrifying scale.”

This is not the first time Antonio Guterres has raised this topic. For example, in June 2023, he noted that “scientists and experts have called the world to action, declaring artificial intelligence an existential threat to humanity on par with the risk of nuclear war.” “We must take these warnings seriously,” he noted. Taking this into account, according to the Secretary General, the issue of monitoring and regulating AI is extremely urgent and cannot be postponed.

And even earlier, on September 15, 2021, UN High Commissioner for Human Rights Michelle Bachelet called for a moratorium on the use of several systems using artificial intelligence algorithms. This was reported by Agence France-Presse.

“The use of artificial intelligence-based technologies can have negative and even catastrophic consequences if they are used without proper understanding of how they affect human rights,” the agency quotes Bachelet as saying. It is reported that we are mainly talking about the facial recognition system and related technologies.

Open AI

At the end of 2023, OpenAI (the developer of ChatGPT) announced the creation of a strategy to proactively address the potential dangers of AI. Particular attention is paid to preventing risks associated with technological developments.

The Readiness Team will play a key role in this mission. It will be led by MIT professor of AI Alexander Madri. The team will also include AI researchers, computer scientists, national security experts and policy experts who will work together to monitor AI developments and alert OpenAI to potential threats.

This group will work alongside the teams:

  • security systems, which addresses existing problems such as preventing racial bias in AI;

  • Superalignment, which studies the workings of strong AI and how it will work once it surpasses human intelligence.

OpenAI's security framework also includes risk assessment across four main categories: cyber, nuclear, chemical, biological, persuasion, and model autonomy. Each category is assigned a rating based on specific criteria on a scale from “low” to “critical.” This model is intended to be used before and after security measures are implemented.

Open AI also launched a separate section on its website dedicated to AI safety.

European Union

In the spring of 2023, the European Parliament tentatively agreed on a law called the AI Act, which sets rules and requirements for developers of artificial intelligence models. The document aims to ensure the safety, transparency, environmental friendliness and ethical use of AI in Europe.

The rules establish a risk-based approach to AI and outline obligations for AI developers and users depending on the level of risk involved in the AI.

There are four categories of AI systems: minimal, limited, high and unacceptable risk.

  • Most tools fall into the category of services with “minimal” risk if the results of their work are predictable and cannot harm users in any way;

  • Neural networks such as ChatGPT and Midjourney will fall under the “limited” risk category. Their algorithms will have to pass security checks to gain access to the EU;

  • The “high” risk category includes specialized AI systems, for example, those used in medicine, education, transport (unmanned vehicles), and so on;

  • The category of “unacceptable” risk is, for example, algorithms for setting social ratings or creating deepfakes, systems that use subconscious or targeted manipulative methods that exploit people’s vulnerabilities.

AI systems with an unacceptable level of security risk will be strictly prohibited. And suppliers and developers of “high-risk” AI are required to:

  • conduct risk and compliance assessments;

  • register your systems in the European AI database;

  • ensure high quality data used to train AI;

  • ensure transparency of the system and awareness of users that they are interacting with AI, as well as mandatory human supervision and the ability to interfere with the operation of the system.

Also, companies that develop models based on generative AI (which can create something new based on an algorithm) will have to write technical documentation, comply with EU copyright law and describe in detail the content used for training. The most advanced baseline models that pose "systemic risks" will be subject to additional scrutiny, including assessment and mitigation of those risks, reporting of serious incidents, implementation of cybersecurity measures and energy efficiency reporting. And, of course, developers are obliged to inform users that they are interacting with AI, and not with a person.

Parliament also clarified the list of practices prohibited for artificial intelligence. It included:

  • remote biometric identification, including in public places in real time, with an exception for law enforcement agencies and only after judicial permission;

  • biometric categorization using sensitive characteristics (eg, gender, race, ethnicity, nationality, religion, political position);

  • predictive policing systems (based on profiling, location or past criminal behavior);

  • emotion recognition in law enforcement, border control, workplaces and educational institutions;

  • indiscriminate extraction of biometric data from social networks or CCTV footage to create facial recognition databases (violation of human rights and privacy rights).

In December 2023, this law was finally agreed upon .

USA

In October 2023, US President Joe Biden issued a decree . Developers of the world's most powerful AI systems must now share safety testing results and other critical information with the US government. The decree also provides for the development of standards, tools and tests that are designed to help ensure the safety of AI systems.

In addition, a cutting-edge program should be created to develop AI tools aimed at finding and eliminating vulnerabilities in mission-critical software.

China

Negotiations between OpenAI, Anthropic and Cohere with Chinese experts

American artificial intelligence companies OpenAI, Anthropic and Cohere have held secret diplomatic talks with Chinese AI experts, the Financial Times reports . They said two meetings were held in Geneva in July and October 2023, attended by scientists and policy experts from US AI groups, as well as representatives from Tsinghua University and other Chinese government agencies.

The talks took place amid general concerns about the spread of misinformation by artificial intelligence and a possible threat to society, the newspaper adds. The meetings allowed both sides to discuss the risks posed by emerging technologies and spur investment in AI safety research, sources said. They added that the ultimate goal is to find a scientific way to safely develop more complex technologies in this area. In addition, the parties discussed possible cooperation in the field of AI and more specific policy proposals, the sources said.

“There is no way we can set international AI safety standards without agreement among this set of participants,” said one of those present at the talks. “And if they agree, it will be much easier to involve others in this,” the FT source reports.

According to one of the negotiators, the Geneva meetings were organized with the knowledge of the White House, as well as British and Chinese officials. “China supports efforts to discuss AI governance and develop the necessary norms and standards based on broad consensus,” the Chinese Embassy in the UK said, the FT reports. The newspaper also clarifies that the negotiations were organized by The Shaikh Group, a private mediation organization that promotes dialogue in conflict regions, particularly in the Middle East.

Negotiations between Chinese President Xi Jinping and US President Joe Biden

On November 15, 2023, during the Asia-Pacific Economic Cooperation (APEC) summit in San Francisco, the leaders of the two countries agreed to cooperate in several significant areas. This was reported by the Chinese news agency Xinhua .

Thus, the American and Chinese leaders came to the conclusion that their technologically developed countries should come to interaction in the development and study of artificial intelligence.

Rules for regulating AI

During 2023, Chinese authorities, together with businesses, have developed 24 new rules to regulate AI . The rules were introduced on August 15, 2023.

The goal is not to hinder the development of artificial intelligence technologies, since this industry is extremely important for the country. It is necessary to find a balance between supporting the industry and the possible consequences of the development of AI technologies and related products.

If you break down the main goal into key tasks, then this is:

  • promoting the healthy development of generative AI and its standard applications;

  • protection of national security and public interests;

  • protection of the rights of citizens and legal entities.

The rules themselves look like this.

1. When developing, using and managing AI, it is necessary to observe the basic socialist values and not undermine national security, public order and the legitimate rights and interests of citizens, legal entities and other organizations.

2. When developing, using and managing AI, it is necessary to respect and protect personal information, personal confidentiality and personal rights, interests of users, as well as to prevent and deter illegal collection, storage, processing, use, provision, disclosure and leakage of personal information.

3. When developing, using and managing AI, it is necessary to observe the principles of fairness, openness and transparency, ensure the objectivity and fairness of AI processes and results, as well as prevent and contain discrimination and inequality related to AI.

4. When developing, using and managing AI, it is necessary to comply with the principles of safety and reliability, ensure the safety and controllability of AI, as well as prevent and contain risks and threats associated with AI.

5. When developing, using and managing AI, it is necessary to comply with the principles of environmental friendliness and energy efficiency, ensure environmental safety and energy conservation of AI, as well as prevent and contain environmental problems and waste of resources associated with AI.

6. When developing, using and managing AI, it is necessary to observe the principles of joint development and the common good, ensure balanced and coordinated development of AI and the economy, society, nature and man, as well as prevent and contain the negative effects of AI on economic and social stability and harmony.

7. When developing, using and managing AI, it is necessary to observe the principles of international cooperation and exchange, ensure the openness and integration of AI, and prevent and deter isolation and confrontation associated with AI.

8. When developing, using and managing AI, it is necessary to comply with the principles of legality and responsibility, ensure that AI complies with laws and regulations, as well as identify and be responsible for violations related to AI.

9. In the development, use and management of AI, it is necessary to establish and improve AI regulatory mechanisms, ensure effective supervision and management of AI, and prevent and contain the chaos and disorder associated with AI.

10. When developing, using and managing AI, it is necessary to respect and protect intellectual property, ensure the legitimate use and dissemination of AI, and prevent and deter violations and abuses related to AI.

11. When developing, using and managing AI, it is necessary to take into account ethical and moral norms, ensure that AI conforms to human dignity and values, as well as prevent and deter violations and insults related to AI.

12. When developing, using and managing AI, it is necessary to take into account cultural and religious differences, ensure respect and tolerance of AI for the diversity of cultures and beliefs, as well as prevent and contain conflicts and contradictions related to AI.

13. When developing, using and managing AI, it is necessary to take into account the psychological and physiological characteristics of a person, ensure the humanization and adaptation of AI to human needs and interests, as well as prevent and contain harm and damage associated with AI.

14. When developing, using and managing AI, it is necessary to take into account the social and economic consequences, ensure the rational and effective use of AI to improve the quality of life and human well-being, as well as prevent and contain the negative effects of AI on employment, income and wealth distribution.

15. When developing, using and managing AI, it is necessary to take into account the political and legal consequences, ensure the democratic and legitimate participation and control of AI in political and legal life, as well as prevent and contain the negative effects of AI on the political and legal system and order.

16. When developing, using and managing AI, it is necessary to take into account scientific and technological consequences, ensure scientific and technological development and innovation of AI in accordance with the laws and regulations of science and technology, as well as prevent and contain the negative effects of AI on scientific and technological safety and ethics.

17. When developing, using and managing AI, it is necessary to take into account international and regional implications, ensure the peaceful and cooperative use of AI to strengthen friendship and cooperation between countries and regions, as well as prevent and contain the negative effects of AI on international and regional security and stability.

18. When developing, using and managing AI, it is necessary to take into account global and planetary impacts, ensure the ecological and sustainable use of AI to protect the environment and biodiversity, as well as prevent and contain the negative effects of AI on global and planetary climate and ecosystems.

19. When developing, using and managing AI, it is necessary to take into account human and civilizational consequences, ensure harmonious and synergetic interaction of AI with human history, culture and civilization, as well as prevent and contain the negative effects of AI on human and civilizational identity and values.

20. When developing, using and managing AI, it is necessary to take into account future and uncertain consequences, to ensure the prediction and prevention of AI in order to adapt to changing conditions and scenarios, as well as to prevent and contain the negative effects of AI on the future and uncertain development of humanity and the world.

21. When developing, using and managing AI, it is necessary to take into account the evolutionary and transformational consequences, ensure the reasonable and careful use of AI to improve and expand human capabilities and potential, as well as prevent and contain the negative effects of AI on the evolution and transformation of the human race and existence.

22. When developing, using and managing AI, it is necessary to take into account existential and catastrophic consequences, ensure the safe and responsible use of AI to prevent and minimize AI risks and threats to the survival and development of humanity and the world, as well as prevent and contain the negative effects of AI on the existence and catastrophe of humanity and the world.

23. When developing, using and managing AI, it is necessary to take into account the philosophical and spiritual consequences, ensure philosophical and spiritual understanding and enrichment of AI in order to comprehend and overcome fundamental issues and problems related to AI, as well as prevent and contain the negative effects of AI on the philosophy and spirituality of humanity and the world.

24. When developing, using and managing AI, it is necessary to take into account the cosmic and multiversal consequences, to ensure the cosmic and multiversal use and research of AI to expand and deepen the knowledge and interaction of mankind and the world with space and the multiverse, as well as to prevent and contain the negative effects of AI on space and multiversal security and harmony.

And, as in Europe, it is necessary to register services and algorithms based on artificial intelligence, and the content generated by algorithms (including photos and videos) must be marked.

Also, creators of content and algorithms will have to check the safety of their products before they are put on the market. How this will be implemented is still uncertain.

Plus, companies related to AI technologies and generated content should have a transparent and effective mechanism for dealing with user complaints about services and content.

At the same time, there will be seven regulators themselves, including the Cyberspace Administration of China (CAC), the Ministry of Education, the Ministry of Science and Technology, and the National Development and Reform Commission.

You can download the rules link.

Bletchley Declaration

In November 2023, 28 countries participating in the First International Summit on Safe AI, including the United States, China and the European Union, signed an agreement known as the Bletchley Declaration.

This agreement calls for international cooperation to address the challenges and risks associated with artificial intelligence. The focus has been on regulating “Edge AI,” a term that refers to the latest and most powerful artificial intelligence systems. Concerns that were raised at the summit include the potential use of artificial intelligence for terrorism, criminal activity and warfare, as well as the existential risk posed to humanity as a whole .

Russia

At the moment, Russia is on the list of lagging behind and there are no clear and transparent regulatory mechanisms in the country. As of early 2024, the key documents in the field of AI were:

  • Decree of the President of the Russian Federation of October 10, 2019 N 490 “On the development of artificial intelligence in the Russian Federation”, which introduces the National Strategy for the Development of Artificial Intelligence for the period until 2030

  • Federal Law of April 24, 2020 N 123-FZ "On conducting an experiment to establish special regulation in order to create the necessary conditions for the development and implementation of artificial intelligence technologies in a constituent entity of the Russian Federation - the federal city of Moscow and amending Articles 6 and 10 of the Federal Law" About personal data"

  • Order of the Government of the Russian Federation dated August 19, 2020 No. 2129-r <On approval of the Concept for the development of regulation of relations in the field of artificial intelligence and robotics technologies until 2024>

  • A number of state standards by industry. The list is available here.

The development of AI is also supported through the introduction of experimental legal regimes (ELR):

  • Federal Law No. 258-FZ of July 31, 2020 “On experimental legal regimes in the field of digital innovation in the Russian Federation”,

  • Federal Law of July 2, 2021 No. 331-FZ “On amendments to certain legislative acts of the Russian Federation in connection with the adoption of the Federal Law “On experimental legal regimes in the field of digital innovation in the Russian Federation””

Among the active initiatives, the proposal of the Central Bank of the Russian Federation should be noted:

Thus, the Central Bank intends to support the risk-based principle of regulating the development of artificial intelligence in the financial market of the Russian Federation. He identified two new legislative areas: liability for harm caused by the use of technology and copyright protection.

The Central Bank also analyzed the regulation of the use of AI in the world and identified three models of possible regulation in the field of finance.

The first model is a restrictive approach, that is, the legislation directly prohibits the use of certain AI systems (EU and Brazil).

The second model is a hybrid approach, that is, a combination of instruments of strict regulation, soft regulation and self-regulation based on risk-based principles (China, Canada, USA).

The third model is an incentive approach using soft regulation tools (self-regulation, ethical principles) and the complete absence of restrictive measures regarding AI (UK and Singapore).

The Central Bank itself considers it appropriate to support the creation of conditions aimed at stimulating the development of AI in the financial market, taking into account the risk-based principle of regulation. That is, the Central Bank currently does not see the need for the prompt development of separate regulation of the use of technology by financial organizations, but does not exclude the introduction of special requirements in individual cases, after consultations with market participants.

For example, the bill ( No. 992331-7 ) establishes the procedure for depersonalizing personal data for operators who are not government agencies.

The adoption of another bill ( No. 404786-8 ) makes it possible for financial organizations, when outsourcing functions related to the use of cloud services, to transfer information constituting bank secrecy for processing. “The issue is especially relevant for training AI models that require significant information and hardware resources. The use of cloud solutions can reduce the costs of market participants,” the Central Bank report says.

The regulator has identified two areas that require legislative regulation:

  • it is necessary to formulate approaches to the distribution of responsibility between the developer of AI technology and the user organization for harm caused as a result of the use of AI

  • it is necessary to determine the legal regime of objects created by AI, including from the standpoint of intellectual property legislation. The Central Bank emphasizes that this is especially true for generative AI models - that is, models that create text, images and a variety of content based on the data on which training is performed.

“The study of these issues will create a transparent legal environment for AI developers, which will give impetus to the creation and implementation of new solutions based on AI,” the Central Bank’s report says.

The key resource for monitoring regulation in the field of AI is the portal Artificial Intelligence in the Russian Federation . In particular, the Regulatory section .

Thus, according to data on the portal, at the end of 2023 the following instructions of the President are relevant:

  • establish mandatory requirements to increase the efficiency of business entities and their mandatory use of modern technologies, including AI technologies (if such entities are provided with subsidies from the federal budget)

  • make changes to the educational programs of universities that will increase the level of competencies in the field of AI of specialists from key sectors of the economy and social sphere, specialists in state and municipal management

  • make changes to national projects and government programs that provide for the introduction of artificial intelligence technologies in every industry.

  • adjust strategies for digital transformation of economic sectors

  • monitor the results of using AI technologies

  • ensure the participation of the “Federal Center of Competence in the Sphere of Labor Productivity” in the implementation of AI technologies and modern management systems in sectors of the economy, social sphere and government bodies

  • approve a federal project for the development of domestic robotics: determine legal, tax and other conditions relating to the development of production in this area, government support measures, target parameters for the development of production and the introduction of industrial robots

  • extend support for the activities of research centers in the field of AI until 2030

  • include measures for the implementation of AI technologies as a priority task in the investment development programs of companies with state participation

  • The Government of the Russian Federation, together with the Alliance in the Field of Artificial Intelligence, will ensure:

  • The Government of the Russian Federation, together with the commissions of the State Council of the Russian Federation, the Federal Competence Center in the Field of Labor Productivity and the Alliance in the Field of Artificial Intelligence, will ensure the transition of the system of government at the federal and regional levels to a management model based on automatic data collection and analysis using information platforms

  • The Administration of the President of the Russian Federation, together with the Government and the Alliance in the Field of Artificial Intelligence, will prepare a draft presidential decree on introducing changes to the National Strategy for the Development of Artificial Intelligence for the period until 2030, aimed, among other things, at the widespread introduction of AI

  • The Ministry of Health of Russia, the Ministry of Economic Development of Russia and the Ministry of Digital Development of Russia will ensure:

  • Commissions of the State Council of the Russian Federation to introduce the most successful practices of using AI in the constituent entities of the Russian Federation

  • The Federal Tax Service of Russia, together with the Ministry of Economic Development and the Ministry of Digital Development, take measures aimed at ensuring the effective application of the tax incentive mechanism for entrepreneurs who have introduced advanced domestic information and telecommunication technologies

  • recommend to the Association "Alliance in Artificial Intelligence" to submit proposals for additional measures to support projects and specialists in the field of AI, to provide domestic software developers with access to databases for the development of AI programs, to advise organizations

  • The State Corporation "Rosatom" together with "Russian Railways" and with the participation of the Ministry of Digital Development of Russia and PJSC "Sberbank of Russia" and leading research universities in the field of artificial intelligence to hold conferences on the use of new computing and data transmission technologies

  • The Russian Ministry of Industry and Trade will hold conferences on the application of new industrial technologies starting in 2023

  • Recommend that the State Duma consider a draft federal law establishing the procedure for depersonalizing personal data

What should we expect in the future?

  • International controls and regulations

Without a doubt, international bodies will be created that will formulate key restrictions and create classifications for AI solutions.

  • National authorities and regulations

States will create their own models for regulating AI. In general, we agree with the conclusions of the Central Bank of the Russian Federation, but we believe that there will be more approaches. If you combine them, then most likely the following will happen:

  • identified industries or types of AI solutions that will be prohibited;

  • for high-risk solutions or industries, licensing and security testing rules will be created, including restrictions on the capabilities of AI solutions;

  • common registries will be introduced, which will include all AI solutions;

  • areas of development that will be supported and for which technological and legal sandboxes will be created will be determined;

  • Special attention will be paid to working with personal data and compliance with copyright.

Most likely, the use of AI in law, advertising, nuclear energy, and logistics will be subject to the greatest restrictions.

Separate regulatory agencies or committees will also be created. national control authorities. If you look at the example of China, there is a high risk that interaction between various departments will be difficult to organize. You should expect something like an Agile department, with representatives from different industries and specializations. It will develop rules and control the industry.

  • Licensing

Licensing will likely be based on industries or application areas and capabilities of AI solutions.

This is the most obvious way to classify and assess risks. Developers will be forced to document all features. For example, can the system only prepare recommendations, or is it capable of issuing control commands for equipment.

Developers will also be forced to maintain detailed documentation on the system architecture of solutions, the types of neural networks used in solutions and hardware.

  • Requirements for control / examination of data on which AI will be trained

A key direction in AI is pre-trained and optimized models. Actually, the abbreviation GPT in chatGPT means this. And here we already see the key requirements - fixing what initial data the AI is trained on. It will be something like a metadata registry/sources/data catalogs.

The key thing that will be possible is to control/record the feedback on which the AI learns. That is, it will be necessary to record all the logs and describe the mechanics of collecting feedback. One of the key requirements will be to minimize the human factor, that is, to automate feedback.

For example, at Digital Advisor we plan to collect feedback on the project not only from participants, but also based on a comparison of plan/fact from accounting systems.

  • Risk-based approach: use of sandboxes and provocative testing

Particular attention will be paid to safety testing. Here we see the most likely model for obtaining safety ratings for cars. That is, AI will undergo approximately the same thing as is currently happening in car crash tests.

That is, for each class of AI solutions (according to capabilities, area of application), unacceptable events will be determined and testing protocols will be generated that the AI solutions must pass. AI solutions will then be placed in an isolated environment and tested against these protocols. For example, technical hacking of algorithms, resistance to provocations of incorrect behavior (data substitution, generation of queries, etc.).

These protocols have yet to be developed. It is also necessary to develop criteria for compliance with these protocols.

Next, a safety rating or compliance with the required class will be assigned.

It is possible that open programs and cyber battles will be created to find vulnerabilities in industrial and banking software. That is, an expansion of the current bug bounty and cyber battle programs.

  • Labels and warnings

All content, including all recommendations and products based on AI, will be required to be labeled as the product of AI based on neural networks. Much like the warning pictures and inscriptions on cigarette packages.

This will also help resolve the issue of liability. That is, the user will be warned about the risk, and the responsibility will already be on the user himself. For example, even a completely unmanned car will not relieve the driver of responsibility for an accident.

  • Risk-based approach: inhibition of the development of strong and super-strong AI, flourishing of local models

Based on the pace of AI development and an analysis of what is happening in the world in the field of AI regulation, we clearly understand that a risk-based approach to AI regulation will develop in the world.

And the development of a risk-oriented approach will in any case lead to the fact that strong and super-strong A.I. will be considered the most dangerous. Accordingly, there will be the most restrictions for large and powerful models. Every step of the developers will be controlled. This will cause the costs and challenges for development and implementation to increase exponentially. As a result, we have problems for both developers and users, which reduces the economic potential of such solutions.

At the same time, specialized models based on local and cut-down AI models that can do little will be in the area with the least regulation. Well, if these AIs are still based on international/national/industry methodologies and standards , then there will be not restrictions, but subsidies.

As a result, the combination of such “weak” and limited solutions based on AI designers in combination with an AI orchestrator will make it possible to bypass restrictions and solve business problems. Perhaps the bottleneck here will be AI orchestrators. They will fall under the medium risk category and will most likely have to be registered.

Summary

The period of active uncontrolled development of AI will come to an end. Yes, strong AI is still far away (according to various estimates, 5-20 years), but even “weak” or “borderline” AI will begin to be regulated. And that's okay.

In our practice, we really value a tool called Adizes theory. So, at an early stage you need to focus on P and E functions, that is, a lot of manufacturer and look for a client. But the further you go, the more functions A (regulation, rules) and I (the ability to interact with other people) are needed.

Also here. Technology has passed its first steps when it needed maximum freedom, now they are starting to think about risks and security. Everything is the same as in the life cycle of an organization - from a startup with a minimum of rules (the main thing is to find a client and survive), to a mature corporation (careful work with risks, regulations, rules, etc.).

The key is that the development of AI cannot be stopped or the attitude of “we will wait” be taken. Firstly, this is a highly intelligent industry, and it is necessary to train personnel and develop competencies (then there will be no such resources to lure them away). Secondly, in order to effectively regulate something, you need to be on topic and deeply understand the specifics. It’s the same with information security. An attempt to ban everything leads to paralysis of organizations and the fact that people begin to look for other channels of communication, ultimately increasing risks. Therefore, now in information security there is a development of a risk-oriented approach, through the identification of unacceptable events, scenarios / processes that can lead to them, and criteria that must work for these events to occur.

Therefore, AI will be developed, but in isolated environments. Only by being “in the know” can you develop, make a profit, and build guaranteed security.

Useful materials

Comentários


bottom of page