
Generative artificial intelligence: sunset, or a new winter?

The start of 2023 brought a real boom in generative artificial intelligence (GenAI). But is everything as rosy as it seems? Was this a one-off surge, with another AI winter ahead of us? Or are we on the verge of an AI transformation? Let's figure it out.

Contents

What is generative artificial intelligence?

In the article Artificial Intelligence: Assistant or Toy? we reviewed the key areas of AI application:

  • forecasting and decision making;

  • analysis of complex data without clear relationships, including for forecasting;

  • process optimization;

  • pattern recognition, including images and voice recordings;

  • content generation.

The areas of AI now at the peak of popularity are recognition (of images, audio, video, and numbers) and content generation based on it: audio, text, code, video, images, and so on. Corporate digital advisors can also be classified as generative AI.

This is where most AI developers are heading, and the most heavily promoted projects are generative AI:

  • Midjourney;

  • ChatGPT;

  • Bard;

  • Kandinsky;

  • GigaChat.

A good selection of solutions is also available here.

The challenges of generative AI

As of the end of 2023, generative AI can hardly be called a commercial success. In 2022, OpenAI lost $540 million developing ChatGPT, and by the estimate of OpenAI's own head, further development and the creation of strong AI will require about another $100 billion.

The analyst firm CCS Insight also gives an unfavorable forecast for 2024. Returning to OpenAI: its operating costs run about $700,000 per day just to keep the ChatGPT chatbot running.

Alexey Vodyasov, technical director of SEQ, offers an interesting take: "AI is not achieving the marketing results that were discussed earlier. Its use is limited by the training model, and the cost and volume of training data keep growing. In general, hype and boom are inevitably followed by a decline in interest. AI will leave the spotlight as quickly as it entered it, and this is simply the normal course of the process. Not everyone may survive the downturn, but AI is truly a 'rich man's toy' and will remain one for the near future." We agree with Alexey: after the hype of early 2023, a lull had set in by the fall.

A Wall Street Journal investigation completes the picture. According to it, most IT giants have not yet figured out how to make money from generative neural networks. Microsoft, Google, Adobe, and other technology companies that are actively investing in artificial intelligence are still searching for ways to monetize their products. A few examples:

  • Google plans to increase subscription prices for AI-enabled software;

  • Adobe sets limits on the number of times you can access AI services in a month;

  • Microsoft wants to charge business customers an extra $30 a month for the ability to create presentations using a neural network.

Another sign of trouble: Zoom is trying to cut costs by using a simpler chatbot, developed in-house, that requires less computing power than the latest version of ChatGPT.

Computing power is indeed one of the main costs of working with generative AI. The more chat requests users send, the higher the infrastructure bills. The only winners in this situation are suppliers of hardware and electricity: for example, in August 2023 the American corporation Nvidia reportedly booked around $5 billion in orders for its A100 and H100 high-performance computing accelerators from the Chinese IT sector.
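To make that scaling tangible, here is a back-of-the-envelope model of inference costs. Every number in it is an illustrative assumption, not a figure from any vendor; the point is only that spend grows linearly with request volume.

```python
# Back-of-the-envelope inference cost model.
# All numbers below are illustrative assumptions, not vendor figures.

def daily_inference_cost(requests_per_day: int,
                         tokens_per_request: int,
                         cost_per_1k_tokens: float) -> float:
    """Rough daily spend: total tokens served times the per-token price."""
    total_tokens = requests_per_day * tokens_per_request
    return total_tokens / 1000 * cost_per_1k_tokens

# Assumed workload: 10M requests/day, ~1,000 tokens each, $0.002 per 1K tokens.
cost = daily_inference_cost(10_000_000, 1_000, 0.002)
print(f"${cost:,.0f} per day")  # grows linearly with request volume
```

Double the user base and the bill doubles with it, which is exactly why per-request infrastructure costs dominate the economics of chatbots.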

But let's take it in order: why did this happen, what limitations hold AI back, and, most importantly, what comes next? A decline of generative AI into another AI winter, or a transformation?

AI Limitations That Cause Problems

So, what are the key problems with current generative AI solutions?

  • Companies worry about their data

Any business strives to protect its corporate data and tries by any means to prevent it from leaking. This leads to two problems.

First, companies prohibit the use of online tools that sit outside the perimeter of their secure network. Any request to an online bot is a call to the outside world, and there are many open questions about how that data is stored, how it is protected, and how it is used.

Second, this limits the development of AI overall. Companies want vendors of AI-enabled IT solutions to provide recommendations from trained models, for example, models that predict equipment failure. But they are not ready to share their data. A vicious circle.

However, a caveat is needed here. Some teams have already learned to deploy language models of roughly the GPT-3 to GPT-3.5 level inside the corporate perimeter. But these models still need to be trained; they are not turnkey solutions. And internal security teams will find risks and object.

  • Complexity and high cost of development and subsequent maintenance

The development of any general-purpose generative AI is enormously expensive: tens of millions of dollars. On top of that, you need data, lots and lots of data. Neural networks are still sample-inefficient: where a person needs 10 examples, an artificial neural network needs thousands, or even hundreds of thousands. Granted, it can find relationships and process volumes of data that no person could ever dream of.

But back to the topic. It is precisely because of data limitations that ChatGPT understands you better in English than in Russian: the English-speaking segment of the Internet is far larger than ours.

Add to this the costs of electricity, engineers, maintenance, repair, and hardware upgrades, and we arrive at that same $700,000 per day just to keep ChatGPT running. How many companies can spend such sums with unclear monetization prospects (more on that below)?

Yes, you can cut costs by developing a model and then stripping out everything unnecessary, but then you end up with a very narrowly specialized AI.

Therefore, most solutions on the market are in fact GPT wrappers: thin add-ons on top of ChatGPT.
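For illustration, here is a minimal sketch of such a wrapper, assuming the public OpenAI chat-completions HTTP endpoint. The model name, system prompt, and environment variable are illustrative choices; the whole "product" amounts to a canned prompt wrapped around the user's text and forwarded to someone else's model.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # public OpenAI endpoint

def build_payload(user_text: str) -> dict:
    """The wrapper's entire 'product': a canned system prompt around user input."""
    return {
        "model": "gpt-3.5-turbo",  # illustrative model choice
        "messages": [
            {"role": "system", "content": "You are a marketing copywriter."},
            {"role": "user", "content": user_text},
        ],
    }

def ask(user_text: str) -> str:
    """Forward the wrapped prompt to the upstream API and return the reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(user_text)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Every request this wrapper serves is billed by the upstream provider, which is why such products inherit all the cost and data-privacy problems described above.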

  • Public Concerns and Regulatory Limitations

Society is deeply concerned about the development of AI solutions. Government agencies around the world do not know what to expect from them, how they will affect the economy and society, or how far the technology's impact will reach. Yet its importance cannot be denied. Generative AI made more noise in 2023 than ever before. It has proven it can create new content that is easily confused with human work: texts, images, scientific papers. It has even reached the point where AI can produce conceptual designs for microchips and walking robots in a matter of seconds.

The second factor is safety. AI is actively used by attackers against companies and people. Since the launch of ChatGPT, the number of phishing attacks has grown by 1,265%. Or, for example, AI can be coaxed into giving out a recipe for explosives: people invent ingenious schemes to bypass the built-in safety systems.

The third factor is opacity. Sometimes even the creators themselves do not fully understand how their AI works. For a technology of this scale, not understanding what the AI can generate, and why, is a dangerous situation.

The fourth factor is dependence on trainers. AI models are built and trained by people. Yes, self-learning models exist, but highly specialized models will develop as well, and people will select their training material.

All this means the industry will start to be regulated and restricted. How, nobody yet understands. Add to this the famous open letter of March 2023, in which well-known experts from around the world demanded limits on AI development.

  • Flaws in the chatbot interaction model

We suspect you have already tried interacting with chatbots and were, to put it mildly, disappointed. Yes, a cool toy, but what do you do with it?

You must understand that a chatbot is not an expert, but a system that tries to guess what you want to see or hear. And that’s exactly what it gives you.

To get practical benefit, you must be an expert in the subject area yourself. But if you are already an expert in your topic, do you need GenAI? And if you are not an expert, you will not get a real solution to your question, only general answers of no value.

As a result, we get a vicious circle: experts do not need it, and it will not help amateurs. Who, then, will pay for such an assistant? What we end up with is an expensive toy.

Besides expertise in the topic, you also need to know how to formulate a request correctly, and there are only a few people who can. As a result, a whole new profession has even emerged: the prompt engineer, a person who understands how the machine "thinks" and can phrase a request to it correctly. Such an engineer costs about 6,000 rubles per hour on the market. And believe me, they will not come up with the right prompt for your situation on the first try.

Does business need such a tool? Will a business want to depend on very rare specialists, who also cost even more than programmers, while ordinary employees get no benefit from it?

So the market for a general-purpose chatbot turns out to be not just narrow but vanishingly small.

  • Tendency to produce low-quality content and hallucinations

In the article Artificial Intelligence: Assistant or Toy? we noted that neural networks simply collect data; they do not analyze facts and the connections between them. Whatever there is more of on the Internet or in the training database is what they lean toward. They do not evaluate what is written critically. In practice, GenAI readily generates false or incorrect content.

For example, experts from New York University's Tandon School of Engineering tested Microsoft's Copilot AI assistant from a security perspective. They found that in about 40% of cases the code generated by the assistant contained errors or vulnerabilities. A detailed article is available here.
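A classic example of the kind of flaw such audits flag is SQL built by string formatting. The sketch below is our own illustration, not code from the study: it contrasts an injectable query, typical of naively generated code, with its parameterized fix.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable pattern often seen in generated code: SQL assembled from user input.
def find_user_unsafe(name: str):
    query = f"SELECT role FROM users WHERE name = '{name}'"  # injectable
    return conn.execute(query).fetchall()

# Safe version: a parameterized query, so the driver handles escaping.
def find_user_safe(name: str):
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchall()

# A crafted input leaks every row through the unsafe version:
print(find_user_unsafe("' OR '1'='1"))  # returns all users
print(find_user_safe("' OR '1'='1"))    # returns nothing
```

Both functions look equally plausible at a glance, which is exactly why generated code needs review by someone who knows what to look for.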

Another ChatGPT example was shared by a user on Habr: instead of 10 minutes on a simple task, it turned into a two-hour quest.

And AI hallucinations have long been a known quirk. What they are and how they arise can be read here.

We tested GenAI ourselves several times, and it often gave, let's say, not entirely correct results, and sometimes outright wrong ones. It took 10-20 queries with absolutely insane levels of detail to get something sane, which then still had to be reworked and tweaked.

In other words, its output has to be double-checked. And again we come back to the point that you need to be an expert in the topic to judge whether the content is correct and usable. Sometimes that takes even longer than doing the work yourself.

  • Emotions, ethics and responsibility

Without the right prompt, GenAI tends to simply reproduce information or create content without regard to emotion, context, or tone of communication. And from our series of articles on communication, we already know how easily communication can break down. So on top of all the problems above, we can also end up with a huge number of conflicts.

Questions also arise about attributing authorship of created content, and about who owns it. Who is responsible for inaccurate or harmful actions carried out using GenAI? How do you prove that the authorship belongs to you or your organization? Ethical standards and legislation governing the use of GenAI still need to be developed.

  • Economic feasibility

As we have already seen, developing high-end generative AI in-house can be an overwhelming task. Many will have the idea: "Why not buy a boxed solution and host it ourselves?" But what do you think such a solution will cost? How much will the developers ask for it?

And most importantly, what size should the business be for it all to pay off?

What to do?

Well, let's now think a little: what should we do?

  • Do not hurry

There is no point in waiting for AI to fade away. Too much has been invested in this technology over the past 10 years, and its potential is too great.

We recommend recalling the 8th principle of the Toyota Way, the basis of lean manufacturing and one of the tools of our systematic approach: "Use only reliable, proven technology."

  • Technology is designed to help people, not replace them. It is often worth doing the process manually first before introducing additional hardware.

  • New technologies are often unreliable and difficult to standardize, jeopardizing flow. Instead of using untested technology, it is better to use a known, proven process.

  • Before introducing new technology and equipment, testing should be carried out under real-life conditions.

  • Reject or change technology that conflicts with your culture and may undermine stability, reliability, or predictability.

  • Still, encourage your people to keep an eye on new technologies when it comes to finding new ways. Quickly implement proven technologies that have been tested to improve flow.

Yes, in 5-10 years generative models will become widespread and accessible, smart enough and cheaper, eventually reaching the productivity plateau of the hype cycle. Most likely, each of us will use GenAI output: writing articles, preparing presentations, and so on, ad infinitum. But relying on AI now and laying people off would be clearly premature.

  • Increase efficiency and safety

Almost all developers are now focused on making AI models less demanding in the quantity and quality of input data, and on raising the level of safety: AI must generate safe content and become resistant to provocations.

  • Master AI in the format of experiments and pilot projects

To be ready when truly useful solutions arrive, you need to follow the technology's development, try it out, and build competencies. As with digitalization, instead of diving headlong into expensive solutions, start by playing with budget or free ones. Then, by the time the technology reaches the masses:

  • you and your company will understand what requirements to set for commercial, expensive solutions, and you will approach the question consciously. A good specification is 50% of success;

  • you will already be able to get effects in the short term, which means you will have the motivation to go further;

  • the team will grow its digital competencies, removing technical barriers and resistance;

  • wrong expectations will be eliminated, which means fewer wasted costs, disappointments, and conflicts.

  • Transform user communication with AI

We build a similar concept into our digital advisor. The user needs ready-made forms where they can simply enter the required values or tick boxes, and the form, with the right prompt binding, is passed to the AI. Alternatively, solutions can be deeply integrated into existing IT products: office applications, browsers, phone answering machines, and so on.

But this requires deep study and understanding of user behavior and requests, or their standardization. In other words, either it is no longer a cheap solution and still requires development costs, or we lose flexibility.
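The form-to-prompt idea above can be sketched roughly like this. The field names and template wording are our own illustrative assumptions; the point is that the user never writes a prompt, only fills in vetted fields.

```python
# Sketch of form-to-prompt binding: the user fills fixed fields and a
# vetted template assembles the prompt. Field names and template wording
# are illustrative assumptions.

TEMPLATE = (
    "Write a {tone} announcement for {audience} about: {subject}. "
    "Keep it under {max_words} words."
)

REQUIRED_FIELDS = {"tone", "audience", "subject", "max_words"}

def form_to_prompt(form: dict) -> str:
    """Validate the form and bind its values into the canned prompt."""
    missing = REQUIRED_FIELDS - form.keys()
    if missing:
        raise ValueError(f"form is missing fields: {sorted(missing)}")
    return TEMPLATE.format(**form)

prompt = form_to_prompt({
    "tone": "friendly",
    "audience": "the sales team",
    "subject": "the new CRM rollout",
    "max_words": 150,
})
```

The trade-off described above is visible right in the code: the template guarantees a well-formed request, but the user can only ask for what the fields allow.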

  • Develop highly specialized models

As with people, teaching AI everything at once is very labor-intensive and inefficient. Even if very efficient algorithms are created (a technological fix for the efficiency problem), the second major direction of development is moving toward specialized solutions.

If you build highly specialized solutions on top of the engines of large models, training can be minimized, the model itself need not be too large, and the content will be less abstract, more understandable, and with fewer hallucinations.

A clear demonstration is people themselves. Who achieves great success and can solve complex problems: those who know a little of everything, or those who focus on one direction, develop in depth, study various cases, talk with other experts, and spend thousands of hours analyzing their field?
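One cheap way to imitate such narrowing, without training anything, is to gate an assistant behind a small curated knowledge base. The toy sketch below is our own illustration: the knowledge base, the word-overlap scoring, and the refusal message are all assumptions, but they show the principle of answering only within a domain.

```python
# Toy domain-narrowed assistant: answer only questions that match a small,
# curated knowledge base, and refuse everything else. The entries and the
# word-overlap scoring rule are illustrative assumptions.

KNOWLEDGE_BASE = {
    "pump cavitation": "Check inlet pressure and impeller wear first.",
    "bearing overheating": "Verify lubrication schedule and alignment.",
}

def answer(question: str) -> str:
    """Pick the curated entry sharing the most words with the question."""
    q_words = set(question.lower().split())
    best_topic, best_score = None, 0
    for topic in KNOWLEDGE_BASE:
        score = len(q_words & set(topic.split()))
        if score > best_score:
            best_topic, best_score = topic, score
    if best_topic is None:
        return "Out of scope for this maintenance assistant."
    return KNOWLEDGE_BASE[best_topic]

print(answer("Why is the pump showing cavitation noise?"))
```

Because every answer comes from vetted material, such a system cannot hallucinate beyond its knowledge base; the price is that it only covers the narrow domain it was given.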

Example of a highly specialized solution

Summary

Although GenAI is still at an early stage of development, the technology has great potential. The next step in the development of AI is creating newer, lighter models that require less data to train. We just need patience: learn the tool gradually and build competencies, so as to use its full potential later.
