The ethics of the use of artificial intelligence is a new field.
Burlutsky discusses the implications of introducing artificial intelligence (AI), its possible scope, and the myths surrounding this phenomenon.
If we talk about the implementation of artificial intelligence, what is the timeframe: is it yesterday, today, or, on the contrary, the distant future?
The first thing to note is that "artificial intelligence" as such does not exist; the term refers to a family of AI methods, such as machine learning and deep learning. As for timing, I always put it this way: it is still too early to use AI everywhere, in all spheres of human activity, but in some areas it is already too late not to use it at all.
The reason is straightforward: the opportunities for this have appeared. A new "data religion" has even been founded, whose followers believe that the thinking and decision-making functions of a person, who cannot cope with the enormous amount of information, can now be handed over to AI, because it can process data faster and better.
Each year, the research company Gartner publishes its Hype Cycle report, which details the stage of development each technology has reached. There is emergence, growth, a peak, and decline, followed by disappointment, refinement, and, finally, mainstream adoption. According to the Gartner report, several technologies are approaching the productivity stage: 5G connectivity, virtual assistants, and neural networks.
The projected horizon for large-scale production and application of these technologies is two to three years from the publication of the 2018 report. So we are talking about 2020-2022, when almost all companies will be putting these technologies into practical use or testing them. By then the technologies will have become affordable, above all in terms of product cost. We will then accumulate enough information and experience to draw conclusions about the feasibility of adopting them and the benefits for business.
I recently watched the series "Billions." In one of the episodes, the head of a team of exchange analysts proposes replacing people with AI for better and faster analysis of market data. The series vividly shows the chaos that follows: panicked employees run to psychologists, complaining that they are about to be replaced by robots. But this is one of the myths. Different companies give different estimates of what share of human activity can be replaced by AI or automated.
The estimates range from 9% to 57%. This is normal; it is happening now and will continue to grow. AI algorithms will handle routine tasks faster and better. AI will be more accurate than the human eye; it will be faster, will find specific patterns in data more quickly, and will suggest possible solutions. But at this stage the final decision will still be made by a person. It is not yet possible to use AI everywhere, but it is certainly worth applying it to improve human decisions.
Also, nowadays in Western companies, and in Ukrainian ones too, workers increasingly complain about growing workloads. People are swamped by a wave of operational tasks, such as filling out forms and deleting or entering data. If this small operational work can be automated and delegated to AI, employees will have the time and energy to think and to search for non-standard solutions within their functional responsibilities. They will have more time for creativity, and some will finally be able to think about the future of their business. The implementation of AI will also help minimize mistakes caused by the so-called human factor.
It should be borne in mind that the drive toward total and full automation began in the mid-1980s. Yet to date, as far as I know, only the profession of elevator operator has been fully automated. Will we reach the point where people are replaced in other workplaces?
Yes, it will happen eventually. First of all, as I have noted, this will affect people whose duties consist of daily routine operations. For example, the driver's profession will soon become a target of automation.
But this does not apply to all drivers. Some of them must stay to "teach" driverless cars how to behave on the road. That is, such drivers will turn into teachers and trainers. It will be a refreshing change when these people receive new tasks and new functions from "communicating" with robotic autopilot systems. Through their example, experience, and behavior, drivers will teach the AI, because it learns from data like a child: it sees and repeats.
The introduction of new technologies, such as neural networks or virtual assistants, leads to the emergence of new professions or to a substantial upgrade of existing ones.
To operate an unmanned drone in the agricultural sector, you need nine to twenty specialists: a programmer, a pilot, a person who loads the flight task, an analyst who decrypts the received data, technicians... In China, "drone pilot," "complementary designer," and "AI coach" are already official professions, entered into the relevant state registries. This must be formalized, which in turn requires appropriate training or retraining.
Teachers from schools or colleges will be able to become mentors for AI because they have the necessary pedagogical skills.
How does robotics affect the economy and the labor market? Where is human work being replaced?
- If you imagine AI as a mechanism, a robot, then it really can, for example, move goods around a warehouse: around the clock, without weekends, sick leave, or vacations. But human labor is still cheaper than a robot's.
And only a person can base decisions not only on analytical factors and figures but also on external features that cannot be digitized. AI will not acquire intuition for a very long time; modern robots cannot reach the level of the heroes of science-fiction films. In 2016, the Future of Jobs study was presented at the World Economic Forum in Davos; it analyzed the critical skills workers would need in the future (at the time, this meant by 2020). Today you can check how accurate that forecast was.
The primary skills named were the ability to solve complex problems, critical thinking, and creativity. These become meaningful when automation frees up a person's time and the use of AI yields entirely new analytical information. In that case, the person's task remains extremely important: to make decisions and interpret the findings. This is something machines cannot yet do.
For example, you can create an algorithm that analyzes and forecasts sales for a retail outlet so that all inventory is sold with no leftovers. In this case, the AI can point out where a critical link has appeared in the chain, but a person must decide what to do about it. Although there are already examples of different behavior.
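The retail idea above can be sketched very simply. The moving-average forecast, the reorder rule, and the sample figures below are illustrative assumptions, not a description of any production system:

```python
# Minimal sketch of a "sell out without leftovers" reorder rule:
# forecast demand from recent sales and order only what the stock
# on hand cannot cover. All numbers here are made up.

def forecast_daily_demand(sales_history, window=7):
    """Forecast next-day demand as the mean of the last `window` days."""
    recent = sales_history[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(sales_history, stock_on_hand, lead_time_days=3):
    """Order just enough units to cover expected demand over the lead time."""
    expected = forecast_daily_demand(sales_history) * lead_time_days
    return max(0, round(expected - stock_on_hand))

sales = [12, 9, 11, 14, 10, 13, 12]  # units sold per day (sample data)
print(reorder_quantity(sales, stock_on_hand=20))  # → 15
```

A real system would add seasonality and promotions to the forecast, but the division of labor is the same as in the text: the algorithm flags the gap between expected demand and stock, and a person decides what to do about it.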
The Renault Formula 1 team uses a complex system of sensors, the Internet of Things, and machine learning in its cars to predict the failure of individual units and assemblies. The data is transmitted to the manufacturer automatically, so that a replacement part can be made in advance and sent to the right place.
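The predictive-maintenance idea can be illustrated with a minimal sketch: flag a part when a sensor reading drifts far from its recent baseline. The threshold, the readings, and the function name are assumptions for illustration, not Renault's actual telemetry pipeline:

```python
# Minimal predictive-maintenance sketch: a part "needs replacement"
# when its latest sensor reading deviates from the baseline of earlier
# readings by more than `threshold` standard deviations.
import statistics

def needs_replacement(readings, threshold=3.0):
    """Return True if the latest reading is anomalous vs. the baseline."""
    baseline, latest = readings[:-1], readings[-1]
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    if stdev == 0:
        return latest != mean  # flat baseline: any change is anomalous
    return abs(latest - mean) / stdev > threshold

temps = [88, 90, 89, 91, 90, 89, 104]  # unit temperatures; last one spikes
print(needs_replacement(temps))  # → True
```

Production systems use far richer models, but the principle is the one described: detect the deviation early enough that the replacement part is ready before the failure happens.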
In your opinion, is a basic income for workers whose jobs are replaced a panacea or a utopia?
- All talk of the dangerous consequences of mass AI implementation is not groundless, but it is much exaggerated. Total replacement of workers, AI taking over vital decision-making functions: all of this is real, but hardly so fatal. Society already has centuries of experience adapting to technological progress.
We will cope with these challenges from AI as well. But there is a problem. The post-industrial system of society, which has lately become too complicated, is vulnerable not so much to AI threats themselves as to the community's reaction to the risks of its spread.
The idea of a basic income is not utopian. After all, the problem of changes in the labor market will have to be solved somehow. Perhaps not today, but in 30 years a significant share of work will be automated, and people will be freed up who will need something to do and some money to live on.
Given that the use of AI brings companies additional income, we can say that, from the standpoint of economics, such a way out exists. That is, states or companies using AI will be able to allocate funds to people displaced by total automation.
A tax on robots, to put it simply?
- Yes. At the same time, the state should establish control bodies that will determine the degree of automation and estimate the additional profit, so that contributions to a fund similar to a pension fund can be calculated from these data. In Ukraine, there are no candidates for such a body yet.
However, a universal basic income will not solve these problems sustainably. A future in which 99% live at subsistence level thanks to robotic labor, while the 1% elite (the upper class that controls the robots) builds its empires and earns unprecedented wealth, is shaky and explosive, and therefore will not last long.
What can be entrusted to robots, and what should be left to people? Where do we have no right to apply algorithms?
- With great opportunity comes great responsibility. The ethics of AI use is therefore a serious new field, one still only beginning to be discussed in Ukraine. Yet the ethical aspect will play a key role, since whoever creates the code, program, or robot's algorithm builds specific characteristics into it, including behavior. Let's try to imagine a robot that performs a medical operation (and such robots really exist).
Depending on the principles laid down in the program, we get a corresponding output. To put it very simply, a robot may at some point cause harm during an operation if that situation was not accounted for in the program. As for stock trading, elements such as speculation, the use of insider information, or corruption may appear in an algorithm. And this would be a violation of the law, of business principles, and, of course, of ethical standards.
However, such an algorithm will fulfill its primary function better: it will earn more money than an "honest" algorithm.
Yes, but think about what corruption leads to: it destroys the state and creates chaos. In this case, we must solve two problems simultaneously: overcome the consequences of automation and combat unfair algorithms. However, AI is not necessarily evil. If an algorithm is configured correctly, it will work better than a human in areas that involve processing large amounts of data, such as CRM, ERP, and Big Data systems, modeling, and forecasting.
Then factories will not produce surplus goods, warehouses will not be overcrowded, the minimum necessary transport will deliver the optimal quantity of products to the shops where they will actually be bought, and nothing will need to be disposed of. In the end, this will also benefit the environment. That is, the application of AI can positively affect global problems. For example, Microsoft has a dedicated AI for Earth program that addresses global human issues with AI.
Another field is medicine. There are still questions about the effectiveness of AI here; the results are very volatile. After all, a person may notice additional factors and arrive at the correct diagnosis. The feasibility of using robots in medicine is therefore perceived ambiguously.
However, with the help of augmented reality, doctors can be trained before a real operation, working through all possible variants and complications. So the application of AI should be approached carefully.
In the military sphere, the two superpowers, the People's Republic of China and the US, are running a frantic race for leadership in AI development.
This is about using the technology for espionage, cyberattacks, and cryptographic protection. In China, AI implementation at the highest level extends to all spheres of activity, including the military.
And here Beijing is far ahead of Washington. China is deploying, on an unprecedented scale, a broad surveillance network that uses AI and machine learning to suppress domestic political dissent and to optimize the Communist Party's political control. In terms of figures, Beijing is investing more than $6 billion in AI development, while the US is spending three times less. And the US national strategy for AI was presented only at the beginning of this year.
Should we beware of an "uprising of the terminators," since this is one of the main arguments of AI's opponents?
Of course, there is a lot of talk about the worst-case scenario: a war of robots, like those already produced by Boston Dynamics.
However, this is unlikely today because of AI's limited capabilities. As long as the algorithms are those that a person creates, they will not reach the level at which they can decide on their own actions, and we have nothing to fear.
The scientist Boris Katz, who took part in the development of virtual assistants, is convinced that the modern approach to AI cannot make Siri or Alexa truly intelligent.
We have to go a different way: first correctly understand the principles of human intelligence, and only then use them to create smart machines.
There is, however, another point of view: the emergence of new hacking tools may allow interference in the control of robotic units, which could already lead to negative consequences. We are becoming vulnerable.
So the security issue is no less relevant than ethics. Systems that employ AI algorithms should be safe and secure, and access to the objects they control should be restricted. This applies not only to military systems but also to civilian ones belonging to critical infrastructure.
Fortunately, we have not yet seen examples of the hacking and interception of control over a combat robot.
However, there are already cases in which a driverless car has killed a pedestrian. This issue needs to be resolved. Not for nothing did the United Nations convene a meeting after the first such incident to decide how to react. Detailed consultations and discussions among lawyers are needed to work out recommendations common to all states.
Will computers be able to conquer humanity?
In theory, this is entirely possible. In my opinion, technology, robotics, and algorithms are developing much faster than humans have over the last two thousand years of their history. The pace is simply colossal. It is difficult to determine the level robots may reach, but they will certainly be able to equal a person.
Today, artificial intelligence lacks intuition and the ability to form associations. But we should keep in mind the pace of progress and the latest technologies, such as quantum computing, which will significantly expand its capabilities in the future.
Some organizations are working precisely on developing such algorithms, trying to give machines the properties of real intelligence, but they are still far from success. Incidentally, I like the work of science fiction writers, because most of what they wrote a few decades ago is, in one form or another, becoming reality today.
Among the most serious and dangerous prospects, I see not a robo-apocalypse or an uprising of machines at all, but the gradual degradation of people's capacity for altruism, love, friendship, and cooperation, caused by the spread of AI-centaurs (as the hybrid interaction of a real person with artificial-intelligence machines is sometimes called).