In recent years, artificial intelligence (AI) has spread rapidly across many areas of human life, moving from the exclusive domain of research laboratories into everyday use. One measure of this growth: from 2015 to 2020, total annual global corporate investment in artificial intelligence increased by $55 billion, according to the market intelligence platform Statista. As a consequence of this development, researchers in the field consider that AI has already reached a critical turning point in its evolution, although this progress also comes with certain risks.
This conclusion was reached by a panel of experts in computer science, public policy, psychology, sociology, and other disciplines known as the One Hundred Year Study on Artificial Intelligence, an ongoing project of the Stanford Institute for Human-Centered Artificial Intelligence dedicated to monitoring the progress of artificial intelligence and guiding its development. In its second report, published in September of this year, this multidisciplinary group evaluated AI's progress and risks between 2016 and 2021.
Other signs of AI progress appear across scientific fields, including the diagnosis of cancers and other diseases, research on the human genome, the discovery of new pharmaceutical products, and the design of driver assistance systems that will eventually be used by autonomous cars. At NotiPress we have reported on advances of this kind, such as a proposal to use artificial intelligence and X-ray analysis to diagnose Covid-19, published in the IEEE/CAA Journal of Automatica Sinica, and the possibility of turning to AI to understand the causes of extreme weather events, according to a study in the journal Geophysical Research Letters.
According to the report, the current risks of AI do not amount to a dystopian science-fiction scenario; they are subtler but equally worrisome. They include the deliberate misuse of artificial intelligence, such as falsifying images and videos (deepfakes) to spread misinformation or damage people's reputations, as well as the programming of bots to manipulate public discourse and opinion.
Other risks stem from assuming that AI is neutral and impartial, when decisions made through it may reflect biased historical data or even outright discrimination. This risk is particularly associated with crime prediction systems, where there is evidence of bias against communities of color. A similar danger exists in health care, where racial bias embedded in insurance algorithms can affect people's access to appropriate care. The United States National Security Commission on Artificial Intelligence has previously warned of these and other risks, including the potential for more powerful cyberattacks.
Although artificial intelligence is making great strides and promising multiple benefits, its progress is not without risks, as this and other expert reports point out. The sustained involvement of governments, academia, and industry will therefore be necessary to ensure that AI continues to evolve in the service of the common good.