Dr. Hiroaki Kitano, head of research at Sony AI, aspires to develop robots as capable as today’s top scientific minds. His goal is to launch the Nobel Turing Challenge and create an AI clever enough to win a Nobel Prize by 2050. The AI he envisions would, in his words, embody a hybrid form of science that can carry systems biology and other disciplines into the next stage.
Today’s AIs are the culmination of decades of scientific inquiry and testing, beginning in 1950 with Alan Turing’s foundational paper, “Computing Machinery and Intelligence.” Over the years, these AI and machine learning systems have evolved from laboratory curiosities into vital data-processing and analytical tools, and have proven a boon to scientific study across a range of academic fields.
They’ve helped scientists discover genetic markers ripe for novel therapies, hastened the discovery of powerful new medications and cures, and even assisted researchers in publishing their studies. Throughout this time, however, AI/ML systems have frequently been limited to processing massive data sets and performing brute-force computations, rather than driving the research itself.
Kitano wants to take them a step further by creating a constellation of software and hardware modules that interact dynamically to accomplish tasks, or what he refers to as an “AI Scientist.” He wrote in June that rather than rediscovering what we already know or attempting to emulate theorized human cognitive processes, the distinguishing feature of the Turing challenge is to field the system in an open-ended space in pursuit of major findings. The main objective is to reformulate and develop a new form of scientific discovery.
The value, he added, lies in developing machines that can make discoveries continuously and autonomously. These AI scientists will generate and verify as many hypotheses as possible, with the expectation that some will lead to major discoveries on their own or serve as the foundation for others. The system’s defining capability is the capacity to develop hypotheses exhaustively and validate them swiftly.
According to Kitano, it will begin as a collection of useful tools that automate portions of the research process, in both experimentation and data analysis. One of the first stages, for example, is laboratory automation as a closed-loop system rather than as isolated automated steps; the degree of autonomy may then gradually rise, allowing the system to generate and verify a greater variety of hypotheses. For the foreseeable future, however, it will remain a tool or companion for human scientists.
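The closed loop Kitano describes, exhaustively generating hypotheses and then automatically validating them through repeated experiments, can be illustrated with a deliberately simplified sketch. Everything here (the candidate genes, the simulated assay, the acceptance threshold) is hypothetical stand-in code for illustration, not any real Sony AI system:

```python
import random

random.seed(0)

# Toy "ground truth": which candidate factor actually drives the observed
# effect. A real closed-loop lab would replace this simulation with
# robotic experiments; it is a stand-in for illustration only.
TRUE_FACTOR = "gene_C"
CANDIDATES = ["gene_A", "gene_B", "gene_C", "gene_D"]

def generate_hypotheses(candidates):
    """Exhaustively enumerate simple hypotheses of the form
    'knockout of <factor> changes the phenotype'."""
    return [f"knockout of {c} changes the phenotype" for c in candidates]

def run_experiment(candidate):
    """Simulated noisy assay: returns a measured effect size."""
    base = 1.0 if candidate == TRUE_FACTOR else 0.0
    return base + random.gauss(0, 0.1)

def validate(candidate, n_replicates=5, threshold=0.5):
    """Closed-loop validation: repeat the experiment and accept the
    hypothesis only if the mean effect clears the threshold."""
    mean_effect = sum(run_experiment(candidate)
                      for _ in range(n_replicates)) / n_replicates
    return mean_effect > threshold

def closed_loop(candidates):
    """Generate hypotheses, test each one, and keep those that survive."""
    return [c for c in candidates if validate(c)]

print(closed_loop(CANDIDATES))  # -> ['gene_C']: noise cannot plausibly
                                # push the other candidates past 0.5
```

The design point of the sketch is the loop itself: hypothesis generation, experimentation, and validation feed one another without a human in between, which is what distinguishes closed-loop automation from the isolated automated steps common in labs today.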
Kitano noted that by delegating to AI scientists the heavy intellectual burden of generating ideas to investigate, their human counterparts will have more time to refine research techniques and decide which hypotheses are worth pursuing.
As always, avoiding the “Black Box Effect” and implicit bias, both in the software’s design and the data sets on which it is trained, will be critical to establishing and maintaining trust in the system.
Kitano explained that for findings to be accepted by the scientific community, they must be supported by persuasive evidence and logic. AI scientists will need to be able to describe the mechanisms underlying their findings, and those that lack such explanatory abilities will be less desirable than those that have them.
Once AI scientists become competent enough to handle complicated phenomena, Kitano conceded, they may discover things that human scientists do not immediately understand. So what happens if and when an AI scientist makes a discovery or devises an experiment that humans, even with an explanation, cannot immediately comprehend?
In theory, someone might operate highly autonomous AI scientists without constraints and without worrying whether their findings are understood, Kitano explained. Once such AI scientists are acknowledged for making significant scientific discoveries, Kitano is confident that operating parameters will be established to ensure safety and prevent misuse.
The introduction of an AI scientist capable of collaborating with human researchers may raise some difficult questions about who should be credited with the discoveries made. Is it the AI that generated the hypothesis and ran the experiment, the human who oversaw the effort, or the academic institution/corporate entity that owns the operation?
For example, Kitano cites a recent Australian court ruling that recognized the DABUS “artificial neural system” as an inventor on patent applications. He also mentions Satoshi Nakamoto, the pseudonymous creator of blockchain and Bitcoin, as an instance in which a crucial contribution was published on a weblog and taken seriously, he claims.
He added that if the developer of an AI scientist were determined to create a virtual persona of a scientist with an ORCID ID, whether to demonstrate a technological achievement, promote a product, or for any other motivation, it would be almost impossible to distinguish the AI from a human scientist.
But if this challenge results in a truly groundbreaking medical advancement that benefits all of humanity, does it matter whether the discovery was made by a human or a machine?