Nicolas Chaulet

AI-powered research and Lila Sciences


I recently came across Lila Sciences and a podcast with their CEO, and it got me thinking about the place of AI in scientific research. In short, Lila Sciences combines generative AI with an autonomous materials research lab to accelerate materials discovery, with applications ranging from drug discovery to carbon capture technologies.

In the podcast they start from the observation that scientific research is chaotic, and this resonated a lot with me. During my early career in academia doing Mathematics, I had many conversations with peers along these lines: where is this all going? Am I doing anything useful here? Should you target specific avenues that you feel will have a greater impact? Very often, good research is not about direct applications; it is about lifting a whole field and opening routes that were not accessible before. That is very true in Mathematics, and probably in many other disciplines.

So back to Lila: they recognize this chaotic nature and want to navigate the uncertainty of outcomes with more automation, and therefore a greater capacity to conduct research. They also insist that discovery cannot simply be brute-forced, and that is where they think human scientists will still have a role to play, setting broad directions and being creative. The machines then work through the details and evaluate the various pathways defined by humans. Drawing a parallel with what I have seen in academic research, I imagine this setup as a mass of PhD students (the generative AI together with the autonomous lab) working under the leadership of experienced researchers.

Now, in my opinion the question of where humans fit in this process is far from clear. In the short term, sure, you will need humans to set up the systems and engineer all of this. But assuming the machines reach a certain level of autonomy, I think we have to find ways for human scientists to keep learning, so they can still provide that broad vision and act as the experienced scientist guiding the system. Full automation sort of prevents that. You only have to use some code generation software to realize it.

As a software engineer, I learned by doing. By breaking code and fixing it. By reading documentation. Likewise, as a machine learning engineer I learned by running hundreds of experiments, by looking at specific examples in my datasets, by spending frustrating days not understanding why I couldn't replicate some research paper. In a world where machines do more and more of the tedious work, it becomes harder to learn by doing. I think that will be one of the challenges for companies like Lila: how do they create a system that allows humans to keep learning? All in all, I love what they are going after, and getting digital and physical machines to work together to produce innovative scientific research is fascinating in its own right.