Different problems call for different solutions, and the same holds true for organic molecules. Specific molecules serve specific purposes: some are used in digital lighting displays, some in disease treatment, some in plastics, paints, and pigments. What makes each organic molecule unique is its structure, the manner in which its atoms are connected. Often, identifying a target molecule is easier than devising the sequence of chemical reactions needed to make it.
Organic chemists use retrosynthetic analysis to work backward from the target molecule to available starting materials. With thousands of candidate reactions at every step, the chemist must choose intelligently among them.
Scientists at Columbia Engineering have developed a reinforcement learning technique that uses a neural network to choose the "best" reaction at each step of the retrosynthetic process.
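To make the idea concrete, here is a minimal sketch, in Python, of how a learned value estimate might pick the "best" reaction at a single retrosynthetic step. The names and interfaces (candidate_reactions, apply_retro, value_net) are hypothetical placeholders for illustration, not the researchers' actual code.

```python
# A minimal sketch (not the authors' code) of value-guided reaction choice.

def choose_best_reaction(target, candidate_reactions, apply_retro, value_net):
    """Greedily pick the reaction whose precursors look cheapest to make.

    candidate_reactions: reactions applicable to `target` (assumed given)
    apply_retro(target, rxn): returns the precursor molecules for that step
    value_net(mol): predicted number of steps needed to synthesize `mol`
    """
    best_rxn, best_cost = None, float("inf")
    for rxn in candidate_reactions:
        precursors = apply_retro(target, rxn)
        # Cost of taking this step: one reaction, plus the predicted
        # effort to synthesize every precursor it requires.
        cost = 1 + sum(value_net(p) for p in precursors)
        if cost < best_cost:
            best_rxn, best_cost = rxn, cost
    return best_rxn, best_cost
```

In this framing, the neural network's job is simply to score molecules by how hard they are to make, and the search repeats this greedy choice on each precursor until purchasable starting materials are reached.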
"This form of AI provides a framework for researchers to design chemical syntheses that optimize user specified objectives such synthesis cost, safety, and sustainability," according to Eureka Alert. The findings of the research were published in the journal ACS Central Science that showed more success rate than current strategies.
"Reinforcement learning has created computer players that are much better than humans at playing complex video games. Perhaps retrosynthesis is no different! This study gives us hope that reinforcement-learning algorithms will be perhaps one day better than human players at the 'game' of retrosynthesis," says Alán Aspuru-Guzik, professor of chemistry and computer science at the University of Toronto, who was not involved with the study.
The challenges of retrosynthetic planning resemble those of games like chess: the choices available at each step compound into an enormous search space. With, say, fifty applicable reactions per step, a ten-step route already implies on the order of 50^10 possible sequences. Earlier studies navigated this space with human-defined heuristic scoring functions; the new approach instead learns its strategy from experience.
"We're the first to apply reinforcement learning to the problem of retrosynthetic analysis," says Kyle Bishop, associate professor of chemical engineering. "Starting from a state of complete ignorance, where the model knows absolutely nothing about strategy and applies reactions randomly, the model can practice and practice until it finds a strategy that outperforms a human-defined heuristic."
The researchers measured the quality of a synthetic pathway with a simple performance indicator: the number of reaction steps it requires. They then trained the model's neural network to estimate this synthesis cost, the expected number of remaining steps, for each candidate reaction.
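In spirit, this resembles a standard bootstrapped value update: the predicted cost of a molecule is pulled toward one step plus the predicted cost of its cheapest precursors. The sketch below, in plain PyTorch, is an assumption-laden illustration rather than the paper's implementation; the fingerprint featurization, network size, and training signal are all placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CostNet(nn.Module):
    """Toy value network mapping a molecular fingerprint to an
    estimated number of remaining reaction steps (illustrative only)."""
    def __init__(self, fp_dim=2048):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(fp_dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, fingerprint):
        return self.mlp(fingerprint).squeeze(-1)

def update_cost_estimate(net, optimizer, fingerprint, cheapest_precursor_cost):
    """One bootstrapped update: regress the network's prediction toward
    1 step + the estimated cost of the best precursors found in self-play."""
    pred = net(fingerprint)
    target = torch.as_tensor(1.0 + cheapest_precursor_cost)
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Repeated over many self-played "games" of retrosynthesis, updates of this kind are how a model that starts out applying reactions at random can gradually learn which steps shorten the route.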
"We expect that our retrosynthesis game will soon follow the way of chess and Go, in which self-taught algorithms consistently outperform human experts," Bishop notes. "And we welcome the competition. As with chess-playing computer programs, competition is the engine for improvements in the state-of-the-art, and we hope that others can build on our work to demonstrate even better performance."