Mol2vec: Unsupervised Machine Learning Approach with Chemical Intuition
Git project: https://github.com/samoturk/mol2vec
https://mol2vec.readthedocs.io/en/latest/index.html?highlight=features#features-main-mol2vec-module
https://metacyc.org/
------
Mol2vec: Unsupervised Machine Learning Approach with Chemical Intuition
Sabrina Jaeger, Simone Fulle*, and Samo Turk*
BioMed X Innovation Center, Im Neuenheimer Feld 515, 69120 Heidelberg, Germany
J. Chem. Inf. Model., 2018, 58 (1), pp 27–35
DOI: 10.1021/acs.jcim.7b00616
Publication Date (Web): December 22, 2017
Copyright 2017 American Chemical Society
*E-mail: fulle@bio.mx; turk@bio.mx
Abstract
Inspired by natural language processing techniques, we here introduce Mol2vec, which is an unsupervised machine learning approach to learn vector representations of molecular substructures. Like the Word2vec models, where vectors of closely related words are in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing the vectors of the individual substructures and, for instance, be fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pretrained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and thus can also be easily used for proteins with low sequence similarities.
https://pubs.acs.org/doi/10.1021/acs.jcim.7b00616
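The abstract's core idea, that a compound is encoded by summing the learned vectors of its substructures, can be sketched in a few lines. This is a minimal, stdlib-only illustration: the substructure identifiers and embedding values below are made-up placeholders, not output of the actual Mol2vec model (in practice the substructures are Morgan-algorithm atom environments generated with RDKit, and the embeddings come from a Word2vec model trained on a large compound corpus).

```python
# Toy embedding table: substructure identifier -> 4-dimensional vector.
# Identifiers and values are hypothetical stand-ins for real Morgan identifiers
# and trained Word2vec embeddings.
EMBEDDINGS = {
    "2246728737": [0.1, -0.3, 0.5, 0.0],
    "3217380708": [0.2, 0.1, -0.1, 0.4],
    "864662311":  [-0.2, 0.0, 0.3, 0.1],
}
# Rare or unseen substructures are mapped to a single placeholder vector,
# analogous to the "UNK" token used for rare words in NLP.
UNSEEN = [0.0, 0.0, 0.0, 0.0]

def mol2vec(sentence):
    """Sum the embedding vectors of a compound's substructure 'words'."""
    total = [0.0] * 4
    for word in sentence:
        vec = EMBEDDINGS.get(word, UNSEEN)
        total = [t + v for t, v in zip(total, vec)]
    return total

# A compound represented as a "sentence" of substructure identifiers:
compound = ["2246728737", "3217380708", "864662311"]
vector = mol2vec(compound)
print(vector)
```

The resulting dense compound vector can then be fed into any supervised learning method, which is how the paper benchmarks Mol2vec against Morgan fingerprints on property and bioactivity data sets.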
Similar topics
» MaterialsProject.org - Services for Machine Learning and PyMatGen
» Machine Learning for Understanding Materials Synthesis
» Machine learning enables predictive modeling of 2-D materials
» 'AI brain scans' reveal what happens inside machine learning
» Team opens new frontier of vast chemical 'space', makes dozens of new chemical entities