Geordie Williamson (University of Sydney) just posted a personal and informal account of what a pure mathematician might expect when using tools from deep learning in their research, titled "Is deep learning a useful tool for the pure mathematician?". I very much recommend reading it in full. Besides a very accessible brief mathematical description of neural networks, it contains multiple highlights I very much enjoyed:
- I learned about the Machine Learning for the Working Mathematician seminar he was involved in during 2022.
- Referring to the paper Learning algebraic structures: preliminary investigations by Y.-H. He and M. Kim, he writes: "For a striking mathematical example [...] support vector machines are trained to distinguish simple and non-simple finite groups, by inspection of their multiplication table." Yang-Hui He was an invited speaker at our workshop Research Data in Discrete Mathematics.
- Section 5.2 explains how to train a neural network to predict the descents of a permutation, and how different representations of the input drastically influence the quality of the results.
- It briefly discusses searching for counterexamples in combinatorics and refers to Adam Zsolt Wagner's Constructions in combinatorics via neural networks for further reading.
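To make the descent example from Section 5.2 concrete, here is a minimal sketch of my own (not code from the paper): it computes the descent set of a permutation and contrasts two plausible input encodings one might feed to a network, since the section's point is that the choice of representation matters.

```python
# Descents of a permutation, plus two candidate input encodings.
# This is my own toy illustration of the setup, not code from the paper.

def descent_set(perm):
    """Positions i (1-indexed) with perm[i] > perm[i+1]."""
    return [i + 1 for i in range(len(perm) - 1) if perm[i] > perm[i + 1]]

def raw_encoding(perm):
    """Naive encoding: the one-line notation itself, as a list of integers."""
    return list(perm)

def one_hot_encoding(perm):
    """One-hot encoding: position i maps to an indicator block for perm[i].
    Such sparse 0/1 representations are often easier for a network to exploit
    than raw integer values."""
    n = len(perm)
    return [1 if perm[i] == j + 1 else 0 for i in range(n) for j in range(n)]

perm = (3, 1, 4, 2)
print(descent_set(perm))      # → [1, 3]
print(raw_encoding(perm))     # → [3, 1, 4, 2]
print(one_hot_encoding(perm)) # flat 0/1 vector of length 16
```

Both encodings carry exactly the same information; the observation in the account is that a trained network's accuracy can nevertheless differ drastically between them.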
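On the counterexample-search theme: Wagner's approach uses the cross-entropy method, with a neural network generating candidate constructions. A drastically simplified, network-free sketch of the same search loop (with a toy objective of my own choosing, purely illustrative) looks like this: a probability vector over bits stands in for the network, and the elite samples of each round pull that vector towards better constructions.

```python
import random

# Toy cross-entropy-style search, loosely in the spirit of Wagner's
# "Constructions in combinatorics via neural networks". In the real method
# a neural network proposes candidate constructions; here a plain vector of
# independent Bernoulli probabilities plays that role, and the objective is
# a toy stand-in for a combinatorial quantity one wants to maximise.

random.seed(0)
N = 30                        # length of the bitstring encoding a "construction"
SAMPLES, ELITE, ROUNDS = 200, 20, 40

def score(bits):
    # Toy objective: reward 1s but penalise adjacent pairs of 1s,
    # so the optimum is an alternating pattern.
    return sum(bits) - 2 * sum(bits[i] and bits[i + 1] for i in range(N - 1))

probs = [0.5] * N             # the "policy": one probability per bit
for _ in range(ROUNDS):
    pop = [[int(random.random() < p) for p in probs] for _ in range(SAMPLES)]
    pop.sort(key=score, reverse=True)
    elite = pop[:ELITE]
    # Cross-entropy update: nudge the policy towards the elite samples.
    probs = [0.9 * probs[i] + 0.1 * sum(b[i] for b in elite) / ELITE
             for i in range(N)]

best = max(pop, key=score)
print(score(best), best)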
In the concluding remarks, he writes: "The use of deep learning in pure mathematics is in its infancy." I very much agree, and this priority programme will be a perfect context in which to develop its maturity.