Talk Abstract: Deep learning in NLP has in recent years been dominated by recurrent and convolutional models, but new architectures are emerging that improve both translation quality and performance. Alex has built a translator for his team and clients using one of these: the Transformer. Unlike traditional translation models, the Transformer relies entirely on attention rather than recurrence, producing powerful NLP models in a fraction of the training time. Alex will explain how the translator was built, give a live demo, and discuss how the Transformer overcomes the pitfalls of RNN models.
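To give a flavor of what "attention instead of recurrence" means, here is a minimal sketch (not from Alex's talk) of single-head scaled dot-product attention, the core operation of the Transformer: every position attends to every other position in one matrix operation, with no sequential recurrence. Shapes and variable names are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention sketch.

    Q, K, V: (seq_len, d_k) arrays of queries, keys, and values.
    Returns a (seq_len, d_k) array: each output row is a weighted
    sum of value rows, weighted by query-key similarity.
    """
    d_k = Q.shape[-1]
    # Pairwise similarity between every query and every key,
    # scaled to keep softmax gradients well-behaved.
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key dimension (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Illustrative usage with random toy data.
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

Because every position is processed in parallel rather than one step at a time, training parallelizes far better than with an RNN, which is the source of the training-time advantage mentioned above.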
Bio: As a data scientist, Alexandre has worked on a wide range of use cases, from building fraud-prediction models to designing specialized recommendation systems. He especially loves applying deep learning to text and sports data. Even when he's playing sports or relaxing with friends, Alexandre sees numbers and patterns everywhere, which quickly sends him back to his laptop to try out new ideas. He has been a data scientist at Dataiku for more than two years, working on banking use cases such as loan delinquency prediction for leasing and factoring institutions, as well as marketing use cases for retailers.