The spectacular progress of deep learning over the last 15 years has been driven not only by the ability to harness massive computational power, but also by two relatively old abstractions.

While those older models are still in use, the last five years have seen graph neural networks and transformers emerge as fundamental building blocks of modern deep learning architectures in natural language understanding (BERT and its successors), protein research (DeepMind's AlphaFold) and image generation (CLIP, ViT, Stable Diffusion).

In this talk, Louis will discuss the evolution of these models and where we are today, providing a basis for the panel discussion.

A panel discussion with Ryan Greenhalgh, Jason O’Sullivan, Tony Seale and Martin Szummer – hosted by Louis Dominique Vainqueur.

A panel of experts will discuss and debate the future of geometric deep learning, transformers, and gargantuan computational power. The discussion will cover our panellists’ favourite tools, where ground-breaking innovation is likely to come from over the next 5+ years, and what those innovations may be.
