In Natural Language Processing (NLP), we often need to extract information from natural language into a structured format: for example, converting a natural-language query to SQL. Generative AI is a great tool for this task; however, transformer-based models, such as GPT, sometimes struggle to adhere to the desired output format and to cope with the variety of expression in natural language.

In this session we will delve into the intricacies of using transformer-based models to extract information from natural language into a predefined structure. We will explore effective practices, including Constrained Generative AI, a technique that can eliminate syntactic errors in the output. We will also cover the pre-processing, data generation, and augmentation methods that helped us achieve near-perfect accuracy in extracting dates and times from textual data. By the end of the session, you will gain insights into harnessing generative AI for structured output, opening the door to more accurate and efficient NLP applications. Basic familiarity with the transformer architecture is required.
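
As a taste of the idea, below is a minimal sketch of constrained decoding (not code from the session): a toy character-level "model" with random scores, where each decoding step masks out any token that would break a YYYY-MM-DD date pattern. All names and the regex here are illustrative assumptions; in a real system the same masking would be applied to the next-token logits of a transformer-based language model.

import random
import re

# Any prefix of a YYYY-MM-DD date (illustrative format constraint).
DATE_PREFIX = re.compile(r"\d{0,4}(-\d{0,2}(-\d{0,2})?)?")
VOCAB = list("0123456789-")  # toy character-level vocabulary

def allowed_tokens(partial):
    # Keep only tokens that leave the partial output a valid date prefix.
    return [t for t in VOCAB if DATE_PREFIX.fullmatch(partial + t)]

def fake_model_scores(partial):
    # Stand-in for a language model's next-token scores.
    return {t: random.random() for t in VOCAB}

def constrained_decode(max_len=10):
    out = ""
    while len(out) < max_len:
        candidates = allowed_tokens(out)  # the constraint: mask format-breaking tokens
        if not candidates:
            break
        scores = fake_model_scores(out)
        out += max(candidates, key=scores.get)  # greedy pick among allowed tokens
    return out

print(constrained_decode())  # the output can never violate the date pattern

Because invalid continuations are removed before a token is chosen, the generated string is syntactically well-formed by construction, which is the property the session refers to as eliminating syntactic errors.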


Technical Level: Introductory level/students (some technical knowledge needed)