S07 Generative AI
Overview
This course provides an in-depth introduction to generative artificial intelligence, with a focus on Large Language Models (LLMs) and diffusion models. It covers the foundations of LLMs, including their architecture, training, and fine-tuning, and explores their use in natural language processing tasks such as text generation, summarization, and translation.
Topics Covered
- Introduction to Generative AI
- Foundations of LLMs
- Transformer Architecture
- Training LLMs
- Fine-tuning LLMs for specific tasks
- Natural language processing tasks with LLMs (text generation, summarization, translation, etc.)
- Applications of LLMs
- Challenges and limitations of generative AI
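At the core of the transformer architecture listed above is scaled dot-product attention. The following is a minimal, illustrative sketch in plain Python (real models use tensor libraries and learned projections; the vectors here are made-up toy values):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(keys[0])  # key dimension used for scaling
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Toy example: one query attending over two key/value pairs
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
out = attention(Q, K, V)
```

The query is more similar to the first key, so the output lies closer to the first value vector than the second.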
Learning Objectives
- Understand the foundations of generative AI, including LLMs
- Understand the transformer architecture and how it enables LLMs to generate human-like language
- Understand the training process for LLMs
- Understand how to fine-tune LLMs
- Understand the challenges and limitations of generative AI
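To make the text-generation objective concrete: LLMs produce text one token at a time, each step conditioned on the tokens so far. A toy stand-in for this loop, using bigram word counts instead of a trained neural model (the corpus and names are illustrative):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count next-word frequencies for each word (a stand-in for a trained LM)."""
    words = text.split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, start, max_tokens=5):
    """Greedy decoding: repeatedly append the most frequent next word."""
    out = [start]
    for _ in range(max_tokens):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no continuation seen in training data
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran"
model = train_bigram(corpus)
text = generate(model, "the")
```

Real LLMs replace the count table with a transformer that predicts a probability distribution over the whole vocabulary, but the generation loop has the same shape.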
Prerequisites
- Basic programming knowledge (Python)
- Basic knowledge of machine learning fundamentals (neural networks, training, etc.)
- Familiarity with natural language processing concepts
Course Format
The course consists of two lectures and hands-on coding exercises. The lectures cover the theoretical foundations of LLMs, and the exercises provide practical experience with prompt engineering and with training and fine-tuning these models.
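As a taste of the prompt-engineering exercises, a common pattern is assembling a few-shot prompt: a task description, a handful of worked examples, and the query to complete. A minimal sketch (the task, examples, and format are illustrative; the actual exercises may target a specific LLM API):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task description, worked examples, then the query."""
    lines = [task, ""]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
        lines.append("")
    # The model is expected to continue after the final "Output:"
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Translate English to French.",
    [("hello", "bonjour"), ("thank you", "merci")],
    "good morning",
)
```

The exercises explore how the choice and ordering of examples affect model output.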
Assessment
Multiple-choice question (MCQ) test