Duration
5 Days
Level
Intermediate Level
This comprehensive course provides an in-depth exploration of Generative AI and Large Language Models (LLMs). Through a series of modules and hands-on lab exercises, participants gain a solid understanding of key concepts, tools, and techniques in Generative AI. The course equips participants with the knowledge and skills to apply generative models and LLMs across a range of applications, including image generation, language processing, and reinforcement learning from human feedback.
- Developers and software engineers interested in learning Generative AI
- AI enthusiasts and professionals aiming to build intelligent and innovative solutions
- Data scientists and machine learning practitioners seeking to enhance their skills in working with OpenAI models
- A quick recap of Generative AI & LLMs
- Transformer Architecture
- Working with LangChain
- AutoGPT
- VAEs for image generation
- Getting started with Image Generation
- Conditional Image Generation based on class labels or text description
- Concepts related to Style Transfer
- Approaches for GANs
- CycleGAN
- pix2pix
- Understanding OpenAI’s DALL-E Image Generation Process
- Building Diffusion Models from Scratch
- Starting with Noise and Progressing to Final Images
- Developing Intuition at Each Step
- Midjourney: Demonstration App
- Lab(s): DALL-E 2 through Azure OpenAI Services
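The "starting with noise" intuition in the diffusion topics above can be previewed with a toy sketch (not part of the course materials). It implements the closed-form forward-noising step q(x_t | x_0) used when building diffusion models from scratch; the flattened "image", seed, and linear noise schedule are all illustrative choices:

```python
import math
import random

def forward_diffusion(x0, t, betas, rng):
    """Noise a clean sample x0 directly to step t via q(x_t | x_0)."""
    # alpha_bar is the cumulative product of (1 - beta) up to step t.
    alpha_bar = 1.0
    for beta in betas[: t + 1]:
        alpha_bar *= 1.0 - beta
    noise = [rng.gauss(0.0, 1.0) for _ in x0]
    # Signal is scaled by sqrt(alpha_bar), noise by sqrt(1 - alpha_bar).
    xt = [math.sqrt(alpha_bar) * x + math.sqrt(1.0 - alpha_bar) * n
          for x, n in zip(x0, noise)]
    return xt, noise

rng = random.Random(0)
x0 = [rng.gauss(0.0, 1.0) for _ in range(64)]        # toy flattened "image"
betas = [1e-4 + (0.02 - 1e-4) * i / 999 for i in range(1000)]  # linear schedule
xt, noise = forward_diffusion(x0, 999, betas, rng)
```

At the final step almost no signal remains, so `xt` is nearly pure noise; a trained model learns to reverse this process step by step.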
- Introduction to Large Language Models
- Pre-training Large Language Models
- Computational Challenges in Training LLMs
- Scaling Laws and Compute-Optimal Models
- Fine-tuning Techniques
- Instruction Fine-tuning
- Fine-tuning on a Single Task
- Multi-task Instruction Fine-tuning
- Parameter Efficient Fine-tuning (PEFT)
- PEFT Techniques: LoRA and Soft Prompts
- Model Evaluation and Benchmarks
- Evaluating Fine-tuned Models
- Introduction to Benchmarks
- Fine-tuning Lab Setup on Azure OpenAI Studio
- Lab Exercise Walkthrough
- Fine-tuning on a given use case
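The parameter-efficiency idea behind LoRA, listed in the PEFT topics above, can be sketched in a few lines (a toy illustration, not course material; the dimensions and zero/constant initializations are illustrative). The frozen weight W is never updated; only the small low-rank pair (A, B) is trained, and the effective weight is W + (alpha / r) * B @ A:

```python
def matmul(X, Y):
    """Plain nested-list matrix multiply for the toy example."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add(X, Y, scale=1.0):
    """Elementwise X + scale * Y."""
    return [[x + scale * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d, k, r, alpha = 8, 8, 2, 8
W = [[float(i == j) for j in range(k)] for i in range(d)]  # frozen base weight (toy)
A = [[0.01] * k for _ in range(r)]                         # trainable down-projection
B = [[0.0] * r for _ in range(d)]                          # trainable up-projection, zero init

def effective_weight():
    # LoRA adapts the layer as W + (alpha / r) * B @ A.
    return add(W, matmul(B, A), scale=alpha / r)
```

Because B starts at zero, the adapted model is initially identical to the base model, and only r * (d + k) parameters are trained instead of d * k.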
- Introduction
- Overview of Fine-tuning Large Language Models
- Importance of Aligning Models with Human Values
- Reinforcement Learning from Human Feedback (RLHF)
- Introduction to RLHF
- Obtaining Feedback from Humans
- Developing a Reward Model for RLHF
- Fine-tuning with Reinforcement Learning
- Fine-tuning Process using RLHF
- Techniques for Optimizing RLHF Performance
- Optional Video: Proximal Policy Optimization
- Addressing Reward Hacking
- Scaling Human Feedback
- Challenges and Considerations
- Strategies for Collecting and Incorporating Large-scale Feedback
- Evaluation and Assessment
- Methods for Evaluating Fine-tuned Language Models
- Assessing Model Performance in Alignment with Human Values
- Applications of Generative Models
- Importance and Usefulness
- Potential Applications
- Text Generation Use Cases
- Audio Synthesis Use Cases
- Text-to-Image Generation
- Building NLP Applications using OpenAI API
- Summarization, Text Classification, and Fine-tuning GPT Models
- Building Midjourney Clone Application
- Using OpenAI DALL-E and Stable Diffusion on Hugging Face
- Generative AI Project Lifecycle
- Using LLMs in Applications
- Interacting with External Applications
- Helping LLMs Reason and Plan with Chain-of-Thought
- Program-aided Language Models
- LLM Application Architectures
- Responsible AI Considerations
- Concepts for LangChain Projects
- Utilizing embeddings and vector data stores
- Enhancing LangChain performance
- LangChain Framework
- Taking LLMs out of the box
- Integrating LLMs into new environments using memories, chains, and agents
- Developing a question-answering application with LangChain, OpenAI, and Hugging Face Spaces
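The embedding-and-vector-store retrieval step that frameworks such as LangChain automate in a question-answering pipeline can be previewed framework-free (a toy sketch, not course material; the hand-made 3-d vectors stand in for a real embedding model's output):

```python
import math

# Toy "vector store": documents mapped to embedding vectors.
docs = {
    "LoRA is a parameter-efficient fine-tuning method.": (1.0, 0.1, 0.0),
    "CycleGAN does unpaired image-to-image translation.": (0.0, 1.0, 0.2),
    "RLHF aligns models with human preference data.": (0.1, 0.0, 1.0),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]
```

A real pipeline embeds the user's question with the same model used for the documents, retrieves the top-k matches, and passes them to the LLM as context for answering.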
Participants should have attended the "Generative AI for Developers | Level 1" course or have equivalent knowledge.