
Generative AI NLP Specialization | Level 3

Dive deeper into LLMs and their fine-tuning techniques

Duration

3 Days

Level

Intermediate

Design and Tailor this course

As per your team's needs

Overview

Welcome to “Generative AI NLP Specialization | Level 3”, an advanced course that will immerse you in the fascinating world of fine-tuning Large Language Models (LLMs) and harnessing them for real-world applications. This comprehensive journey will equip you with the knowledge and skills needed to fine-tune LLMs effectively and align them with human values. Through a structured exploration of topics and practical exercises, you’ll delve into the nuances of reinforcement learning, parameter-efficient fine-tuning, and the evaluation of fine-tuned models. 

By the end of this course, you will have gained a deep understanding of fine-tuning Large Language Models, including instruction fine-tuning, parameter-efficient fine-tuning, and techniques such as LoRA and Soft Prompts.
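To make these techniques concrete, here is a minimal sketch of LoRA-style parameter-efficient fine-tuning using the Hugging Face peft library. The base model ("gpt2") and the LoRA hyperparameters (r, lora_alpha, lora_dropout) are illustrative placeholders, not values prescribed by the course.

```python
# Minimal LoRA sketch with Hugging Face `peft`.
# Model name and hyperparameters are illustrative placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works here

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=32,    # scaling factor applied to the adapter output
    lora_dropout=0.1,
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of weights are trainable
```

Because LoRA trains only small low-rank adapter matrices while the base-model weights stay frozen, fine-tuning becomes tractable on modest hardware; Soft Prompts push this further by training only a handful of virtual token embeddings.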

Who Should Attend
  • Developers and software engineers interested in learning Generative AI
  • AI enthusiasts and professionals aiming to build intelligent and innovative solutions
  • Data scientists and machine learning practitioners seeking to enhance their skills in working with GenAI models
Course Curriculum
  • Introduction to Large Language Models
  • Pre-training Large Language Models
  • Computational Challenges in Training LLMs
  • Scaling Laws and Compute-Optimal Models
  • Fine-tuning Techniques
    • Instruction Fine-tuning
    • Fine-tuning on a Single Task
    • Multi-task Instruction Fine-tuning
    • Parameter-Efficient Fine-tuning (PEFT)
    • PEFT Techniques: LoRA and Soft Prompts
  • Model Evaluation and Benchmarks
    • Evaluating Fine-tuned Models
    • Introduction to Benchmarks
  • Introduction
  • Overview of Fine-tuning Large Language Models
  • Importance of Aligning Models with Human Values
  • Reinforcement Learning from Human Feedback (RLHF)
    • Introduction to RLHF
    • Obtaining Feedback from Humans
    • Developing a Reward Model for RLHF
  • Fine-tuning with Reinforcement Learning
    • Fine-tuning Process using RLHF
    • Techniques for Optimizing RLHF Performance
    • Optional Video: Proximal Policy Optimization
    • Addressing Reward Hacking
  • Scaling Human Feedback
    • Challenges and Considerations
    • Strategies for Collecting and Incorporating Large-scale Feedback
  • Evaluation and Assessment
    • Methods for Evaluating Fine-tuned Language Models
    • Assessing Model Performance in Alignment with Human Values
  • Lab: Transforming Human Interactions with AI (RLHF), illustrated in the sketch below
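As a preview of what the RLHF lab explores, below is a hedged sketch of the reward-scoring step at the heart of RLHF: a reward model assigns a scalar score to each candidate response, and those scores drive the policy update (for example with PPO). The model name here is an untrained stand-in; a real reward model is trained on human preference comparisons.

```python
# Sketch of RLHF reward scoring. `distilbert-base-uncased` is a stand-in;
# a real reward model would be fine-tuned on human preference data.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

reward_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(reward_name)
reward_model = AutoModelForSequenceClassification.from_pretrained(reward_name, num_labels=1)

prompt = "Explain RLHF in one sentence."
responses = [
    "RLHF fine-tunes a model against a reward learned from human preferences.",
    "I don't know.",
]

# Score each (prompt, response) pair; a higher scalar means "preferred by humans".
inputs = tokenizer([prompt + " " + r for r in responses],
                   return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    rewards = reward_model(**inputs).logits.squeeze(-1)
print(rewards)  # these scalars would feed the PPO update on the policy model
```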
Prerequisites

Participants should have attended the Generative AI NLP Specialization | Level 2 course or have equivalent knowledge.
