
Generative AI for Developers | Level 1

Building Intelligent Solutions with Generative AI

Duration

5 Days

Level

Intermediate Level

Design and Tailor this course

As per your team's needs

Course Overview

The “Generative AI for Developers | Level 1” course is a comprehensive program designed to equip participants with an in-depth understanding of generative AI, a cutting-edge field in which models learn from data to create new content such as text, images, and code. Participants will gain proficiency in the fundamental concepts, methodologies, and practical applications of generative AI, enabling them to leverage this innovative technology in their own projects.

Who Should Attend
  • Developers and software engineers interested in learning Generative AI
  • AI enthusiasts and professionals aiming to build intelligent and innovative solutions
  • Data scientists and machine learning practitioners seeking to enhance their skills in working with OpenAI models
Course Outline

Recap of Deep Learning – 1 hour

  • Deep Learning Basics & Artificial Neural Network Overview
  • Building the Vocabulary – Terms & Concepts 
  • Training the Neural Networks
  • Key Types of Neural Networks – CNN, RNN, LSTM, GANs 
  • Lab(s): Working with Neural Networks 
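
For orientation, here is a minimal sketch of the kind of feed-forward network the recap works with. The framework (TensorFlow/Keras) and the synthetic data are illustrative assumptions, not a prescription of the lab environment.

```python
# Minimal feed-forward network sketch (framework and data are illustrative only).
import numpy as np
import tensorflow as tf

# Synthetic data standing in for a real dataset: 1,000 samples, 20 features, 3 classes.
X = np.random.rand(1000, 20).astype("float32")
y = np.random.randint(0, 3, size=1000)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layers learn intermediate features
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),  # output layer gives class probabilities
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training loop: forward pass, loss, backpropagation, weight update.
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```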

Deep Learning on Azure – 3 hours

  • Azure AI Framework
  • Various ways to develop AI Applications on Azure
  • Examples of each of the options 
  • Getting started with Azure AI 
  • Lab(s): Getting started with Azure OpenAI Studio

Evolution of Natural Language Processing – 4 hours

  • Introduction to NLP
  • Rule-Based Approaches: Keyword matching and grammar rules
  • Statistical Methods: n-grams, probabilistic context-free grammars
  • Machine Learning for tasks like part-of-speech tagging, named entity recognition
  • Word embeddings: Word2Vec, GloVe for semantic relationships
  • Attention Mechanisms: Machine translation, text summarization, sentiment analysis
  • Transformers: Architecture, self-attention, pre-trained language models like BERT, GPT
  • Lab(s): NLP use cases with key approaches
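
To make the word-embedding topic above concrete, the sketch below trains a toy Word2Vec model on a three-sentence corpus. The gensim library and the corpus are illustrative assumptions; the lab may use different tooling.

```python
# Word2Vec sketch (gensim and the toy corpus are illustrative choices).
from gensim.models import Word2Vec

# A toy corpus: each document is a list of tokens.
corpus = [
    ["generative", "models", "learn", "data", "distributions"],
    ["discriminative", "models", "learn", "decision", "boundaries"],
    ["word", "embeddings", "capture", "semantic", "relationships"],
]

# Train a small skip-gram model; the tiny dimensions suit the tiny corpus.
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=50)

# Each word now has a dense vector; nearby vectors indicate related usage.
vector = model.wv["models"]
print(model.wv.most_similar("models", topn=3))
```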

Getting started with Generative AI – 2 hours

  • Understanding Generative AI 
  • Types of Generative Models – autoregressive models, variational autoencoders, and generative adversarial networks (GANs)
  • Categorizing generative models based on learning algorithms: likelihood-based vs. likelihood-free
  • Motivation for generative modeling compared to discriminative models
  • Characteristics of generative models: density estimation, data simulation, representation learning
  • Lab(s): Getting started with Generative AI Models
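
To make the generative-vs-discriminative motivation concrete, here is a short sketch contrasting a likelihood-based generative classifier with a discriminative one on the same data. scikit-learn and the synthetic dataset are illustrative assumptions.

```python
# Generative vs. discriminative sketch (scikit-learn and synthetic data are illustrative).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB             # generative: models p(x | y) and p(y)
from sklearn.linear_model import LogisticRegression    # discriminative: models p(y | x) directly

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

generative = GaussianNB().fit(X_train, y_train)
discriminative = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("GaussianNB accuracy:        ", generative.score(X_test, y_test))
print("LogisticRegression accuracy:", discriminative.score(X_test, y_test))
```

Because the generative model estimates a full joint distribution, it can also simulate new samples, which is the property generative AI builds on.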

Large Language Models (LLMs) – 4 hours

  • Introduction to LLMs
  • Use cases and tasks of LLMs
  • Architecture of LLMs
  • Generative models for text: language modeling with autoregressive models, variational autoencoders (VAEs), and transformers
  • Evolution of text generation techniques
  • Understanding the role of vector databases
  • Prompting and Prompt engineering
  • Lab(s): Prompt Engineering
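
As a preview of the prompt-engineering lab, the sketch below phrases the same task as a zero-shot and a few-shot prompt; the task and example reviews are purely illustrative.

```python
# Prompt-engineering sketch: the same task as a zero-shot and a few-shot prompt.
task = "Classify the sentiment of the review as positive or negative."
review = "Setup was quick and the interface is intuitive."

# Zero-shot: instruction + input only.
zero_shot_prompt = f"{task}\n\nReview: {review}\nSentiment:"

# Few-shot: worked examples steer the output format and label vocabulary.
examples = [
    ("The battery lasts all day.", "positive"),
    ("The screen cracked within a week.", "negative"),
]
shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
few_shot_prompt = f"{task}\n\n{shots}\nReview: {review}\nSentiment:"

print(zero_shot_prompt)
print("---")
print(few_shot_prompt)
```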

Evaluating LLMs – 3 hours

  • Significance and impact of evaluation on natural language understanding and generation tasks
  • Various Evaluation Metrics used to assess the quality and performance of LLMs
    • Perplexity
    • BLEU
    • ROUGE
    • METEOR
    • Other metrics commonly used in machine translation, summarization, and text generation tasks
  • Human Evaluation in assessing LLMs
  • Intrinsic & Extrinsic Evaluation
  • Dataset Quality and Bias
  • Interpretability and Explainability
  • Robustness and Generalization
  • Evaluation in Low-Resource and Multilingual Settings
  • Fairness and Bias Evaluation
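
To ground the metrics listed above, the sketch below computes perplexity from per-token log-probabilities and a smoothed sentence-level BLEU score. The numbers are made up and NLTK is an illustrative library choice.

```python
# Evaluation-metric sketch: perplexity and BLEU (values and library choice are illustrative).
import math
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Perplexity = exp(mean negative log-likelihood of the tokens under the model).
token_log_probs = [-1.2, -0.7, -2.3, -0.9]   # made-up per-token log p values
perplexity = math.exp(-sum(token_log_probs) / len(token_log_probs))
print(f"perplexity: {perplexity:.2f}")

# BLEU compares candidate n-grams against one or more reference texts.
reference = [["the", "model", "generates", "fluent", "text"]]
candidate = ["the", "model", "produces", "fluent", "text"]
bleu = sentence_bleu(reference, candidate, smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {bleu:.3f}")
```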

Fine Tuning Basics – 2 hours

  • Background and concept
  • Curse of dimensionality
  • Graphical models (Bayesian networks)
  • Comparison of generative and discriminative models
  • Lab(s): Fine Tuning for specific tasks
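
A compact sketch of task-specific fine-tuning is shown below; the Hugging Face Transformers stack, the distilbert-base-uncased checkpoint, and the IMDB dataset are assumptions made for illustration and may differ from the lab setup.

```python
# Fine-tuning sketch (library, checkpoint, dataset, and hyperparameters are illustrative).
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small slice of IMDB sentiment data, tokenized for the model.
dataset = load_dataset("imdb", split="train[:2000]").train_test_split(test_size=0.2)
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)
dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()        # updates the pre-trained weights on the task data
print(trainer.evaluate())
```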

Scaling Human Feedback – 2 hours

  • Challenges and considerations in scaling human feedback
  • Strategies for collecting and incorporating large-scale feedback

Lab(s): Text Generation on Azure OpenAI – 2 hours
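
A minimal sketch of what this lab involves: calling an Azure OpenAI chat deployment through the openai Python SDK. The endpoint, key, API version, and deployment name are placeholders for your own resource.

```python
# Text-generation sketch against Azure OpenAI (all names below are placeholders).
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # placeholder: the name of your Azure OpenAI deployment
    messages=[
        {"role": "system", "content": "You are a concise technical writer."},
        {"role": "user", "content": "Write a two-sentence summary of what a large language model is."},
    ],
    temperature=0.7,   # higher values give more varied text
    max_tokens=120,    # caps the length of the generated output
)
print(response.choices[0].message.content)
```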

BERT Model and OpenAI’s GPT Models – 5 hours

  • Understanding the architecture of BERT
  • Introduction to OpenAI’s GPT models
  • Generating text using GPT models
  • Exploring image generation use cases
  • Lab(s): Working with GPT Model
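
To complement the GPT-focused lab, here is a small sketch of BERT-style masked-language-model inference; the Hugging Face pipeline API and the bert-base-uncased checkpoint are illustrative assumptions.

```python
# BERT sketch: masked-token prediction with a fill-mask pipeline (checkpoint is illustrative).
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT reads the whole sentence bidirectionally and predicts the masked token.
for prediction in fill_mask("Generative AI can [MASK] source code from natural language."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```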


Introduction to AutoGPT – 1 hour

  • Understanding how AutoGPT works
  • Architecture and autonomous iterations
  • Memory management and multi-functionality
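
For intuition, the loop below is a simplified, conceptual sketch of the plan-act-observe cycle that autonomous agents in the AutoGPT style iterate; it is not AutoGPT's actual implementation, and call_llm / run_tool are hypothetical helpers you would supply.

```python
# Conceptual agent-loop sketch (not AutoGPT's real code; call_llm and run_tool are hypothetical).
def run_agent(goal, call_llm, run_tool, max_iterations=5):
    memory = []  # short-term memory of previous thoughts and observations
    for _ in range(max_iterations):
        # 1. Plan: ask the model for the next action given the goal and memory so far.
        plan = call_llm(goal=goal, memory=memory)
        if plan["action"] == "finish":
            return plan["result"]
        # 2. Act: execute the chosen tool (web search, file write, code execution, ...).
        observation = run_tool(plan["action"], plan["arguments"])
        # 3. Observe: store the outcome so the next iteration can reason over it.
        memory.append({"thought": plan["thought"],
                       "action": plan["action"],
                       "observation": observation})
    return memory  # iteration budget exhausted; return the trace for inspection
```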

GPT-4: Fully Autonomous Models – 1 hour

  • Overview of GPT-4 and its unsupervised operation
  • The future of generative agents

AutoGPT Use Cases – 2 hours

  • Examples of using AutoGPT framework in various applications:
    • Writing code
    • Building an app
    • Ordering a pizza
    • Researching
    • Preparing podcasts
    • Improving Google Workspace
    • Philosophizing
    • Ethical considerations in AI
Prerequisites
  • Familiarity with programming concepts and proficiency in a programming language (Python is recommended)
  • Basic understanding of statistics, machine learning, and deep learning concepts
  • Familiarity with a cloud platform such as AWS, Azure, or GCP
  • Knowledge of Jupyter Lab or Google Colaboratory notebooks
