This course provides an introduction to essential concepts and terminology in the field of generative artificial intelligence (GenAI). It offers an overview of the historical background and current stakeholder landscape, and it delves into the important factors to consider when implementing GenAI in government and public service settings.
We hope the following information broadens your insight into GenAI and empowers you to find applications for this knowledge in your day-to-day work.
Chapter 1 - Generative AI for All
What Sets GenAI Apart from Traditional AI
The History of GenAI and The Current GenAI Landscape
Mechanics Behind GenAI Models
Types of GenAI Models
User Interface of GenAI Models
Chapter 2 - GenAI in Government and Public Services
Chapter 3 - Responsible GenAI Implementation
One of the most fascinating aspects of GenAI is its ability to learn from vast amounts of data by analyzing the patterns, structures, and relationships within that data, and then to generate entirely novel outputs, such as text, images, audio, video, and even code, that reflect the intricacies of the original content.
GenAI Output Types:
Text Output
Audio Output
Image Output
Video Output
Code Output
This ability to generate new content is what sets GenAI apart from traditional AI, which focuses on tasks such as classification, prediction, optimization, and decision-making based on existing data and pre-defined rules.
While GenAI represents a subset of AI techniques, its focus on learning from data and generating new content expands the scope of AI beyond traditional problem-solving and decision-making tasks.
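To make the distinction concrete, here is a deliberately simple sketch of the generative idea in Python: a toy word-level Markov chain that "learns" which word tends to follow which from a small example sentence and then samples new word sequences. Real GenAI models are vastly more sophisticated, but the contrast with traditional AI holds: instead of assigning a label to existing data, the model produces new content. The example text and helper names below are illustrative only.

```python
import random
from collections import defaultdict

# Toy illustration of the generative idea: learn word-to-word transition
# patterns from example text, then sample brand-new sequences from them.
corpus = "the agency reviews the report and the agency publishes the report online"

# "Training": count which word tends to follow which.
transitions = defaultdict(list)
words = corpus.split()
for current_word, next_word in zip(words, words[1:]):
    transitions[current_word].append(next_word)

# "Generation": start from a word and repeatedly sample a plausible next word.
def generate(start="the", length=8):
    output = [start]
    for _ in range(length):
        followers = transitions.get(output[-1])
        if not followers:
            break
        output.append(random.choice(followers))
    return " ".join(output)

print(generate())  # e.g. "the agency publishes the report and the agency reviews"
```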
It is essential to remember that behind the scenes, these systems are the result of decades of meticulous research, algorithms, and the collaborative efforts of dedicated individuals and organizations driving the frontiers of artificial intelligence. The history of GenAI development is complex and intertwined with the broader history of artificial intelligence.
Understanding the history of GenAI means recognizing how various stakeholders have shaped and been shaped by this technology.
The field of GenAI has undergone significant change to reach its current state of unprecedented growth. While GenAI has come a long way, it is still rapidly evolving. An increasing number of stakeholders, including researchers, government agencies, policy experts, regulatory authorities, industry leaders, and end users, are engaging with GenAI technology and contributing to its responsible development, deployment, and adoption in federal and government spaces.
Now that we have an understanding of the history of GenAI and today's evolving landscape, let's look at the mechanics of a GenAI model.
Several components are essential to a functioning GenAI model.
Here are a few of them:
Training Data
Computing Power
Model Architecture
Input Prompt
Generated Output
First, a GenAI model needs to be trained. Training is an iterative and resource-intensive process.
Training Data:
Used to capture patterns, structures, and dependencies, allowing GenAI models to learn and generate new content. High-quality, representative training data is crucial; its size, diversity, and quality significantly impact the performance and creativity of the GenAI system.
Computing Power:
Significant computational resources are required for GenAI applications, often relying on Graphics Processing Units (GPUs) for faster processing and efficient handling of complex algorithms. The computing power required depends on factors such as the model's size, task complexity, and available hardware resources, and it can be distributed across multiple machines or provided by the cloud.
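As a small illustration, the sketch below (assuming the PyTorch library, one common choice among many) checks whether a GPU is available and moves a tiny model and a batch of data onto it; the same pattern scales up to distributed and cloud-based training.

```python
import torch

# Check whether a GPU is available; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

# Any model and data moved to this device will use it for computation.
model = torch.nn.Linear(128, 10).to(device)
batch = torch.randn(32, 128, device=device)
predictions = model(batch)
print(predictions.shape)  # torch.Size([32, 10])
```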
Model Architecture:
Defines the structure and design of the GenAI model. Different types of architectures can be used, depending on the specific task and the desired output. The architecture determines how the model learns and generates new content.
The learnable components of the model that are adjusted during the training process to improve the model's performance are called parameters. They determine the model's behavior and output and are represented as numerical values or weights.
Deep Neural Networks (DNNs) are a class of artificial neural networks that consist of multiple (deep) layers of interconnected nodes, known as neurons or units. DNNs have quickly become the dominant approach for GenAI model architectures because they can capture complex patterns in input data and generate cohesive content.
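The sketch below, again assuming PyTorch, defines a small deep neural network of stacked layers and counts its learnable parameters; the layer sizes are arbitrary and chosen only for illustration.

```python
import torch.nn as nn

# A small deep neural network: several fully connected layers of "neurons",
# each followed by a nonlinearity.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# The parameters (weights and biases) are the numerical values the training
# process adjusts; production GenAI models have billions of them.
total_parameters = sum(p.numel() for p in model.parameters())
print(f"Learnable parameters: {total_parameters:,}")  # about 269,000
```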
Let's take a look at a couple of types of architectures.
Deep Learning Architectures
Generative Adversarial Networks (GANs): Consist of two neural networks—a generator and a discriminator. The generator generates new samples, such as images or text, while the discriminator distinguishes between real and generated samples. Through an adversarial training process, GANs learn to generate content that is increasingly indistinguishable from real data.
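The following is a minimal, illustrative GAN skeleton in PyTorch, not a production recipe: the generator turns random noise into fake two-dimensional samples, the discriminator scores samples as real or generated, and the two are trained against each other.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to fake samples. Discriminator: scores samples
# as real (1) or generated (0).
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

real_data = torch.randn(32, 2) + 3.0  # stand-in "real" samples

for step in range(200):
    # Discriminator step: learn to tell real samples from generated ones.
    fake_data = generator(torch.randn(32, 16)).detach()
    d_loss = (loss_fn(discriminator(real_data), torch.ones(32, 1))
              + loss_fn(discriminator(fake_data), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    fake_data = generator(torch.randn(32, 16))
    g_loss = loss_fn(discriminator(fake_data), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(5, 16)))  # five newly generated 2-D "samples"
```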
Transformer Models: Originally introduced for natural language processing tasks, transformers have since been adapted for GenAI. They use self-attention mechanisms to capture dependencies between input elements and have been successfully applied to tasks such as language generation, image generation, and music generation. For example, a transformer model can analyze all the words in a sentence at once to learn how they relate to one another.
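To illustrate the core mechanism, here is a minimal sketch of scaled dot-product self-attention in PyTorch, with made-up dimensions standing in for word embeddings; real transformers add multiple attention heads, feed-forward layers, and much more.

```python
import math
import torch

# Scaled dot-product self-attention: each element (e.g., each word in a
# sentence) attends to every other element to decide which ones matter most
# for its own representation.
def self_attention(x, w_q, w_k, w_v):
    queries, keys, values = x @ w_q, x @ w_k, x @ w_v
    scores = queries @ keys.transpose(-2, -1) / math.sqrt(keys.size(-1))
    weights = torch.softmax(scores, dim=-1)  # how strongly each word attends to each other word
    return weights @ values

seq_len, dim = 6, 8            # e.g., a six-word sentence with 8-dim embeddings
x = torch.randn(seq_len, dim)  # stand-in word embeddings
w_q, w_k, w_v = (torch.randn(dim, dim) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # torch.Size([6, 8])
```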
Foundation Models: Pre-trained machine learning models that serve as the starting point for GenAI applications. These models, such as a Generative Pre-trained Transformer (GPT), have been trained on vast amounts of data to learn language, images, or other types of information.
Foundation models serve as the base for creating more specialized models. Among these, Large Language Models (LLMs) have garnered significant attention for their text-generation capabilities. We will delve deeper into LLMs in the following section.
LLMs are typically built on transformer architectures and trained on huge amounts of text data, with parameter counts often on the order of billions. This allows LLMs to generate coherent text that resembles the patterns and structures present in the training data.
Some examples include:
The GPT series, developed by OpenAI
PaLM, developed by Google
LLaMA, developed by Meta
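As an illustration, the sketch below loads a small, publicly available pre-trained model (GPT-2, via the Hugging Face transformers library) and uses it to generate text from a prompt. This is just one convenient way to access a foundation model; larger LLMs are used the same way at this level, and the prompt shown is only an example.

```python
from transformers import pipeline

# Load a small, publicly available pre-trained transformer (GPT-2) and use it
# as a text generator: prompt in, generated text out.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI can help government agencies by",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```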
Starting in the mid-to-late 2010s, user interfaces for GenAI systems became accessible to the public, thanks to technological advancements and an increasing demand for user-friendly access to these powerful tools.
Frontend applications play a crucial role in enabling users to interact with GenAI models in a user-friendly and intuitive manner. These applications provide a graphical interface or platform that allows users to input prompts, explore generated content, and customize the model's output.
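As a rough sketch of such a frontend, the example below (assuming the Gradio library, with a placeholder generate function standing in for a real GenAI model) puts a prompt box and an output box on a simple local web page.

```python
import gradio as gr

# Placeholder standing in for a call to a real GenAI model.
def generate(prompt: str) -> str:
    return f"(Generated response to: {prompt})"

# A minimal web interface: the user types a prompt, the model's output is shown.
demo = gr.Interface(
    fn=generate,
    inputs=gr.Textbox(label="Prompt"),
    outputs=gr.Textbox(label="Generated output"),
    title="GenAI demo interface",
)

demo.launch()  # serves a local web page with the prompt box
```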
This section introduced key components of GenAI and explored the evolution of GenAI that has paved the way for the current landscape. It shed light on the ways that this technology and humans have influenced one another on the path toward innovation. GenAI is a powerful tool for the government space to harness and effectively utilize, capable of generating new content including images, videos, text, code, and audio.
Next, we will explore use cases for GenAI's five main capabilities. The foundational understanding acquired in this chapter will help illuminate how GenAI can simultaneously impact and be impacted by government and public service contexts.