Generative AI for All

This course provides an introduction to essential concepts and terminology in the field of generative artificial intelligence (GenAI). It offers an overview of the historical background and current stakeholder landscape, and it delves into the important factors to consider when implementing GenAI in government and public service settings.

We hope the following information broadens your insight into GenAI and empowers you to find applications for this knowledge in your day-to-day work.

Chapter 1 - Generative AI for All

What Sets GenAI Apart from Traditional AI

The History of GenAI and the Current GenAI Landscape

Mechanics Behind GenAI Models

Types of GenAI Models

User Interfaces for GenAI Models

Chapter 2 - GenAI in Government and Public Services

Chapter 3 - Responsible GenAI Implementation

Generating New Content

One of the most fascinating aspects of GenAI is its ability to learn from vast amounts of data by analyzing the patterns, structures, and relationships within that data, and then to generate entirely novel outputs, such as text, images, audio, video, and even code, that reflect the intricacies of the original content.

GenAI Output Types:

Text Output
Audio Output
Image Output
Video Output
Code Output

Generative AI vs Traditional AI

This ability to generate new content is what sets GenAI apart from traditional AI, which focuses on tasks such as classification, prediction, optimization, and decision-making based on existing data and pre-defined rules.

While GenAI is a subset of AI techniques, its focus on learning from and generating new data expands the scope of AI beyond traditional problem-solving and decision-making tasks.

History of GenAI

It is essential to remember that behind the scenes, these systems are the result of decades of meticulous research, algorithms, and the collaborative efforts of dedicated individuals and organizations driving the frontiers of artificial intelligence. The history of GenAI development is complex and intertwined with the broader history of artificial intelligence.

Early GenAI (1950s-1960s):

The field of AI emerged in the 1950s, and early efforts in GenAI focused on rule-based systems and symbolic approaches. Researchers attempted to create computer programs that could generate new content by following predefined rules or using logical reasoning. This period was characterized by substantial funding, enthusiasm, and overly optimistic predictions.

AI Winter (1970s-1980s):

The first AI winter occurred in the 1970s and 1980s, when AI technologies could not meet the high expectations set by the initial enthusiasm. Funding and interest in AI research declined due to limitations in computing power, a lack of robust algorithms, and challenges in achieving human-level performance.

Expert Systems (early to mid-1980s):

As the field recovered, the focus shifted to expert systems, which were rule-based systems designed to mimic human expertise in specific domains. These systems were successful in certain applications but fell short of being comprehensive.

Second AI Winter (late 1980s-early 1990s):

The second AI winter occurred due to unrealistic expectations, overpromising, and underdelivering on AI capabilities. Funding and interest in AI research declined once again, partly due to a lack of significant breakthroughs.

Neural Networks and Deep Learning (1990s-early 2000s):

The resurgence of interest in neural networks and deep learning in the 1990s paved the way for advancements in GenAI. Researchers explored neural network architectures such as recurrent neural networks (RNNs) to generate new content, laying the groundwork for later generative architectures.

AI Revolution (2010s-present)

The AI revolution, which started around the mid-2010s, brought significant advancements in GenAI. Breakthroughs in deep learning, the availability of large-scale datasets, and increased computational power fueled the development of powerful generative models. Generative adversarial networks (GANs), introduced in 2014, gained particular attention for their ability to generate realistic images, videos, and other content. Many frontend applications were developed that made GenAI accessible to the public.

The Current Landscape of GenAI

Understanding the history of GenAI means recognizing how various stakeholders have shaped and been shaped by this technology.

The field of GenAI has undergone significant change to reach its current state of unprecedented growth. While GenAI has come a long way, it is still rapidly evolving. An increasing number of stakeholders, including researchers, government agencies, policy experts, regulatory authorities, industry leaders, and end users, are engaging with GenAI technology and contributing to its responsible development, deployment, and adoption in federal and government spaces.

Now that we have an understanding of the history of GenAI and today's evolving landscape, let's look at the mechanics of a GenAI model.

The Mechanics of a Generative AI Model

Several essential components are required for a functioning GenAI model.

Here are a few of them:

Training Data
Computing Power
Model Architecture
Input Prompt
Generated Output

First, a GenAI model needs to be trained; training is an iterative and resource-intensive process.

Training Data:
Used to capture patterns, structures, and dependencies, allowing GenAI models to learn and generate new content. High-quality, representative training data is crucial; its size, diversity, and quality significantly impact the performance and creativity of the GenAI system.

Computing Power:
Significant computational resources are required for GenAI applications, often using Graphics Processing Units (GPUs) for faster processing and efficient handling of complex algorithms. The required computing power depends on factors such as the model's size, task complexity, and available hardware resources, and it can be distributed across multiple machines or provided by the cloud.
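
To make this concrete, here is a minimal sketch, assuming the PyTorch library, of how an application might select a GPU when one is available and otherwise fall back to the CPU.

```python
import torch

# A minimal sketch (assuming PyTorch): pick a GPU when one is available,
# otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")
```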

Model Architecture:
Defines the structure and design of the GenAI model. Different types of architectures can be used, depending on the specific task and the desired output. The architecture determines how the model learns and generates new content.

The learnable components of the model that are adjusted during the training process to improve the model's performance are called parameters. They determine the model's behavior and output and are represented as numerical values or weights.
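
As an illustration, here is a minimal sketch, assuming the PyTorch library, of a tiny model whose parameters (a weight matrix and a bias) are adjusted by a single training step. All sizes and data are illustrative, not taken from any real GenAI system.

```python
import torch

# A single linear layer: its weight and bias tensors are the "parameters".
model = torch.nn.Linear(in_features=4, out_features=1)

# Parameters are numerical values (weights) that training adjusts.
for name, p in model.named_parameters():
    print(name, p.shape)

# One illustrative training step on toy data.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x = torch.randn(8, 4)          # toy inputs
target = torch.randn(8, 1)     # toy targets

optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()                # gradients computed for every parameter
optimizer.step()               # parameters nudged to reduce the loss
```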

Deep Neural Networks (DNNs) are a class of artificial neural networks that consist of multiple (deep) layers of interconnected nodes, known as neurons or units. DNNs have become the dominant approach for GenAI model architectures because they can capture complex patterns in input data and generate cohesive content.
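
The sketch below, again assuming PyTorch, stacks several layers of interconnected units to form a small "deep" network; the layer sizes are illustrative only.

```python
import torch

# A minimal "deep" network: several layers of units, each feeding the next.
dnn = torch.nn.Sequential(
    torch.nn.Linear(16, 64),   # input layer -> first hidden layer
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),   # second hidden layer
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),    # output layer
)

sample = torch.randn(1, 16)    # one toy input
print(dnn(sample).shape)       # torch.Size([1, 8])
```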

Let's take a look at a couple of types of architectures.

Deep Learning Architectures

Generative Adversarial Networks (GANs): Consist of two neural networks—a generator and a discriminator. The generator generates new samples, such as images or text, while the discriminator distinguishes between real and generated samples. Through an adversarial training process, GANs learn to generate content that is increasingly indistinguishable from real data.
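
Here is a heavily simplified sketch, assuming PyTorch, of the adversarial training loop described above. The network sizes, data, and number of steps are illustrative stand-ins, not a realistic GAN configuration.

```python
import torch
import torch.nn as nn

# Generator: maps random noise to fake samples. Discriminator: scores
# samples as real or generated. Sizes are illustrative only.
generator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))
discriminator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(8, 16)   # stand-in for real training samples

for step in range(100):
    # 1) Train the discriminator to tell real samples from generated ones.
    d_opt.zero_grad()
    noise = torch.randn(8, 32)
    fake_data = generator(noise).detach()          # don't update the generator here
    d_loss = (bce(discriminator(real_data), torch.ones(8, 1)) +
              bce(discriminator(fake_data), torch.zeros(8, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    g_opt.zero_grad()
    noise = torch.randn(8, 32)
    g_loss = bce(discriminator(generator(noise)), torch.ones(8, 1))
    g_loss.backward()
    g_opt.step()
```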

Transformer Models:
Originally introduced for natural language processing tasks, transformers have since been adapted for GenAI more broadly. Transformers use self-attention mechanisms to capture dependencies between input elements. They have been successfully applied to tasks such as language generation, image generation, and music generation. For example, a transformer model can analyze the words in a sentence and learn how each word relates to the others.
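
The sketch below, assuming PyTorch, shows the core of a scaled dot-product self-attention step: each element of a short sequence is compared with every other element to decide how strongly they relate. All dimensions are illustrative.

```python
import torch
import torch.nn.functional as F

seq_len, d_model = 5, 8                     # e.g. 5 words, 8-dim embeddings
x = torch.randn(seq_len, d_model)           # stand-in word embeddings

w_q = torch.nn.Linear(d_model, d_model)     # learned query projection
w_k = torch.nn.Linear(d_model, d_model)     # learned key projection
w_v = torch.nn.Linear(d_model, d_model)     # learned value projection

q, k, v = w_q(x), w_k(x), w_v(x)
scores = q @ k.T / d_model ** 0.5           # how strongly each word attends to each other word
weights = F.softmax(scores, dim=-1)         # each row sums to 1
output = weights @ v                        # each word's new representation mixes in related words

print(weights.shape)                        # torch.Size([5, 5]): word-to-word attention
```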

Foundation Models

Foundation Models: Pre-trained machine learning models that serve as the starting point for GenAI applications. These models, such as the Generative Pre-trained Transformer (GPT) family, have been trained on vast amounts of data to learn language, images, or other types of information.

Foundation models serve as the base for creating more specialized models. Among these, Large Language Models (LLMs) have garnered significant attention for their text-generation capabilities. We will delve deeper into LLMs in the following section.

Large Language Models

LLMs are typically built on transformer architectures and trained on huge amounts of text data, with a large number of parameters, often on the order of billions. This allows LLMs to generate coherent text that resembles the patterns and structures present in the training data.

Some examples include:

The GPT-Series, developed by OpenAI

PaLM, developed by Google

LLaMA, developed by Meta
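
As a hedged illustration, the sketch below uses the Hugging Face transformers library and the small, openly available GPT-2 model as a stand-in for larger LLMs, generating a short text continuation from a prompt. The prompt and settings are arbitrary examples.

```python
from transformers import pipeline

# A minimal sketch (assuming the Hugging Face `transformers` library and the
# openly available GPT-2 model as a small stand-in for larger LLMs).
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Generative AI can help public agencies",
    max_new_tokens=40,        # length of the continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```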

User Interfaces for GenAI Models

Starting in the mid-to-late 2010s, user interfaces for GenAI systems became accessible to the public, thanks to technological advancements and an increasing demand for user-friendly access to these powerful tools.

Frontend applications play a crucial role in enabling users to interact with GenAI models in a user-friendly and intuitive manner. These applications provide a graphical interface or platform that allows users to input prompts, explore generated content, and customize the model's output.
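
For illustration, here is a minimal sketch of such a frontend, assuming the Gradio library; generate_reply is a hypothetical placeholder standing in for a call to an actual GenAI model.

```python
import gradio as gr

# A minimal frontend sketch (assuming Gradio): a text box where a user types
# a prompt and sees the model's reply.
def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for a real GenAI model call.
    return f"(model output for: {prompt})"

demo = gr.Interface(fn=generate_reply, inputs="text", outputs="text",
                    title="GenAI Prompt Demo")
demo.launch()
```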

Chapter 1 Conclusion

This section introduced key components of GenAI and explored the evolution that has paved the way for the current landscape. It shed light on the ways this technology and humans have influenced one another on the path toward innovation. GenAI is a powerful tool for government to harness and use effectively, capable of generating new content including images, videos, text, code, and audio.

Next, we will explore use cases for GenAI's five main capabilities. The foundational understanding acquired in this chapter will help illuminate how GenAI can simultaneously impact and be impacted by government and public service contexts.