An Approach to the Generative AI Project Lifecycle

In this article I will present a high-level project lifecycle for building Generative AI projects, proposed by DeepLearning.AI and AWS in their Generative AI with Large Language Models course, that can be applied to any project.
natural-language-processing
deep-learning
Author

Pranath Fernando

Published

July 4, 2023

1 Introduction

Recent advances in AI such as ChatGPT have demonstrated impressive abilities to perform a wide range of tasks previously only done by humans. The models behind these systems are known as Large Language Models (LLMs). However, there are many different options and methods for building applications with these models, and most of them are new, so it can seem overwhelming and far from obvious how to approach building Generative AI applications. In this article, I will present a high-level project lifecycle for building Generative AI projects that can be applied to any project, as proposed by DeepLearning.AI and AWS in their Generative AI with Large Language Models course, which I recently completed.

2 The Generative AI Project Lifecycle

This framework covers the steps necessary to take your generative AI project from idea to completion. Here is a graphic showing the complete life cycle. Defining the scope as precisely and narrowly as you can is the most crucial step in any project.

The size and architecture of a model have a significant impact on the tasks it can perform, so you should think about what function the LLM will have in your specific application. Do you need the model to be highly capable across a variety of tasks, such as long-form text generation, or is the task much more specialised, such as named entity recognition, so that your model only needs to be proficient in that one area?

Getting very explicit about what your model needs to do will save you time and, perhaps more importantly, money.

As soon as you’re satisfied that you have sufficiently outlined the criteria for your model, you can start developing it. Your first choice will be whether to train a new base model from scratch or work with an existing one.

In most cases, you’ll begin with an existing model, yet there are occasional instances when you might need to train a model from scratch.
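
To make this concrete, here is a minimal sketch of pulling an existing pretrained model from a model hub, assuming the Hugging Face transformers library; the course does not mandate any particular library, and the checkpoint name is purely illustrative:

```python
# A minimal sketch of starting from an existing model rather than training one
# from scratch, assuming the Hugging Face transformers library and hub.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# The checkpoint name is illustrative; any hub model suited to your task works.
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
```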

Once you have your model, the next step is to assess its performance and, if additional training is required for your application, carry it out. You may start by experimenting with in-context learning, using examples relevant to your task and use case. This can sometimes be enough to get your model to perform well.
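
As an illustration of in-context learning, here is a minimal sketch of a one-shot prompt, again assuming the Hugging Face transformers library; the model and the worked example in the prompt are my own illustrative choices:

```python
# A minimal sketch of one-shot in-context learning, assuming the Hugging Face
# transformers library; the model and example dialogue are illustrative.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

# One worked example is included in the prompt so the model can infer
# the task from context alone, without any additional training.
prompt = (
    "Summarise the following conversation.\n\n"
    "Conversation:\nA: Is the report ready?\nB: Yes, I sent it this morning.\n"
    "Summary: B confirms the report was sent.\n\n"
    "Conversation:\nA: Can we move the meeting to 3pm?\nB: Sure, 3pm works.\n"
    "Summary:"
)

print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```

The single worked example in the prompt is what makes this one-shot; adding a handful more would make it few-shot.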

Even with one-shot or few-shot inference, however, there are still situations where the model may not perform as well as you need. In these circumstances you can try fine-tuning your model, which is a supervised learning process.
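
To give a feel for what this looks like in code, here is a hedged sketch of supervised fine-tuning using the Hugging Face Trainer API; the model, dataset and hyperparameters are illustrative assumptions, not course specifics:

```python
# A hedged sketch of supervised fine-tuning with the Hugging Face Trainer API.
# The model, dataset and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# An example dialogue-summarisation dataset with "dialogue"/"summary" fields.
dataset = load_dataset("knkarthick/dialogsum")

def tokenize(batch):
    inputs = tokenizer(["Summarise: " + d for d in batch["dialogue"]],
                       truncation=True, padding="max_length", max_length=512)
    labels = tokenizer(batch["summary"], truncation=True,
                       padding="max_length", max_length=128)
    # Replace padding token ids with -100 so they are ignored by the loss.
    inputs["labels"] = [
        [(t if t != tokenizer.pad_token_id else -100) for t in seq]
        for seq in labels["input_ids"]
    ]
    return inputs

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./ft-model", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```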

As models get more powerful, it is crucial to make sure that when they are deployed, they behave well and in a way that is consistent with human preferences.

Reinforcement learning from human feedback (RLHF) is a further fine-tuning method that can help ensure your model behaves well. Evaluation is a key element of all of these methods.

Additionally, there are several metrics and benchmarks that can be used to assess how effective your model is, or the degree to which its behaviour aligns with your preferences.
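
As one example, here is a small sketch of computing ROUGE, a common summarisation metric, using the Hugging Face evaluate library; the prediction and reference strings are toy stand-ins for real model output:

```python
# A small sketch of metric-based evaluation with the Hugging Face `evaluate`
# library; the prediction and reference are toy stand-ins for real output.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["B confirms the report was sent this morning."]
references = ["B says the report was already sent in the morning."]

# ROUGE scores n-gram overlap between generated and reference text, a common
# way to benchmark summarisation quality.
print(rouge.compute(predictions=predictions, references=references))
```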

Be aware that development can be quite iterative during the adapt-and-align stage. To achieve the performance you require, you might start with prompt engineering and evaluate the results, then use fine-tuning to improve performance, and then return to and revisit prompt engineering once more. Once your model is well-aligned and meets your performance requirements, you can integrate it into your application and deploy it into your infrastructure.

Optimising your model for deployment at this point is a crucial step. It ensures that you’re making the best use of your compute resources and giving your application’s users the best possible experience (I sketch one such optimisation below). The final, and equally important, stage is to consider any additional infrastructure that your LLM application will need to function properly. There are some fundamental limitations of LLMs, such as their tendency to fabricate information when they don’t know the answer and their limited capacity for complex reasoning and mathematics, that can be difficult to overcome through training alone, so they are worth considering here.
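
Returning to the optimisation step mentioned above, here is a hedged sketch of one common technique, post-training dynamic quantisation in PyTorch; the model name is illustrative, and distillation and pruning are alternative optimisations I don’t show here:

```python
# A hedged sketch of one deployment optimisation: post-training dynamic
# quantisation in PyTorch. The model name is illustrative.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

# Convert linear-layer weights to 8-bit integers, shrinking the memory
# footprint and often speeding up CPU inference, at some cost in accuracy.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```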

3 Acknowledgements

I’d like to express my thanks for the wonderful Generative AI with Large Language Models course by DeepLearning.AI and AWS, which I completed, and to acknowledge the use of some images and other materials from the course in this article.
