AI Is Not AI
From ChatGPT to the future of deep learning Ops
The most talked-about topic I’ve seen on Twitter lately is ChatGPT, and many people are raving about it. For a creator, using AI in the creative process to quickly prototype work is certainly a great thing in the business world. ChatGPT can offer inspiration to screenwriters, give problem solvers new ideas, and help translators produce more accurate translations. So does ChatGPT count as AI in the true sense of the word — that is, does it pass the Turing test? Although we don’t know the answer, we can be sure that ChatGPT must have been trained on a huge amount of Internet data, and that the parameter count of the final model is astonishingly large.
Let’s first look at how OpenAI trains the model, following the description on the official OpenAI website.
The training process uses Reinforcement Learning from Human Feedback (RLHF). First, OpenAI samples prompts from a prompt database, and human labelers write the expected answer for each prompt; these prompt–answer pairs are used to fine-tune a GPT-3.5 model (model A) with supervised learning. Next, comparison data is collected: for each prompt, several model answers are ranked and scored manually, and this data is used to train a reward model (model B). Finally, new prompts are fed to model A, its outputs go into model B to obtain a reward score, and the PPO method uses that reward to optimize model A’s parameters.
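The three steps above can be sketched in a few lines of toy Python. This is only a minimal illustration of the data flow — the prompts, answers, and scores are hypothetical, the "reward model" is stood in for by the labelers' ranking scores directly, and the PPO update is replaced by a crude reward-weighted probability shift:

```python
# Step 1 (supervised fine-tuning): labeler-written demonstrations
# pair prompts with expected answers (hypothetical example data).
demonstrations = {"What is RLHF?": "Learning from human preference rankings."}

# Step 2 (reward modeling): labelers rank candidate answers; a reward
# model is trained to reproduce that ranking. Here the ranking scores
# stand in for the trained reward model.
ranked = {
    "Learning from human preference rankings.": 2.0,
    "A kind of database.": 1.0,
    "No idea.": 0.0,
}

def reward_model(answer):
    return ranked.get(answer, 0.0)

# Step 3 (policy optimization): a crude stand-in for PPO — shift the
# policy's answer probabilities toward higher-reward answers, then
# renormalize so they remain a distribution.
policy = {a: 1 / 3 for a in ranked}
lr = 0.1
for _ in range(50):
    for a in policy:
        policy[a] += lr * reward_model(a)
    total = sum(policy.values())
    policy = {a: p / total for a, p in policy.items()}

best = max(policy, key=policy.get)
print(best)
```

After a few dozen updates the policy concentrates on the answer the labelers ranked highest, which is the whole point of the reward-model-plus-PPO stage: the supervised model A learns *what* answers look like, and the reward signal from model B teaches it *which* answers humans prefer.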
Next, let’s look at OpenAI’s DALL-E 2, which generates photo-quality images from text descriptions, and of course the open-source Stable Diffusion v2, which does something similar. The machine learning technique behind both is the diffusion model, which connects a text description to the image domain and generates a complete high-resolution image essentially from noise — something that may be of great help to creators working in 2D image processing and 3D modeling.
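The core mechanics of a diffusion model can be shown on a single scalar "pixel". This is only a sketch of the direction of the computation: real models operate on full image tensors conditioned on text, and the reverse step uses a trained neural network to predict the noise — here that network is replaced by a hypothetical denoiser that simply moves the sample back toward the known clean value:

```python
import math
import random

random.seed(0)

# Toy illustration of diffusion on a single scalar "pixel".
x0 = 0.8            # the clean data value
T = 10              # number of diffusion steps
betas = [0.05] * T  # noise schedule

# Forward process: gradually mix the data with Gaussian noise
# until it is nearly indistinguishable from noise.
x = x0
for beta in betas:
    x = math.sqrt(1 - beta) * x + math.sqrt(beta) * random.gauss(0, 1)

# Reverse process: a trained network would predict and remove the
# noise at each step; this hypothetical denoiser just moves the
# sample partway back toward the clean value to show the direction.
for _ in range(T):
    x = x + 0.5 * (x0 - x)

print(round(x - x0, 4))  # the gap to the clean value is now tiny
```

Generation works by running only the reverse chain from pure noise: since the learned denoiser encodes what "real data" looks like (conditioned on the text prompt), repeatedly denoising random noise yields a new, coherent sample.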
Inspired by the DALL-E tool, Baker Lab researchers used diffusion models to generate new protein structures (“A diffusion model for protein design”). With this new method, researchers can quickly assemble functional protein structures, where previously tens of thousands of candidate molecules had to be tested. From text generation, to text-to-image generation, to protein structure generation, these systems cover increasingly general model design — so can they be called general AI? Setting creativity aside, we can understand ChatGPT as general AI in a narrow sense, because its goal is to imitate humans and give answers as close to human ones as possible, like an intelligent search engine or an open intelligent knowledge base. As for whether it can pass the Turing test, my guess is that it cannot.
With so many inspiring applications, imagine building a DeepL-Ops domain. To ensure the stability of a production environment, we introduce a chaos agent that keeps doing damage: shutting down certain cluster services, saturating the server’s network traffic, simulating a destroyed hard disk. If the system afterward still works properly from the outside, the system is reliable. This practice is called chaos engineering. A problem arises here: engineers cannot enumerate 100% of the problems that lead to service failures, and must accumulate them through practical experience. So we also introduce a DeepL-Ops training mechanism. The system contains a chaos agent and an order agent: the former makes the system fail, the latter repairs the system so that it works properly. We design a machine learning model in which, whenever the chaos agent does damage, the order agent tries to repair the system, while the chaos agent also evolves to produce failures the order agent cannot repair. When the order agent fails to repair the system, it adjusts its order-rules-generator model according to the feedback given by the environment. Once training is finished, we deploy the order agent into the production system; when the system fails, we feed the failure information into the order-rules-generator model so the order agent can fix the problem.
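The chaos/order training loop above can be sketched as a toy adversarial game. Everything here is hypothetical — the failure modes, repair actions, and rule table are invented for illustration, and the "order-rules-generator model" update is stood in for by a simple search over the action space driven by environment feedback:

```python
# Hypothetical failure modes the chaos agent can inject, and the
# repair actions available to the order agent.
failures = ["service_down", "network_saturated", "disk_failure"]
actions = ["restart_service", "throttle_traffic", "swap_disk"]

# Ground-truth repairs, known only to the environment.
correct = {
    "service_down": "restart_service",
    "network_saturated": "throttle_traffic",
    "disk_failure": "swap_disk",
}

# The order agent's current rule set; it starts with no rule for
# "disk_failure" (engineers cannot enumerate every failure upfront).
repairs = {
    "service_down": "restart_service",
    "network_saturated": "throttle_traffic",
}

def environment(failure, repair):
    """Report whether the system is healthy again after the repair."""
    return repair == correct[failure]

def order_agent(failure):
    return repairs.get(failure)

def learn_from_feedback(failure):
    """Stand-in for updating the order-rules-generator model:
    search the action space until the environment reports healthy,
    then record the rule for next time."""
    for action in actions:
        if environment(failure, action):
            repairs[failure] = action
            return

# Training loop: the chaos agent injects each failure in turn; the
# order agent repairs what it can and learns from what it cannot.
for episode in range(3):
    for failure in failures:            # chaos agent injects a failure
        repair = order_agent(failure)   # order agent proposes a repair
        if not environment(failure, repair):
            learn_from_feedback(failure)

print(repairs["disk_failure"])
```

After training, the order agent has acquired a repair rule for the failure it initially could not handle. In a real DeepL-Ops system, the chaos agent would also be a learned model searching for failures outside the order agent’s current coverage, which is what drives both agents to keep improving.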