What is Generative AI, the technology behind OpenAI’s ChatGPT?
Reuters: Generative artificial intelligence has become a buzzword this year, capturing the public’s fancy and sparking a rush at Microsoft and Alphabet to launch products built on technology they believe will change the nature of work. Here is everything you need to know about this technology.
What is generative AI?
Like other forms of artificial intelligence, generative AI learns how to take actions from past data. It creates brand-new content – text, an image, even computer code – based on that training, instead of simply categorizing or identifying data as other AI does.
The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response. GPT-4, a newer model that OpenAI announced this week, is “multimodal” because it can perceive not only text but images as well. OpenAI’s president demonstrated yesterday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.
What is it good for?
Demonstrations aside, businesses are already putting generative AI to work.
The technology is helpful for creating a first draft of marketing copy, for instance, though the output may need cleanup because it is imperfect. CarMax Inc, for example, has used a version of OpenAI’s technology to summarize thousands of customer reviews and help shoppers decide which used car to buy.
Generative AI can likewise take notes during a virtual meeting, draft and personalize emails, and create slide presentations. Microsoft Corp and Alphabet Inc’s Google each demonstrated these features in product announcements this week.
What is wrong with that?
Nothing, although there is concern about the technology’s potential abuse.
School systems have fretted about students turning in AI-drafted essays, undermining the hard work required for them to learn. Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments, to produce far more disinformation than before. At the same time, the technology itself is prone to making mistakes.