A new AI language model generates poetry and prose

GPT-3 can be eerily human-like—for better and for worse

The SEC said, “Musk,/your tweets are a blight./They really could cost you your job,/if you don’t stop/all this tweeting at night.”/…Then Musk cried, “Why?/The tweets I wrote are not mean,/I don’t use all-caps/and I’m sure that my tweets are clean.”/“But your tweets can move markets/and that’s why we’re sore./You may be a genius/and a billionaire,/but that doesn’t give you the right to be a bore!”

The preceding lines—describing Tesla and SpaceX founder Elon Musk’s run-ins with the Securities and Exchange Commission, an American financial regulator—are not the product of some aspiring 21st-century Dr Seuss. They come from a poem written by a computer running a piece of software called Generative Pre-Trained Transformer 3. GPT-3, as it is more commonly known, was developed by OpenAI, an artificial-intelligence (AI) laboratory based in San Francisco, which Mr Musk helped found. It represents the latest advance in one of the most studied areas of AI: giving computers the ability to generate sophisticated, human-like text.

The software is built on the idea of a “language model”. This aims to represent a language statistically, mapping the probability with which words follow other words—for instance, how often “red” is followed by “rose”. The same sort of analysis can be performed on sentences, or even entire paragraphs. Such a model can then be given a prompt—“a poem about red roses in the style of Sylvia Plath”, say—and it will dig through its set of statistical relationships to come up with some text that matches the description.
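The idea can be illustrated with a toy example. The sketch below is not how GPT-3 works internally—it is a minimal bigram model, the simplest kind of statistical language model, built on an invented ten-word corpus standing in for the vast text GPT-3 was trained on. It estimates how often one word follows another and then continues a prompt by sampling likely next words:

```python
import random
from collections import Counter, defaultdict

# A tiny invented corpus; GPT-3 was trained on a web-scale one.
corpus = (
    "the red rose wilts the red rose blooms "
    "a red car drives the rose garden grows"
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def prob(prev, nxt):
    """Estimated probability that `nxt` follows `prev`."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

def generate(prompt, length=5, seed=0):
    """Continue `prompt` by repeatedly sampling a likely next word."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: this word never appears mid-corpus
        choices, weights = zip(*options.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

# "red" is followed by "rose" two times out of three in the corpus.
print(prob("red", "rose"))
print(generate("the red"))
```

GPT-3 replaces these raw word-pair counts with a neural network that conditions on whole passages rather than a single preceding word, but the principle—predict the next word from what came before—is the same.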