A new language model developed by the non-profit artificial intelligence (AI) research organisation OpenAI is reportedly capable of generating text so convincing that the organisation has walked back its commitment to keep its findings available to the public, releasing only an abridged version of the model and thereby eliciting anger from some corners of the AI community.
Trained on a staggering 40 gigabytes of text gleaned from the Internet, the model – called GPT-2 – can not only predict the next word of a text prompt, but also allow users to “generate realistic and coherent continuations about a topic of their choosing” without losing track of the prompt’s style or content.
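The mechanism described above, predicting the next word and extending a prompt, can be illustrated in a drastically simplified form with a toy bigram model. The corpus, function names, and sampling scheme below are invented purely for illustration; GPT-2 itself is a large Transformer network trained on vastly more data, not a word-count table.

```python
import random
from collections import defaultdict

# Toy corpus, invented for demonstration only.
corpus = ("recycling is good for the world and "
          "recycling is good for the economy").split()

# Record which words have been observed to follow which.
successors = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current].append(nxt)

def generate(prompt_word, length=8, seed=0):
    """Continue a prompt by repeatedly sampling an observed next word."""
    random.seed(seed)
    words = [prompt_word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # dead end: this word was never seen mid-sentence
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("recycling"))
```

Even this crude scheme produces locally plausible continuations; the leap GPT-2 represents is keeping such continuations coherent over long stretches of text while respecting the prompt's style and content.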
OpenAI claims the model constitutes a huge improvement over its predecessor in terms of the length and coherence of its output, a claim borne out by the following block of text the model generated from the prompt, “Recycling is good for the world, no, you could not be more wrong”:
“Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources.”
To justify its decision, the organisation cited a plethora of malicious uses its model could be put to (such as generating fake news with ease or impersonating people online), and referred to its charter, which notes the organisation’s expectation that “safety and security concerns” are likely to reduce “our traditional publishing in the future”.
Some commentators were incensed by the stance, while others praised the organisation for raising the bar of professional ethics in the field. Meanwhile, the organisation itself remains torn on the issue, calling for more discussion among researchers and authorities alike.
The decision is due to be revisited within six months. In the meantime, OpenAI has urged governments to consider “expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems”.