The OpenAI research lab has released the full version of its GPT-2 text-generation system, even as experts warn that it could be put to malicious use. The lab first announced GPT-2 in February but withheld the full model, fearing it could be used to spread fake news, spam, and disinformation.
Since then, OpenAI has released smaller, less capable versions of GPT-2 and studied how the outside world used them. In a blog post this week, the lab said it had seen no strong evidence of misuse so far and was therefore releasing the full model. Security experts still warn that GPT-2 can be used to churn out fake news articles, stories, and even code.
GPT-2 belongs to a new generation of text-generation systems that have impressed experts with their ability to produce coherent text from minimal prompts. The system was trained on 8 million web documents and responds to text snippets supplied by users: feed it a fake headline, for example, and it will write a news story to match.
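GPT-2 itself is far too large to reproduce here, but the prompt-completion loop it performs can be sketched with a toy stand-in. The sketch below uses a word-level bigram chain over a few hand-written sentences — an illustrative assumption, not OpenAI's architecture — to show the same autoregressive idea: given a prompt, repeatedly predict the next word and append it.

```python
import random

# Tiny stand-in corpus; GPT-2 was trained on roughly 8 million web documents.
corpus = (
    "the model writes a story . the model writes a poem . "
    "a user gives the model a prompt . the model completes the prompt ."
).split()

# Build a bigram table: for each word, record which words were seen after it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def complete(prompt, n_words=8, seed=0):
    """Autoregressively extend a prompt, one word at a time."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(complete("the model"))
```

Real systems like GPT-2 replace the bigram table with a large neural network that conditions on the entire preceding context rather than just the last word, which is what makes their output coherent over whole paragraphs.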
It is difficult to evaluate the quality of GPT-2's output precisely. The model often produces strikingly persuasive text that can appear intelligent, but its limitations show over longer stretches. It particularly struggles with long-term coherence, such as consistently using a character's name and attributes throughout a story, or sticking to a single topic in a news article.
The best way to get a feel for GPT-2's capabilities is to try it yourself: a web version is available at TalkToTransformer.com.