The deep learning network GPT-2 recently gained notoriety for producing entire, plausible articles from just a few prompts. The AI researcher Gary Marcus put it to the test by typing the sentence: “What happens when you stack kindling and logs in a fireplace and then drop some matches is that you typically start a…” A system with any common sense would obviously continue with “fire.” GPT-2 answered: “ick.”
Marcus was not surprised: common sense reasoning is a notorious problem in AI, one that has plagued researchers for decades. He posted the result on Twitter, adding the acronym LMAO. GPT-2 may have an impressive ability to imitate language, but it clearly lacks basic common sense.

Yejin Choi, an AI researcher at the University of Washington, saw Marcus’s post a few minutes before she was due to give a talk about COMET, a system built on top of GPT-2 that performs common sense reasoning. She entered Marcus’s sentence into COMET, which produced ten inferences, the first two of which were about fire. COMET treats common sense reasoning as a process of producing plausible but imperfect responses to novel input, rather than making unassailable deductions by consulting a vast, encyclopedia-like database. It attempts to blend these two very different approaches: curated knowledge bases and neural language models.