It’s been around for years, as this 2011 article in the New York Times attests.
Determining the number of fake reviews on the Web is difficult. But it is enough of a problem to attract a team of Cornell researchers, who recently published a paper about creating a computer algorithm for detecting fake reviewers. They were instantly approached by a dozen companies, including Amazon, Hilton, TripAdvisor and several specialist travel sites, all of which have a strong interest in limiting the spread of bogus reviews.
“The whole system falls apart if made-up reviews are given the same weight as honest ones,” said one of the researchers, Myle Ott. Among those seeking out Mr. Ott, a 22-year-old Ph.D. candidate in computer science, after the study was published was Google, which asked for his résumé, he said.
I wonder whether it’s still any good, with five years of bullshit evolution to account for. One thing in its favor: it seems to “know” that top reviewers tend to affect a style imitative of travel writing in an effort to sound credible, and it doesn’t trigger on their innocuous but very ad-like use of language.
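For the curious: the Cornell work is reported to have leaned on word n-gram features fed to a supervised classifier, trained on known-deceptive vs. truthful reviews. Here's a minimal stdlib-only sketch of that general idea, using a naive Bayes classifier over unigrams instead of the paper's actual model, with made-up toy reviews purely for illustration:

```python
import math
from collections import Counter

def train(docs):
    """docs: list of (text, label) pairs. Returns per-label word counts and doc totals."""
    counts = {}        # label -> Counter of word occurrences
    totals = Counter() # label -> number of training documents
    for text, label in docs:
        totals[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, totals

def predict(counts, totals, text):
    """Naive Bayes with add-one smoothing; returns the most likely label."""
    vocab = {w for c in counts.values() for w in c}
    ndocs = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label, c in counts.items():
        lp = math.log(totals[label] / ndocs)       # class prior
        denom = sum(c.values()) + len(vocab)       # smoothed denominator
        for w in text.lower().split():
            lp += math.log((c[w] + 1) / denom)     # smoothed word likelihood
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical training examples -- not real data from the study:
training = [
    ("my stay was amazing the staff went above and beyond", "fake"),
    ("best hotel ever absolutely perfect in every way", "fake"),
    ("room was clean but the elevator was slow", "real"),
    ("decent location breakfast ended too early", "real"),
]
counts, totals = train(training)
print(predict(counts, totals, "absolutely amazing perfect stay"))  # → fake
```

On a toy vocabulary like this the classifier just latches onto superlatives, which is exactly the failure mode the blog's point about travel-writing style is getting at: a real detector has to separate ad-like language from actual deception, not merely flag enthusiasm.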