

Do we need an online lie detector?
"I've been browsing online more than three hours today, yet I never found any interesting article like yours. It is pretty value-sufficient for me. If all website owners and bloggers made such good content material available, the net will be a lot more helpful."
That is the sort of message from which you might take comfort, but it also shows how heavily people have come to depend on the internet – perhaps sometimes when they shouldn’t. People selling goods and services treat online reviews as essential sales tools and integral elements of marketing campaigns, as consumers turn to online services to shop around for a gadget, book a hotel or a restaurant, or deal in shares.
Yet it doesn’t take long for newcomers to some websites to notice that everything they hit upon is supposedly just perfect for them: every gadget is a gem that must be had, every hotel seems better than the Ritz, every restaurant serves table-slammingly good dishes, and every new stock tip is a market beater. Online shoppers often lack reliable tools to gauge new or untried goods and, lacking them, have to depend on reviews.
So how can we tell which of the overhyped product postings now commonplace in online review systems deserve our trust? Well, help appears to be on the way.
A team of computer science researchers from Cornell University in the United States has recently released a paper on a computer-based algorithm designed to detect fake reviews and separate them from genuine ones, boasting a respectable 90 percent accuracy. What the algorithm does is run a sophisticated but automated analysis of the posted text.
The Cornell researchers use the term "deceptive opinion spam" for the bogus reviews they winkle out, which are usually narratives relating to an experience - say, dining at a restaurant - that are heavy on superlatives but short on descriptive detail.
The "lie-detecting" algorithm will pick up amongst the text what are called "slight deceptive indicators" such as the frequent use of first-person singular terms - "I" and "me" and "my" - as the reviewers seek to establish credibility for products and services rather than offer descriptive words about an experience.
Other tell-tales include the overuse of adverbs like "very" and "really," excessive assertions punctuated with exclamation marks to push positive emotion, and the frequent use of verbs, as if the writer were masking some guilt.
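To make those cues concrete, here is a minimal sketch in Python of how such surface indicators might be counted and combined. It is emphatically not the Cornell algorithm - that is a trained statistical classifier - and the word lists and thresholds below are invented purely for illustration.

import re

# Hypothetical cue lists - not drawn from the Cornell paper.
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}
INTENSIFIERS = {"very", "really", "truly", "absolutely", "definitely"}

def deception_cues(review):
    """Count the surface cues described above for a single review."""
    words = re.findall(r"[a-z']+", review.lower())
    total = max(len(words), 1)  # avoid dividing by zero on empty input
    return {
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / total,
        "intensifier_rate": sum(w in INTENSIFIERS for w in words) / total,
        "exclamations": review.count("!"),
    }

def looks_suspicious(review):
    """Flag a review when several cues co-occur (arbitrary demo thresholds)."""
    cues = deception_cues(review)
    return (cues["first_person_rate"] > 0.08
            and cues["intensifier_rate"] > 0.03
            and cues["exclamations"] >= 2)

sample = ("I absolutely loved it! Really the best hotel my husband and I "
          "have ever stayed in! I will definitely book again!")
print(deception_cues(sample))
print("suspicious:", looks_suspicious(sample))

A real detector would learn which cues matter, and how much, from labeled examples of genuine and fake reviews, rather than rely on hand-picked word lists and thresholds like these.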
This new program should be great news for online marketplaces and review sites such as Amazon and TripAdvisor, as well as for hotel chains and specialist travel websites that count on positive and truthful opinion in reviews to attract customers.
One member of the Cornell research team, it's worth noting, is supposedly on Google's radar for recruitment.
The Hong Kong Securities and Futures Commission issues guidelines on regulated activities, including dealing and advising on securities, which require a license. Yet bloggers on securities generally don’t get into trouble over lacking a license, because what they have to say is not perceived as conducting regulated activities.
The bloggers usually protect themselves with disclaimers stressing that the views expressed are personal opinions and that the contents are not intended to advise on or sell shares. But what about the supposed "reviews" that occasionally creep into these blogs?
Even if an online lie detector like the Cornell algorithm can sniff out suspicious reviews, with such blog responses we may still face a mystery about who authored what is presented as feedback, and about the motivation for offering it.
Is this something regulators should be looking at closely? Perhaps. Meantime, individuals act at their peril - or, in other words, invest at their own risk.
(Vanson Soo runs an independent business intelligence practice specializing in the Greater China region. This column also appears in The Standard of Hong Kong. Email: soovans@gmail.com)