Restaurant Rankings vs. Ratings
This idea seems so obvious to me that I’m sure someone has tried it, but I can’t find any good examples. In a nutshell, it’s a restaurant-review app based on ranking restaurants within categories, rather than rating each one individually on a star or point scale.
So for example, you might have your personal lists of the best burgers, the best Italian food, the best steak, the best vegetarian restaurants, etc. You’d only rank categories where you’ve tried at least two restaurants, of course, and when you try a new one you could add it to your list. My own list for burgers in NYC might start as follows:
Actually, what I’d really have is a ranking of the best burgers I’ve had anywhere, and we could still filter them down to NYC if necessary. But you get the idea.
Now, once we’ve got enough users, we can start aggregating those rankings to present a single ranked list in each category. And this has some major advantages over existing ratings sites:
- It’s closer to how people already think about restaurants. Everyone has running debates with their friends about their top five burgers, BBQ, or whatever else. Everyone asks “is it better than ___?” No one asks “is it three stars or three and a half?” But sites like Yelp and TripAdvisor ask their users to convert those relative opinions into absolute ones, only to aggregate them back into relative lists (“10 Best Mexican Restaurants in Los Angeles”) — and a lot of useful information is lost in that process.
- Relative ratings are more useful than absolute ones in this context, because they remove a lot of ambiguity. Is a three-star rating for a Mexican restaurant in New York really the same as a three-star rating for one in California, where the standard for Mexican food is higher? It probably depends on the user. But this way anyone who’s had Mexican food in both places is ranking every restaurant on the same list. (In fact, if you wanted to convert the relative rankings back into star ratings, you’d be more accurate than if users just entered star ratings to begin with — but again, why would you?)
- It’s faster, more fun and more social than ratings — which means you’d get a different and broader pool of reviewers. Rankings feel like much more of a “curation” experience, more of an expression of your individual identity or tastes. And they still allow for a certain “power user” dynamic without letting those users become quite as dominant.
- It doesn’t emphasize rants and raves. Think about how many reviews on Yelp are motivated more by anger or affection for the establishment than any pure desire to inform or help the reader. Even from the power users, you mostly get input on the best and worst restaurants, so those reviews are not a very sensitive instrument for distinguishing one three-star restaurant from another.
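To make the aggregation idea concrete, here’s a minimal sketch of how per-user ranked lists in one category could be combined into a single aggregate list. It uses a simple Borda-style average of normalized rank positions; the function name, data shapes and minimum-vote threshold are my own assumptions for illustration, not a spec, and a real version would need tie-breaking and spam protection.

```python
from collections import defaultdict

def aggregate_rankings(user_rankings, min_votes=3):
    """Combine per-user ranked lists for one category into one aggregate list.

    user_rankings: a list of per-user lists of restaurant names, each ordered
                   best-first (lists can have different lengths).
    Returns restaurant names sorted by average normalized rank, best first.
    """
    scores = defaultdict(float)
    votes = defaultdict(int)
    for ranking in user_rankings:
        n = len(ranking)
        for position, restaurant in enumerate(ranking):
            # Borda-style score: first place on a list of n gets 1.0, last gets 1/n.
            scores[restaurant] += (n - position) / n
            votes[restaurant] += 1

    # Only rank restaurants that enough users have actually placed on a list.
    averaged = {name: scores[name] / votes[name]
                for name in scores if votes[name] >= min_votes}
    return sorted(averaged, key=averaged.get, reverse=True)

# Example: three users' burger lists, best first (the names are placeholders).
burger_lists = [
    ["Shack A", "Diner B", "Pub C"],
    ["Diner B", "Shack A", "Grill D"],
    ["Shack A", "Pub C", "Diner B", "Grill D"],
]
print(aggregate_rankings(burger_lists, min_votes=2))
```

A score like this could also be rescaled onto a one-to-five-star range if you ever wanted absolute ratings back, which is the conversion mentioned in the second bullet above.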
This would also be a relatively easy project to get off the ground; you could start with a hundred restaurants in 5-10 categories in a single big city, and a fairly small user base. Even a couple hundred frequent restaurant-goers would provide meaningful data. Then you publish those lists on your site, spend a little money on advertising/promotion and see if anyone cares about them.
I think a lot of people would. There’s a certain appeal to the crowdsourcing and mystery-shopping aspects — which the old Zagat guides really tapped into — but the information would also be more reliable and actionable than anything else available. If a new Szechuan restaurant (my current obsession) opened in New York, I’d honestly be more interested in where it landed in this app’s ranking after a month or two — even if that meant only 5-10 people had visited and ranked it — than in a review in the NY Times or on some food blog.
The algorithm would take a little work, but it also doesn’t have to be as complicated with a small user base in a single city as it would be later — I imagine that cross-city comparisons and aging of reviews would both add some complexity down the road. But that would be a nice problem to have.
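On the aging-of-reviews point specifically, one simple option (purely a hedged sketch, with a half-life I picked arbitrarily for illustration) would be to multiply each list’s contribution in the aggregation sketch above by a recency weight, so a burger someone ranked three years ago counts for less than one ranked last month:

```python
from datetime import datetime, timezone

def recency_weight(ranked_at, half_life_days=365.0, now=None):
    """Weight for a single ranking based on how old it is.

    A ranking entered today gets weight 1.0; one entered half_life_days ago
    counts half as much, and so on (exponential decay). The one-year
    half-life is an arbitrary assumption, not a recommendation.
    """
    now = now or datetime.now(timezone.utc)
    age_days = max((now - ranked_at).total_seconds() / 86400.0, 0.0)
    return 0.5 ** (age_days / half_life_days)

# Example: a ranking entered about two years ago counts roughly 25% as much.
print(recency_weight(datetime(2023, 6, 1, tzinfo=timezone.utc),
                     now=datetime(2025, 6, 1, tzinfo=timezone.utc)))
```

Cross-city comparisons could probably start as nothing fancier than filtering each user’s lists by city before aggregating, as with the NYC burger example above.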
Is there a business model? The (alleged) Yelp approach of extorting restaurants to buy overpriced advertising so you’ll remove/underweight their bad reviews doesn’t really work here. It would be pretty unpleasant for a restaurant to be ranked last in its category, but unless you’re actively trying to exploit that, you’d probably be showing just the top ten anyway in the aggregate lists — who cares about last place vs. next-to-last?
Could you charge users instead, making this an exclusive/insider thing where you need an existing user’s invitation to join, and pay a small monthly fee to participate? Maybe you could still display your own lists to non-members, but you couldn’t show them the aggregate rankings. (You might waive the fee for any month where someone ranks at least x new restaurants, though you don’t want to give people an incentive to lie and rank a restaurant they haven’t actually visited.)
If you wanted a larger user base, another option is for the rankings to be a loss leader for reservations (it looks like OpenTable has an affiliate program) or some other commission product, as ratings are at TripAdvisor and most online travel portals. Even as a completely free service that just plugs into major ad networks, you’d probably at least pay the bills while building up a valuable database.
Now, what else would it work for besides restaurants? It’s hard to think of anything, and maybe that’s why no one has tried this. The startup machine is very good at applying cookie-cutter business models to different categories (I’m sure there have been a few “Rotten Tomatoes for restaurants” attempts, for example), but ratings are an area where no two categories really work the same way, and most of the other obvious ones have less of a clear advantage for rankings over ratings. There are too many variables on which to rate a hotel, and hotels would be hard to sort into clean categories. Same with music, books and movies. And with consumer products, most people don’t try enough different ones to rank them; could you tell me your top five printers, cars or vacuum cleaners with the same confidence as your top five pizza places? Better to rely on professional reviews there.
By contrast, there’s one clear variable on which to rate restaurants — quality of the food — and 90% of them fall neatly into a type of cuisine. You don’t need 100% coverage either; if you leave out some fusion or vague “New American” restaurants because they don’t fit into a clear category, so what? There are still a million other review sites that are all over them. And while it’s true that people rate restaurants on other things like service, atmosphere, delivery time, etc., that’s often just as much of a bug as a feature, because sensitivity to these things varies widely and corrupts the ratings. You could still have people enter text comments on these rankings, and excerpt them like Zagat reviews for the master lists — but they would be more like the brief comments on Instagram posts or Foursquare check-ins, and less like the long, performative stories that seem to rise to the top among Yelp reviews.
The main point is that people care about rankings much more viscerally than ratings. It’s why clickbait sites publish so many arbitrary ranked lists; they know they’ll get lots of comments and shares just from people arguing with them. And I guarantee you some NYC readers have not even made it this far because they were so disgusted by my top five burger choices above. But unlike most exercises in list-making, with restaurants there’s a lot of useful, structured information embedded in those lists. Why not take advantage of it?