CanobieFan
Strata Poster
Since everyone else is doing it, I've got 19 of the 25? And all the parks but Kolmården and Hansa
The thing to remember with this as well is that this isn't a way of determining which coasters are the best, but simply a way of going "here's a sample of X number of people, and these are their favourites". So of course, if X is large and only a small number of people have ridden certain rides, their ranking will be lower.
I don't know how the Mitch Hawker poll did the maths, but I assume they did something to combat that? Which I would guess is why it sometimes ended up providing questionable results?
There's probably a way to do it so that you can include people's top 20/25/whatever too, but that would add complications, since those who do that likely have higher counts (and you could then get an "experienced" bias). But again, there's no doubt a way to include it, though you're then stepping into complicated equation territory.
It is very true: T Express, Eejanaika, and a number of other Asian coasters get pulverized when you consider ridership numbers. It is very difficult, statistically, to reconcile a roller coaster ranked #1 by a single rider with a roller coaster ranked #1-3 by 15 people. When is that CF Live Asia again so we can boost ridership numbers?
Given the way our methodology worked out and the caliber of the data we have, we are assuming top-rated roller coasters are those with both high rankings and high ridership numbers.
Mitch Hawker Poll actually operated in the inverse. The base calculation that was used is called a pairwise comparison, which compares the ranking of each roller coaster within a ballot, calculating wins and losses of rank.
This is to say, a roller coaster ranked #1 in a 10-coaster ballot would be awarded 9 wins (not counting itself), while a roller coaster ranked #2 would have 8 wins and 1 loss, and so on. This is why rankings were rendered as pictured below, with W/L categories for each ride:
View attachment 4290
Here's a full detail of results from the 2013 steel coaster poll.
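For anyone curious, that per-ballot win/loss tally is simple to sketch in code. This is just a minimal illustration of the pairwise idea described above, not Hawker's actual implementation, and the coaster names are placeholders:

```python
def ballot_wins_losses(ballot):
    """Given one ballot as an ordered list of coaster names (best first),
    return {coaster: (wins, losses)} for that ballot alone."""
    n = len(ballot)
    # A coaster at 0-based rank i beats the n-1-i coasters below it
    # and loses to the i coasters ranked above it.
    return {coaster: (n - 1 - i, i) for i, coaster in enumerate(ballot)}

ballot = ["Coaster A", "Coaster B", "Coaster C"]
print(ballot_wins_losses(ballot))
# {'Coaster A': (2, 0), 'Coaster B': (1, 1), 'Coaster C': (0, 2)}
```

With a 10-coaster ballot the #1 entry gets 9 wins and the #2 entry 8 wins and 1 loss, exactly as in the table above.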
Since Mitch Hawker did not cap the number of roller coasters that could be entered in the poll - enthusiasts could enter every roller coaster they had ridden - this gave enthusiasts with larger coaster counts greater sway: a roller coaster ranked last in a large ballot racks up far more losses than the same coaster ranked last by an enthusiast with a smaller ballot. So counting straight wins and losses still risks favoring the few enthusiasts with very large ridership numbers. It's still a better method than counting favorites only or the regional point-award system the Golden Tickets use, but it's not bulletproof. (Same as our rankings.)
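You can see that ballot-size effect with a toy aggregation. These are made-up ballots, just to show how the same last-place coaster picks up nine losses from a ten-coaster ballot but only one from a two-coaster ballot:

```python
def aggregate(ballots):
    """Sum pairwise wins and losses for each coaster across all ballots.
    Each ballot is an ordered list of coaster names, best first."""
    totals = {}
    for ballot in ballots:
        n = len(ballot)
        for i, coaster in enumerate(ballot):
            w, l = totals.get(coaster, (0, 0))
            # Rank i earns n-1-i wins and i losses within this ballot.
            totals[coaster] = (w + (n - 1 - i), l + i)
    return totals

ballots = [
    ["A", "B", "C", "D", "E", "F", "G", "H", "I", "X"],  # big ballot, X last
    ["A", "X"],                                          # small ballot, X last
]
print(aggregate(ballots)["X"])
# (0, 10) - nine losses from the big ballot, one from the small one
```

Ranked last both times, X's loss total is dominated by the one large ballot, which is exactly the sway described above.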