Several years ago, the National Debate Rankings provided LDers with a system to compare the results of debaters throughout the season. Since then, few attempts have been made to implement a new ranking system, and those that have tried have not been able to provide a lasting solution.
Earlier this year, I tried to implement a ranking system that took into account the size of the tournament and the strength of the field in addition to the debater's success. After a few tournaments, the rankings seemed unfairly skewed in favor of debaters who attended the most tournaments rather than those who performed well at a smaller number of them. As a result, no reasonably informed debater would have agreed with the rankings, since the premier debaters were consistently ranked fairly low.
I'd like to give the ranking system another go, but I would like to hear input from the community. What do you think should be included in the ranking? Should the size of the tournament and the strength of the tournament hold equal weight? Should the debater's total score (used to rank debaters) be an average or a sum of past performances?
If you have any other comments, I would love to hear them. I will weigh in on some of the issues and update you all as I begin to formulate the method I will use.
I am going to repost my explanation of the old system. Keep in mind that this didn't really work so well, but I would love to incorporate elements of the old system into the new one.
--------
"New Feature: The Circuit Debater Rankings"
A. Background:
Last season, numerous groups attempted to create a comprehensive system for ranking debaters on the Lincoln-Douglas national circuit. However, it appeared that no group was entirely successful in this mission. During this discussion, though, some debaters threw around the idea of a system similar to that implemented in college policy debate.
After toying with the idea, I have decided to create a similar system that has been tweaked for high school LD to rank debaters during the 2008-2009 season.
The Circuit Debater Rankings, referred to as CDR for the rest of this explanation, will be heavily influenced by the Bruschke System used in college policy, as explained earlier. However, because the Bruschke System relies on teams to input data, some steps have been taken to simplify the process in LD so that one person can cover an entire season, but still provide accurate results.
Most importantly, it is worth noting that these rankings should be taken for what they are - nothing more than speculation about where debaters from all over the country rank in relation to one another. Naturally, these are not meant to judge a person's self-worth, ability to debate, or anything else. This ranking system, contrary to popular belief, is intended to provide some method to evaluate the results of debate tournaments and show where debaters stand based on those results.
B. Tournament Score
The fundamental part of CDR is how the system scores debaters for their performance at individual tournaments. The results of tournaments will play a number of roles, as you will read, in calculating the position of debaters once all data is entered. I will try my best to enter the results of every Tournament of Champions qualifier; however, the most tangible limitation is the availability of results packets. If a tournament is not entered, I would appreciate it if either a) you submit a link to the results packet, if one is available, or b) you upload the packet online yourself. Otherwise, since I obviously do not attend every tournament, there is no way around this problem.
On to the explanation of tournament scores. I will explain how these scores function in creating the overall ranking in the next section; this section will illustrate how debaters earn points at individual tournaments.
First, debaters will be rank-ordered based on their performance at the tournament. First place goes to the champion, and second place to the loser in finals (except when the debaters do not compete in the final round and are named co-champions, in which case the method explained next will be used). From there, debaters will be ranked at each level (semifinalists, then quarterfinalists, and so on) based on 1) preliminary round record, 2) adjusted points, 3) total points. Once we can no longer rank debaters who cleared to elimination rounds, the same process applies to preliminary rounds - best 4-2s, then 3-3s, and so on. After debaters have been ranked in this manner, each will be given a percentile score, which we will call the "performance rating." So, Debater X, who won a tournament with 120 competitors, receives a "performance rating" of 120/120, or 1.00. Only performances that place a debater in the top half of the field earn a "performance rating." Not only does this make my job easier - I don't have to rank all 300 debaters at a large octofinals tournament - but it also avoids cruelly placing novices and inexperienced debaters with 1-5 or 0-6 records as the worst on the circuit. When the field has an odd number of competitors (for instance, 41 debaters), we will use the larger half (so, rank the top 21 rather than the top 20).
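As a rough sketch, the performance rating described above could be computed as follows. The function name, and the toy two-debater field in the example, are my own illustration, not part of the system itself:

```python
import math

def performance_ratings(ranked_names, field_size):
    """Assign percentile "performance ratings" to an ordered list of debaters.

    ranked_names: debaters ordered best-first (champion at index 0),
    already truncated to the top half of the field.
    field_size: total number of entries at the tournament.
    Only the top half (rounded up for odd fields) earns a rating.
    """
    cutoff = math.ceil(field_size / 2)  # e.g. 41 entries -> rank the top 21
    ratings = {}
    for rank, name in enumerate(ranked_names[:cutoff], start=1):
        # Winner of a 120-entry field: (120 - 1 + 1) / 120 = 1.00
        ratings[name] = (field_size - rank + 1) / field_size
    return ratings

# Hypothetical 4-entry field, shown small for brevity
print(performance_ratings(["A", "B"], 4))  # {'A': 1.0, 'B': 0.75}
```

The winner always scores 1.00 regardless of field size; the size of the field only affects how quickly the rating falls off as you move down the ranks.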
Second, after calculating the debater's "performance rating," we will give the tournament a "size rating." Just as debaters are compared to the best debater in the field by the "performance rating," the "size rating" compares tournaments to the biggest tournament of the year. Thus, this rating will change over time as more tournaments occur, assuming that some tournaments later in the year are larger than those earlier in the year (which, of course, is true). So, if Tournament X is, hypothetically, the year's largest with 250 debaters, then Tournament Y, which boasts a field of 125 debaters, receives a "size rating" of .50. While some may object to size being a factor, I believe it should be, because a) size is taken into account when bid levels are assigned, and b) a large field suggests that a tournament attracts a number of good debaters or is run well enough to draw them. This may be false in a few cases, but I think the reasoning is usually sound.
Third, the tournament will receive a "strength rating." This factor is meant to offset the artificially high rating that big tournaments receive by measuring how strong a field is, which allows small but solid tournaments to compete with overgrown, weak ones. Like the other factors, the "strength rating" is comparative and yields a percentile score: the number of debaters ranked in the CDR top 50 who attended the tournament is divided by the highest number of top-50 debaters to attend any tournament over the course of the year. Like the size rating, this factor will change as more tournaments are added, and so will shift a debater's tournament scores over the course of the year.
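Both tournament-level ratings are simple ratios against the season's best mark so far. A minimal sketch, with function names of my own choosing and figures echoing the examples above:

```python
def size_rating(field_size, largest_field):
    """Compare this field to the season's largest field so far.

    E.g., 125 entries against a 250-entry season maximum -> 0.50.
    """
    return field_size / largest_field

def strength_rating(top50_in_field, most_top50_anywhere):
    """Compare this field's CDR top-50 turnout to the best turnout
    at any tournament so far in the season."""
    return top50_in_field / most_top50_anywhere

print(size_rating(125, 250))    # 0.5
print(strength_rating(10, 40))  # 0.25
```

Because the denominators are season-wide maximums, both ratings (and therefore every tournament score built from them) must be recomputed whenever a new tournament sets a new maximum.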
Underview: Regarding the last two ratings, you may be wondering how the ranking can be calculated in the first few weeks, before most tournaments have finished. I plan to wait until after the first two major tournament weekends: September 5th (Grapevine) and September 12th (Greenhill and Wake Forest). After those tournaments, I will build a ranking using only the "performance rating" and the "size rating," since both can be known from performances and from whichever tournament has the biggest field so far. I will then use the top 50 from that list to create the first "strength rating," re-rank the debaters, and publish the first official ranking of the year. Naturally, the results will become more accurate as more tournaments occur and the top 50 becomes more precise.
C. Overall Score
Now that we have discussed how the different ratings are calculated, I will explain how they function together.
The tournament score is simply the product of the three ratings: "performance rating" x "size rating" x "strength rating." Pretty simple.
However, you may now be wondering how many tournaments count toward a debater's season total. This, of course, is another fiercely debated issue. I will count a debater's top seven tournament scores over the course of the season. While one may argue that debaters who attend fewer tournaments are hurt by this calculation, I don't see any way to avoid it. If I have to draw the line somewhere, I think seven allows the ranking to consider the best performances of top debaters across the season while preventing those rare kids who go to 10+ tournaments from beating the system, so to speak.
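Putting the pieces together, the overall score could be sketched as follows. The function names and all of the numbers are invented for illustration:

```python
def tournament_score(performance, size, strength):
    # A tournament score is the product of the three ratings
    return performance * size * strength

def season_score(scores, best_n=7):
    # Only a debater's best seven tournament scores count toward the season
    return sum(sorted(scores, reverse=True)[:best_n])

# Winning the year's biggest tournament with a quarter-strength field:
print(tournament_score(1.0, 1.0, 0.25))  # 0.25

# A hypothetical debater's eight tournament scores; the weakest (0.4) is dropped
scores = [0.9, 0.8, 0.85, 0.4, 0.7, 0.95, 0.6, 0.5]
print(round(season_score(scores), 2))  # 5.3
```

Summing (rather than averaging) the best seven rewards debaters who rack up several strong performances, while the cap keeps sheer tournament volume from dominating the ranking.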
I will then compile these top scores and publish the full results on this website in a PDF.
D. Answers to Possible Objections
1) This system shouldn't rank people who don't want to be ranked.
Fair enough. If you would like to be removed from the ranking for whatever reason, please drop us an e-mail and I will happily remove you, no questions asked.
2) This system fosters elitism.
Sorry. Like I said before, the ranking is not meant to be an evaluation of the worth of people or even assess who is the best debater on some subjective scale. I am just trying to take cold hard numbers and assess the performances of debaters over the course of the season.
3) There are problems with the system.
A lot of the debate on the topic of the "best system" has led to irreconcilable positions. I understand that there are other arguments for why a different system would be better, but at some point you have to choose one side or another to create a ranking. I think this does a pretty good job at providing a system that a) considers the strength of the field and b) is based directly on bid levels of tournaments.