How Power Ratings Are Generated


The power rating is based strictly on two sources of data: the score of each game and the site (home or away) at which it was played.  All scores are obtained from the Internet and entered into the program for analysis.  Depending on how the program is configured for a given division or league, games played outside a team's division or league may or may not be included when computing the ratings.

Analyses consist of initially assigning all teams the same rating (or using the final rating from the previous year, if it is available) and modifying that rating, either up or down, as teams play one another.  Winning a game does not necessarily improve a rating.  Rather, winning by more than the anticipated margin does.  If a highly ranked team barely beats a low-ranked team, the higher-ranked team will probably lose points, and the low-ranked team will gain an equal number of points (a certain "conservation of ratings," analogous to the laws of thermodynamics).  Because some teams may run up the score, there is a point of diminishing returns: as score differentials increase, they contribute less and less to the final power rating.  Beating a higher-ranked team counts for much more than beating a lower-ranked team; conversely, losing to a lower-ranked team can cost dearly.

The number of games played (i.e., the sample) does not affect an individual team's rating; it does, however, affect the accuracy of the results, since any statistical prediction is strongly influenced by sample size.  It is important to remember that early-season power ratings are less predictive and subject to greater fluctuations.  It often takes several weeks into the season before the ratings stabilize.
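To make the mechanics concrete, here is a minimal sketch, in Java, of what a single-game update with "conservation of ratings" and diminishing returns might look like.  The learning rate K, the margin cap, and the tanh compression are assumptions chosen for illustration, not LaxPower's actual formula.

    // A minimal sketch (not LaxPower's formula) of one game's rating update.
    public class RatingUpdate {
        static final double K = 0.1;          // hypothetical learning rate
        static final double MARGIN_CAP = 8.0; // hypothetical diminishing-returns cap

        // Compress large margins so running up the score adds little.
        static double effectiveMargin(double rawMargin) {
            return MARGIN_CAP * Math.tanh(rawMargin / MARGIN_CAP);
        }

        // Points gained by one team are lost by the other (zero-sum transfer).
        static double[] update(double ratingA, double ratingB, int goalsA, int goalsB) {
            double expected = ratingA - ratingB;              // anticipated margin for A
            double actual = effectiveMargin(goalsA - goalsB); // compressed actual margin
            double delta = K * (actual - expected);           // surprise drives the change
            return new double[] { ratingA + delta, ratingB - delta };
        }

        public static void main(String[] args) {
            // A highly rated team barely beats a low-rated one: it loses points.
            double[] r = update(24.5, 12.6, 10, 9);
            System.out.printf("A: %.2f  B: %.2f%n", r[0], r[1]);
        }
    }

Running this, the 24.5-rated team drops roughly a point for winning by only one goal, and its opponent gains exactly what it lost.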

As the power rating for each team is computed, the ratings of the other teams change in turn, so the ratings must be recomputed again and again (an iterative process) until the rating for each team no longer changes (that is, convergence takes place).  The problem represented by the example included on this page (see below) is essentially equivalent to solving 53 simultaneous equations (non-linear, for you math buffs) in 53 unknowns.  The solution may require more than 100,000 iterations and cannot practically be done without a computer.
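The iteration itself might be sketched as follows: sweep over every game, nudging the two teams' ratings toward agreement with the result, and stop once no rating moves by more than a small tolerance.  The data layout, step size, and linear update rule here are assumptions for illustration, not the actual LaxPower code.

    import java.util.*;

    // Sketch of the iterative solution: repeat sweeps until ratings stop moving.
    public class Solver {
        record Game(int home, int away, int homeGoals, int awayGoals) {}

        static double[] solve(int nTeams, List<Game> games) {
            double[] r = new double[nTeams];      // all teams start with the same rating
            final double step = 0.01, tol = 1e-6; // hypothetical step size and tolerance
            double maxChange;
            do {
                double[] prev = r.clone();
                for (Game g : games) {            // one sweep over every game played
                    double expected = r[g.home()] - r[g.away()];
                    double actual = g.homeGoals() - g.awayGoals();
                    double delta = step * (actual - expected);
                    r[g.home()] += delta;         // zero-sum: points move between the
                    r[g.away()] -= delta;         // two teams, conserving the total
                }
                maxChange = 0.0;                  // largest rating movement this sweep
                for (int i = 0; i < nTeams; i++)
                    maxChange = Math.max(maxChange, Math.abs(r[i] - prev[i]));
            } while (maxChange > tol);            // convergence: nothing still changing
            return r;
        }

        public static void main(String[] args) {
            List<Game> games = List.of(new Game(0, 1, 12, 8),
                                       new Game(1, 2, 10, 9),
                                       new Game(2, 0, 7, 11));
            System.out.println(Arrays.toString(solve(3, games)));
        }
    }

Even this toy three-team example takes hundreds of sweeps to settle, which is why the full 53-team problem can run past 100,000 iterations.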

As an example, you can run the program on your own computer by selecting the iteration button below.  A "Java applet" containing the actual code will begin iterating for 250 cycles; you can see the iteration counter in the status bar at the bottom.  Each time you select iterations, it will calculate another 250 iterations toward a solution.  If you select results, it will display the power ratings and strength of schedule as of that iteration.  These results are not accurate, however, until all values have stopped changing, which takes many thousands of iterations.  In 1998, Princeton was number 1, and as you select iterations repeatedly, Princeton will eventually move into the #1 ranking as "convergence" (a constant solution) is finally achieved.

This calculation is actually performed on YOUR computer, running a set of Java instructions transmitted through your browser.  The speed of the calculation is governed by the speed of your computer's CPU: 486 computers are slow, whereas a 400 MHz Pentium II will complete the necessary iterations much faster.

Sometimes the results appear surprising, if not startling, when compared to the polls.  If, however, one examines each team's schedule along with its performance and its opponents' power ratings, it is generally easy to see why a team achieved a certain power rating.  Beating highly ranked teams, or even losing to top-ten teams by only one or two goals, counts for more in this rating than simply beating lesser teams, perhaps even while going undefeated.  Winning is obviously important, but strength of schedule is critical.

Another peculiarity occurs when Team A beats Team B, yet Team A has the lower power rating.  This can happen if Team A did not fare as well in its other games; the rating reflects all games played, not just one.  Head-to-head encounters are treated as just one game and receive no added emphasis.

Criteria or considerations for the power ratings are partially subjective and include such things as the following (a sketch of how several of these might enter the calculation appears after the list):

  1. What is "home field advantage" worth statistically?  How many goals should be given to the home team? 

  2. What is (or what should be) the rate of diminishing return on "running up the score"?  That is, at what point does the increase in the margin of victory really become meaningless?

  3. How do you rate a win in a game involving two high-rated teams compared to a win in a game involving two low-rated teams?

  4. If Team A beats Team B but, four days later, loses to Team C because it was not quite as prepared, should that game be weighted the same as one for which the team had more time to prepare?

  5. Should games be weighted more heavily as the season progresses and, if so, what weighting factors should be employed?
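As a rough sketch, questions 1, 2, and 5 might surface as tunable parameters.  The names and values below are hypothetical, not LaxPower's settings:

    // Illustrative knobs corresponding to the subjective criteria above.
    public class RatingParams {
        double homeGoals = 1.5;   // question 1: home field advantage, in goals
        double marginCap = 8.0;   // question 2: margin beyond which goals count little
        double lateWeight = 1.25; // question 5: extra weight for late-season games

        // Expected margin for the home team, shifted by home advantage (question 1).
        double expectedMargin(double homeRating, double awayRating) {
            return (homeRating - awayRating) + homeGoals;
        }

        // Compress blowout margins so running up the score stops paying (question 2).
        double cappedMargin(double rawMargin) {
            return marginCap * Math.tanh(rawMargin / marginCap);
        }

        public static void main(String[] args) {
            RatingParams p = new RatingParams();
            System.out.printf("a 20-goal blowout counts as %.2f%n", p.cappedMargin(20));
        }
    }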

Here, by way of example, is a loose interpretation of the power ratings.  Assume the following:

Team A has a power rating = 24.5

Team B has a power rating = 12.6

If Team A plays Team B on a neutral field, Team A would be expected to beat Team B by 24.5 - 12.6, or approximately 12 goals.  This, however, is a statistical result, which means that if the sample were large enough -- say, Team A played Team B 1,000 times -- the average goal difference would be about +12 in favor of Team A.  Certainly the number will vary from game to game, including games in which Team B beats Team A, but on average, to the extent that the power ratings are accurate and the sample is large, the goal differential should hold.
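The "1,000 games" claim can be illustrated with a tiny simulation that treats each game's margin as the rating difference plus random noise.  The Gaussian noise model and its spread are assumptions for illustration only:

    import java.util.Random;

    // Simulate 1,000 meetings between a 24.5-rated and a 12.6-rated team.
    public class ExpectedMargin {
        public static void main(String[] args) {
            double teamA = 24.5, teamB = 12.6, sigma = 6.0; // sigma is hypothetical
            Random rng = new Random(42);
            double total = 0;
            int bWins = 0;
            for (int i = 0; i < 1000; i++) {
                double margin = (teamA - teamB) + sigma * rng.nextGaussian();
                total += margin;
                if (margin < 0) bWins++;        // Team B pulls the upset
            }
            System.out.printf("average margin %.2f, Team B won %d of 1000%n",
                              total / 1000, bWins);
        }
    }

The average margin comes out near +12 for Team A, yet Team B still steals a handful of games, exactly the behavior described above.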

In addition to the power ratings, the LaxPower pages report strength of schedule.  This statistic is computed simply by averaging the power ratings of each team's opponents.  The logic is straightforward: if you play more highly rated teams, your strength of schedule should of course be higher.
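In code, strength of schedule is just an average over a team's opponents; the data layout here is assumed for illustration:

    import java.util.List;

    // Strength of schedule: average of a team's opponents' power ratings.
    public class StrengthOfSchedule {
        static double sos(List<Integer> opponents, double[] ratings) {
            return opponents.stream().mapToDouble(i -> ratings[i]).average().orElse(0.0);
        }

        public static void main(String[] args) {
            double[] ratings = { 24.5, 12.6, 18.3 };
            // Team 0 played teams 1 and 2, so its SOS is (12.6 + 18.3) / 2.
            System.out.printf("SOS = %.2f%n", sos(List.of(1, 2), ratings));
        }
    }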

LaxPower's predictions of the probability of each team winning the NCAA tournament are based on the power ratings and a random number generator that plays out the field 64,000 times, keeping track of the winning team each time.  If a team wins 32,000 of the simulations, then there is a 50% probability that that team will win the tournament -- provided, as always, that the power ratings are accurate.
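A minimal sketch of that Monte Carlo estimate follows, using a hypothetical eight-team single-elimination bracket and the same noisy-margin model as above; the field, the seeding, and the noise spread are all assumptions, not the actual tournament setup:

    import java.util.Random;

    // Play out a single-elimination bracket 64,000 times and count titles.
    public class TournamentOdds {
        public static void main(String[] args) {
            double[] rating = { 24.5, 22.1, 19.8, 18.3, 16.0, 14.2, 13.1, 12.6 };
            double sigma = 6.0;                  // hypothetical per-game noise
            int sims = 64_000;
            int[] titles = new int[rating.length];
            Random rng = new Random();
            for (int s = 0; s < sims; s++) {
                int[] alive = { 0, 1, 2, 3, 4, 5, 6, 7 };
                while (alive.length > 1) {       // one round of the bracket
                    int[] next = new int[alive.length / 2];
                    for (int g = 0; g < next.length; g++) {
                        int a = alive[2 * g], b = alive[2 * g + 1];
                        double margin = rating[a] - rating[b] + sigma * rng.nextGaussian();
                        next[g] = margin >= 0 ? a : b;  // winner advances
                    }
                    alive = next;
                }
                titles[alive[0]]++;              // record the champion of this sim
            }
            for (int t = 0; t < rating.length; t++)
                System.out.printf("team %d: %.1f%% of %d sims%n",
                                  t, 100.0 * titles[t] / sims, sims);
        }
    }

A team that wins half the simulated brackets comes out at 50%, matching the interpretation given above.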
