Frequently Asked Questions on Power Ratings:

Without going into mathematical details, what are the factors that contribute to a team's rating?

For college, there is only one factor and that is:

(1) Goal Margin of Victory (with a ten-goal ceiling)

For high school, there are four factors:

(1) Goal Margin of Victory (with a ten-goal ceiling)
(2) Won-Loss Record
(3) Correction Factor for winning/losing to a higher/lower rated team.
(4) Winning the State/Class/League Championship

Let's examine how these four factors work.

(1) Margin of Victory

The overwhelming component of these power ratings is the goal margin of victory.
The difference in strength between two teams is defined by the difference in their power
ratings. If these two teams met on a neutral field (no home-team advantage), the
computer program predicts the stronger team would win by a goal margin equal to the
difference in their power ratings. Since running up the score is unsportsmanlike, the
program limits the goal margin of victory to 10 goals, so there is no increase in a
team's computed rating if it wins by more than 10 goals.

So when scores are fed into the program for analysis, a 22-7 win is read in as 17-7.
Whether the actual score was 18-7, 22-7, or 34-7, the program sees 17-7, and the
resulting ratings are the same for all three. We still compute the predicted goal
difference and print it on the team page, e.g., 6, 14, or 25 goals, but no gain or loss is
incurred beyond 10 goals, and no team is penalized for not running up the score.
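
As a rough illustration of how the ten-goal ceiling might be applied when a score is read
in (the function and its layout are ours, not Laxpower's actual code):

    def cap_margin(winner_goals, loser_goals, ceiling=10):
        # Clamp the margin of victory before the score enters the rating math.
        if winner_goals - loser_goals > ceiling:
            winner_goals = loser_goals + ceiling   # e.g., 22-7 is read in as 17-7
        return winner_goals, loser_goals

    # 18-7, 22-7, and 34-7 all collapse to 17-7 for rating purposes:
    for score in [(18, 7), (22, 7), (34, 7)]:
        print(cap_margin(*score))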

The amount of gain or loss a team experiences is directly related to the difference
in power ratings between the two opponents and to the goal margin of victory. If two
teams' power ratings differ by 7 goals rather than 2 goals, there is greater potential
to move up or down in the former case than in the latter. The same holds for the game
score: if the actual goal difference is 8 goals rather than 2 goals, more movement in
the power ratings will occur in the former case than in the latter.

The greatest opportunity to gain points is to play tougher teams with higher power ratings
and, if defeat is unavoidable, at least limit the goal margin of defeat.

(2) Won-Loss Record

This component takes the won-loss percentage, multiplies it by a weighting factor, and
adds it to the other three components. Thus winning improves this component of a team's
rating, whereas losing reduces it.
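
A minimal sketch of this component, assuming a simple linear weighting (the weight value
itself is not published and is a placeholder here):

    def won_loss_component(wins, losses, weight=1.0):
        # Won-loss percentage scaled by a weighting factor; the weight of 1.0
        # is a placeholder, not Laxpower's actual value.
        games = wins + losses
        return weight * wins / games if games else 0.0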

(3) Correction Factor

When an underdog defeats a favorite, the underdog earns correction points. When a favorite
loses to an underdog, the favorite loses correction points. When a favorite defeats an
underdog, fewer points are earned, and when an underdog loses to a favorite, fewer points
are lost. A favorite is defined as a team with a higher power rating than its opponent;
an underdog is a team with a lower power rating than its opponent.
The home-field advantage combined with the power ratings of the two teams determines
the predicted, or expected, magnitude of victory. The greater the predicted margin,
e.g., 1, 2, 3, ..., 10 goals, the greater the correction factor.
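
A hypothetical sketch of the correction logic described above; the constant k, the reduced
swing for expected results, and the linear scaling are all assumptions:

    def correction_points(predicted_margin, upset, k=0.5, expected_scale=0.25):
        # predicted_margin: expected goal margin of the favorite (ratings
        # difference plus home-field advantage). 'upset' means the underdog won.
        swing = k * predicted_margin      # larger predicted margin -> larger correction
        if upset:
            return -swing, swing          # (favorite delta, underdog delta)
        return expected_scale * swing, -expected_scale * swing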

(4) Championship Bonus Points

The tournament champion of a state, class, or certain leagues receives a bonus for winning
that championship. There is no guarantee that the tournament champion will have the
highest power rating, because the entire season counts, not just the post-season
tournament.


Why don't the ratings take into account head-to-head games? They do, insofar as a team
gets credit for a win versus a loss, and possibly an improved rating from the goal margin
as well as correction points or championship bonus points. In most cases, however, one
game is not enough for one team to overtake another; the other games on a team's schedule
cannot simply be ignored or weighted differently.

If one argues "we beat a team, therefore we should be rated higher," then apply this
rule to every team: any team they lose to should automatically have a higher rating. In
the extreme case, suppose a team that goes 10-1 loses to a team that goes 5-5. The 5-5
team now has a power rating higher than the 10-1 team, but wait: the five teams that the
5-5 team lost to must have higher ratings than the 5-5 team, yet the 10-1 team beat most
of those teams!

Head-to-head games between two teams are relevant in the power rating calculation only
insofar as they affect the four components above. All games a team plays are weighted
equally, and no games are singled out for special attention. In summary, if head-to-head
results were considered directly, how would one treat the situation where team A beats
team B, who beats team C, who beats team A?

Why did our rating go down when we just beat a team? The movement of the power ratings
depends on all teams and all games, and the impact of other games affects the ratings
even of teams that have not played! The strength of opponents is constantly re-evaluated
and the game predictions are re-computed with every update. The method Laxpower uses is
described as a "predictor-corrector": with every new piece of information (a new game
played), all calculations are redone (iteration) and the results improve (converge).
Initially the ratings fluctuate significantly, but as the season winds down they do not
change much.
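
A minimal sketch of the predictor-corrector idea, assuming neutral-field games stored as
(team, opponent, capped goal margin) triples; the damping constant and the fixed pass
count are assumptions:

    def iterate_ratings(ratings, games, passes=50, damping=0.5):
        # Repeatedly nudge each rating toward the value that zeroes the team's
        # average prediction error; with enough passes the ratings converge.
        for _ in range(passes):
            for team in ratings:
                errors = [margin - (ratings[t] - ratings[opp])
                          for t, opp, margin in games if t == team]
                if errors:
                    ratings[team] += damping * sum(errors) / len(errors)
        return ratings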

How can you rate a team with a weak schedule higher than a team with a more difficult
schedule? All power ratings are based on the strength of the opponent, the home-field
advantage, and the goal margin. If you play a weak schedule and win by large margins, your
ratings will go up. If you play a tough schedule and are defeated by large margins, your
ratings will go down. The strength of schedule is not treated as an 'explicit' separate
component of the formula, as it is with the Ratings Percentage Index (RPI); rather, it is
an 'implicit' factor in the goal-margin-of-victory calculation, tied to the actual game
score.

Are your ratings prejudiced against ....? The in-region (Reg In) ratings are driven by
a computer program and game scores. National (Norm) ratings use 'Regional Offset Margins'
(ROMs), which are calculated by several different computer analysis techniques. Human
intervention is kept to a minimum and used sparingly, if at all.

How do I know which games hurt our power rating and which games boosted it?
Go to your team's page and click on the 'rating analysis' tab. Under the "+/-" column
you will see games where your team played above its power rating ('+') and where it
played below its power rating ('-'). The more pluses or minuses, the greater the
disparity from its power rating. The Err-L column places a numerical value on the game
performance. The Err-L value reflects the rule that the difference in power ratings
between your opponent and your team, plus the home-field advantage, plus the score
margin should equal zero. In practice it does not, and the value indicates how much
a team played under or over its power rating. The sum of all Err-L values should add
up to zero, meaning the program averages all errors and moves the power rating up or
down until the sum is exactly zero. With the ten-goal limit, however, Err-L will not
sum exactly to zero but will carry a residual attributable to the 10-goal modification.
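
Written out, the relationship behind the Err-L column looks like the sketch below; the
sign conventions are our reading of the description above, not published code:

    def game_error(team_rating, opp_rating, home_advantage, goal_margin):
        # Err-L for one game: zero when the team played exactly to its rating,
        # positive when it played above it, negative when below.
        return (opp_rating - team_rating) + home_advantage + goal_margin

    # A team rated 6 goals better than its opponent, on a neutral field,
    # that wins by 8 played 2 goals above its rating:
    print(game_error(90.0, 84.0, 0.0, 8))   # 2.0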

How accurate are the ratings? There is no way to fully measure the accuracy of the ratings,
because there is no yardstick to measure power rating results against other than personal
opinion. It can be argued that the ratings separate the good teams from the bad in clusters
or groups: teams in the top 100 are all solid teams and teams in the bottom 100 are all
weaker teams. However, because of the limited number of games, and a few goals here and
there, a team ranked 330 could actually be better than a team ranked 320. We receive
complaints that one team beat another team yet the winning team is rated lower than the
losing team, so there must be something wrong with the method. First, the method provides
the information that both teams are ranked in the 300 range out of 3,500 teams, but it may
carry a margin of error of 10 or more ranking places. The actual margin of error in ranking
places is undetermined, because there is no way to perform a more specific error analysis.
Second, no system can fully account for the "upset"; even the more subjective rankings will
often only reward the winner and penalize the higher-ranked losing team rather than replace
it with the lower-ranked team. The subjective and quantitative results are similar in that
both predict the higher-ranked team would win in the majority of cases.

How accurate are the game-to-game predictions by the power ratings? The power ratings can
predict the outcome of high school and college games based on the power ratings of the two
opponents and the home-field advantage factor (home, away, neutral). The predictions are
updated with each game played and improve in accuracy as the season progresses, because
the strength of each team is evaluated more accurately as more data accumulates.
To compute the outcome of a game, simply subtract one team's power rating from the other's
and add the home-field advantage (1 to 2 goals) to the home team. For example, if team A
with a power rating of 97.0 plays team B with a power rating of 89.0 at team B's home
field, the computer ratings predict A will beat B by (97 - 89) - 2.0, or 6 goals. Over the
course of a season, the average error in predicting the outcome is about 3 goals; that is,
on average the actual game score will differ from the predicted game score by 3 goals.
See accuracy for further details. Note that the predictive accuracy of the power ratings
is not a measure of how correctly the computer technique ranks teams, as discussed in the
paragraph above.
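
The example above can be written as a small function; the 2.0-goal home-field advantage
matches the example (actual values run from 1 to 2 goals):

    def predict_margin(rating_a, rating_b, home="neutral", hfa=2.0):
        # Predicted goal margin of team A over team B.
        margin = rating_a - rating_b
        if home == "A":
            margin += hfa
        elif home == "B":
            margin -= hfa
        return margin

    # Team A (97.0) at team B's (89.0) home field:
    print(predict_margin(97.0, 89.0, home="B"))   # (97 - 89) - 2.0 = 6.0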

Where can I find a complete list of the explanation files? The complete list is found here:
Power Ratings Accuracy
Power Ratings Explanation (for college)
Coaches-Computer Rating (CCR)
Correction Factor
External Games
Power Ratings Explanation (for high school)
REG IN/OUT Games
Interdivision Games
Loss Factor
New description (2009) of Power Rating (19 components)
College Poll
Margin of Victory (MOV) component
Quality Win Factor
Regional Offset Margins (ROM)
Rating Percentage Index (RPI)
Frequently Asked Questions on the Power Ratings
Strength of Schedule (SOS)
Ten Goal Limit
Tournament Selection Index (TSI)
Won-Lost Factor

Why are the ratings so complicated? If all teams played identical schedules, the ratings
could simply be based on won-loss records. Because all teams play different schedules,
however, the problem becomes one of evaluating the strength of opponents so that schedules
can be adjusted for degree of difficulty. There is no easy way to do this, and you wind up
with a tradeoff between a simple-to-explain but not very accurate algorithm and a very
complex algorithm that is difficult to explain.

