US Portsmouth v2

JimC

Not actually an anarchist.
8,248
1,193
South East England
The results for each club give figures for each class, whether they sail one or fifty of them. The number is, roughly speaking, an average of about the top two thirds of the fleet, but how it actually works is that results are discarded from the calculation if the boat hasn't achieved a reasonable performance or if there were too few boats in total in the race. There's then a calculation to equalise results from different clubs. So the end result is a number that represents that average across all those clubs, together with the number of boats and the number of races that have made up that calculation.
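For readers who think in code, a minimal sketch of that general shape might look like the fragment below: discard thin or weak results, average what survives, then equalise between clubs. The thresholds, field names and the idea of a pre-computed per-boat "achieved number" and per-club correction factor are illustrative assumptions, not the actual PYonline calculation.

```python
# Illustrative sketch only, not the RYA/PYonline algorithm.
# Assumed inputs: each result already carries a back-calculated "achieved number"
# for that boat in that race, a finish fraction (0 = won, 1 = last) and the
# fleet size. The two thresholds are guesses at "reasonable performance" and
# "too few boats in the race".
from dataclasses import dataclass
from statistics import mean

@dataclass
class Result:
    club: str
    race_id: str
    class_name: str
    achieved_number: float   # back-calculated number for this boat in this race
    finish_fraction: float   # 0.0 = won the race, 1.0 = finished last
    fleet_size: int          # boats that started the race

MIN_FLEET = 4               # assumed cut-off for "too few boats in the race"
MAX_FINISH_FRACTION = 0.67  # assumed cut-off, roughly the top two thirds

def class_number(results, class_name, club_factor):
    """Return (number, results_used, races_used) for one class, or None.

    club_factor maps club -> a correction factor that equalises that club's
    results against a common baseline; how that factor is derived is exactly
    the part this sketch leaves out.
    """
    usable = [r for r in results
              if r.class_name == class_name
              and r.fleet_size >= MIN_FLEET
              and r.finish_fraction <= MAX_FINISH_FRACTION]
    if not usable:
        return None
    equalised = [r.achieved_number * club_factor.get(r.club, 1.0) for r in usable]
    races = len({(r.club, r.race_id) for r in usable})
    return round(mean(equalised)), len(usable), races
```

Whether a number produced like that is sound enough to publish is the human-judgement step described next.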

So there's a human judgement in looking at the number of boats and races and the amount of variation over the previous 3 years and between clubs to decide whether the number is soundly based enough to publish. It doesn't really matter if there are 100 results and 20 boats from 5 clubs or 100 results and 20 boats from 20 clubs.

I don't know what plans there are for the US data. It's not something that directly concerned me. There was enough data to consider publishing a number for a US class that we don't really see in the UK, but we decided not to in the end. So that's the public numbers.

However there's more to it than that. PYonline also generates a report for an individual club's results, and it's possible for several clubs to link their data so they can produce calculations not only for their own club, but also for that collection of clubs. So that's something I would think US clubs would wish to do. What you can get for each class is a sort of report-cum-certificate, which has the result and a confidence factor.
 

Attachments

  • pyonline report.pdf
    60.7 KB

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
There is no getting away from the performance of the sailor impacting the rating if the rating system uses past performance and the class is small, or down to a single boat.

I'm familiar with how several of the people previously involved in rating Ches-PHRF worked, and that experience implied a lot of prejudice based upon the personal experience of the people assigning the rating. Never in those conversations did anyone indicate that they ran large statistical analyses of various boats against other boats, or had an overall system to ingest boat measurements and past performance and spit out adjusted ratings on a year-to-year basis.

After I learn more from PY and DPN I'll look into doing more, but I feel like I need to gain more experience and improve my statistical skills without purchasing something like SPSS or STATA, yet...
PHRF starts from the measurements and design. It pays attention to one-design performance and eyeballs the one-off boats in question. Prejudice based on experience is quite different from bias. My hunch is that you want to avoid at all costs the rating you are using becoming the Foredeck Shuffle / XYZ Club rating. I don't think you can avoid this responsibility as the handicapper! Have at the stats... of course... you face the age-old garbage in, garbage out. That is why Dixie Portsmouth is stuck... the system can't work with non-fleet data as input.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
The results for each club give figures for each class, whether they sail one or fifty of them. The number is, roughly speaking, an average of about the top two thirds of the fleet, but how it actually works is that results are discarded from the calculation if the boat hasn't achieved a reasonable performance or if there were too few boats in total in the race. There's then a calculation to equalise results from different clubs. So the end result is a number that represents that average across all those clubs, together with the number of boats and the number of races that have made up that calculation.

So there's a human judgement in looking at the number of boats and races and the amount of variation over the previous 3 years and between clubs to decide whether the number is soundly based enough to publish. It doesn't really matter if there are 100 results and 20 boats from 5 clubs or 100 results and 20 boats from 20 clubs.

I don't know what plans there are for the US data. It's not something that directly concerned me. There was enough data to consider publishing a number for a US class that we don't really see in the UK, but we decided not to in the end. So that's the public numbers.

However there's more to it than that. PYonline also generates a report for an individual club's results, and it's possible for several clubs to link their data so they can produce calculations not only for their own club, but also for that collection of clubs. So that's something I would think US clubs would wish to do. What you can get for each class is a sort of report-cum-certificate, which has the result and a confidence factor.
I think that people hope the RYA will give them a rating that they can then interpolate into the old and frozen Dixie PN table... This has no integrity, but it probably has a lot of support from club racers who hate change and to whom moving to the RYA table just seems wrong.

I would like to know the reason for not including a US class in the RYA table and can only assume that politics was important... Politics should be important, because you need a great deal of buy-in to make expectations manageable. This is not a trivial factor.
Bottom line... not a great outcome so far. Foredeck Shuffle's announced plans are an example of having your cake and eating it too.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
PYs are numbers the PY system suggests to clubs. If there isn't an established number, it is up to the club to decide one, assuming it accepts the boat for racing.
"Suggests" is the weasel word here... I can't see a club telling a sailor that the published rating in the suggested table is wrong and that they are changing it on their own club authority. Back in the day the US multis used NAMSA, which was the proprietary property of its creator. Herb would get new results, run the program and generate a new table. He suggested this version was more accurate... you decide. This did not work for the sailors. My club decided to freeze the NAMSA table at the beginning of the season... and then we moved to Dixie Portsmouth. So... the club WANTS that responsibility shifted to the national authority. Moreover, clubs and sailors want the governing authority to speak authoritatively. Suggestions are not helpful. Suggestions from the national authority are more like the Ten Commandments, i.e. set in stone.
 

Foredeck Shuffle

More of a Stoic Cynic, Anarchy Sounds Exhausting
Oh they do. Definitely. I am aware of examples.

Never knew you Americans were so keen to be compliant to a central authority.
Depending upon the segment of society being addressed, centralized control is only desirable when it is in their own perceived interests.

Some segments demand less control over everything until they find that everything has turned to shit and is endangering them directly. Then they want people keelhauled for not ensuring that things wouldn't turn to shit, and they want new people to fix everything that went to shit, quickly afterwards demanding a reduction in central control, in a non-virtuous cycle of poor logic.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
Depending upon the segment of society being addressed, centralized control is only desirable when it is in their own perceived interests.

Some segments demand less control over everything until they find that everything has turned to shit and is endangering them directly. Then they want people keelhauled for not ensuring that things wouldn't turn to shit, and they want new people to fix everything that went to shit, quickly afterwards demanding a reduction in central control, in a non-virtuous cycle of poor logic.
I doubt you can find a handful of people who care about the nuts and bolts of generating a handicap table with integrity. As you say... most operate in their perceived interests. The only way out of that doom loop you describe is to stand on the integrity of the system, the data collection, the vetting and of course the algorithm (Dixie US PN, RYA US PN, etc.), and on the authority that comes from the national sailing authority. Standing on these two platforms you can get a consensus and buy-in.
 
Last edited:

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
Oh they do. Definitely. I am aware of examples.
I can believe that this could happen on your side of the pond... My guess is that the expectations from handicap racing are different between us. I think it's less about rejecting authority and a whole lot more of "my dick is bigger"... e.g. my single-sail Laser-like dinghy's rating is much, much better than your Olympic ILCA's... my rating is faster!
I watched dealers campaign for provisional ratings so they could market just that feature. Human nature... it registers a visceral victory when my faster-rated dinghy passes your slower-rated dinghy on the water (especially when others are watching)... numbers be damned!
 

JimC

Not actually an anarchist.
8,248
1,193
South East England
I would like to know the reason for not including a US class in the RYA table
The lack of any in the UK and the number being marginal for publication anyway.

I don't know anything about what is agreed or planned for US use of PYonline; the topic hasn't, to my knowledge, been discussed with the advisory group that makes such decisions for the UK. I'm no longer in the group, so from now on I'll know no more than the rest of you.

My utterly personal suggestion is that US clubs using PYonline should liaise with US Sailing to put them in contact with each other, and then they could use the link facility to produce numbers based solely on US results.
 
Last edited:

Prism

New member
46
25
You need a way of dealing with the uncertainty that you will have in the yardstick number. As discussed, this uncertainty arises from low data numbers, small sample sizes, performance differences in different wind and water conditions, simplifications in the yardstick-generating algorithm, etc.
How to cope with this uncertainty? Probability-based results.
Imagine a Thistle and a Lightning race each other, and on yardstick the Thistle wins by 1 second. Can you really say that it definitely won? No, you can't. You can say there is a slightly greater than even chance that it won, but that any small error in the handicap means it may not have. If there is a 51% chance of winning it would score 1.49 points vs 1.51 to the Lightning.
If it wins by 20 seconds maybe there is now a 75% chance it actually won. So it scores 1.25 points vs 1.75 to the Lightning. I.e. the more certain the win, the nearer to 1 point.
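In formula terms (assuming the score is simply the expected finishing place, which matches those numbers): for a two-boat race with win probability $p$,

$$\text{score} = 1 \cdot p + 2 \cdot (1 - p) = 2 - p,$$

so $p = 0.51$ gives 1.49 points and $p = 0.75$ gives 1.25.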

This is simple to run, as you still only need to enter the time and yardstick. Behind the scenes a standard deviation is used to generate a probability density function that allows a Monte Carlo analysis of all race result permutations.
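A minimal sketch of how such a scoring pass might be coded, assuming a normal error on each yardstick and scoring each boat with its average simulated finishing place; the standard deviation, boats, times and yardsticks below are illustrative assumptions, not an actual implementation.

```python
# Hedged sketch of probability-based scoring: perturb each yardstick with
# normal noise, re-correct the elapsed times, and average the finishing
# places over many trials. The 1.5% standard deviation is an assumption.
import random
from statistics import mean

def monte_carlo_scores(elapsed, yardstick, sd_fraction=0.015, trials=20000):
    """Return the expected finishing place (the score) for each boat."""
    places = {boat: [] for boat in elapsed}
    for _ in range(trials):
        corrected = {
            boat: elapsed[boat] / (yardstick[boat] * random.gauss(1.0, sd_fraction))
            for boat in elapsed
        }
        order = sorted(corrected, key=corrected.get)   # fastest corrected time first
        for place, boat in enumerate(order, start=1):
            places[boat].append(place)
    return {boat: round(mean(p), 2) for boat, p in places.items()}

# Illustrative two-boat example: a near tie on corrected time scores roughly
# 1.49 vs 1.51; widen the margin and the winner's score moves towards 1.0.
print(monte_carlo_scores(
    elapsed={"Thistle": 3605.0, "Lightning": 3780.0},   # seconds, made up
    yardstick={"Thistle": 83.0, "Lightning": 87.0},     # illustrative numbers
))
```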
 

dogwatch

Super Anarchist
17,981
2,250
South Coast, UK
I doubt you can find a handful of people who care about the nuts and bolts of generating a handicap table with integrity. As you say... most operate in their perceived interests. The only way out of that doom loop that you describe is to stand on the integrity of the system, data collection, vetting and of course the algorithm (dixie uspn, rya uspn, etc etc) and the authority that comes from the national sailing authority. standing on these two platforms you can get a consensus and buy in.
I don’t really think so. That is to put an expectation of perfection on dinghy handicaps that cannot possibly be met. For instance, everyone thinks about wind strength but tide matters too. A slow boat that can barely make progress uptide isn’t on its winning day.
 

Curious2

Anarchist
937
538
"Suggests" is the weasel word here... I can't see a club telling a sailor that the published rating in the suggested table is wrong and that they are changing it on their own club authority.

Clubs change national numbers sometimes in Australia, just as they do in the UK; my last three clubs have modified the national yardsticks and no-one has worried. In part that may be because we have used the performance of the relevant sailors at their championships as evidence of their standard compared to the rest of the fleet; i.e. when an Olympic medallist and current world champ is losing races by minutes, it is apparent that their boat is rated wrongly for the area.

One major class here is studying its handicap with the intent of getting it rated faster due to changes in class rules. I don't think it's done for marketing at all, but for the sake of fairness. After all, if you want to get deadly serious then do it at class championships and not at club level.
 
Last edited:

Curious2

Anarchist
937
538
You need a way of dealing with the uncertainty that you will have in the yardstick number. As discussed, this uncertainty arises from low data numbers, small sample sizes, performance differences in different wind and water conditions, simplifications in the yardstick-generating algorithm, etc.
How to cope with this uncertainty? Probability-based results.
Imagine a Thistle and a Lightning race each other, and on yardstick the Thistle wins by 1 second. Can you really say that it definitely won? No, you can't. You can say there is a slightly greater than even chance that it won, but that any small error in the handicap means it may not have. If there is a 51% chance of winning it would score 1.49 points vs 1.51 to the Lightning.
If it wins by 20 seconds maybe there is now a 75% chance it actually won. So it scores 1.25 points vs 1.75 to the Lightning. I.e. the more certain the win, the nearer to 1 point.

This is simple to run, as you still only need to enter the time and yardstick. Behind the scenes a standard deviation is used to generate a probability density function that allows a Monte Carlo analysis of all race result permutations.

Very interesting, but what about the issue that the corrected-time difference between boats can vary so much according to the conditions? Sometimes on our fluky waterway the time differences become very much larger than the true skill difference, and the "delta" becomes vastly inconsistent with the usual finishing order. One can see a situation where one boat beats another (perhaps an identical OD) 80% of the time, but the losing boat may end up with a better score for the year because the time difference blew out in one or two races.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
Clubs change national numbers sometimes in Australia, just as they do in the UK; my last three clubs have modified the national yardsticks and no-one has worried. In part that may be because we have used the performance of the relevant sailors at their championships as evidence of their standard compared to the rest of the fleet; i.e. when an Olympic medallist and current world champ is losing races by minutes, it is apparent that their boat is rated wrongly for the area.

One major class here is studying its handicap with the intent of getting it rated faster due to changes in class rules. I don't think it's done for marketing at all, but for the sake of fairness. After all, if you want to get deadly serious then do it at class championships and not at club level.
Great points... some comments. The class has a different incentive than the dealer selling that class to the rec market. Human nature...

"Performance of relevant sailors ..." But this approach is antithetical to the dixie pn or the rya pn which if used with integrity simply crunches the numbers. There is nothing wrong with this.... because you are transparent about how you are determining ratings.

"for the sake of fairness" I believe there is a unanimous consensus for this principle. Now the question is how do you get there... the Aussie way, Dixie PN, RYA PN, a dinghy measurement rule, a probabilistic monte carlo stat approach, and for Multihulls, a multihull measurement rule (SCHRS and Texel)

Back to Foredeck's USA problem of 7 essentially one-off race boats in a dinghy handicap start... I suggest the fastest way to a "fair handicap table" is simply to emulate the Aussies and any US PHRF committee and work the rating using as much circumstantial and statistical data as you can. Don't pretend that you are using Dixie PN or the RYA Portsmouth system, and just be transparent about your process.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
I don’t really think so. That is to put an expectation of perfection on dinghy handicaps that cannot possibly be met. For instance, everyone thinks about wind strength but tide matters too. A slow boat that can barely make progress uptide isn’t on its winning day.
I agree that expectations for a handicap race and for a one-design race are partially conflated in most racers' heads... If you asked a one-design sailor who loses regularly by 10 to 30 seconds a race to estimate his handicap, I am sure you would get a wild-ass guess. Given this reality... I maintain that all you have is the integrity of how you run the process and the authority afforded by your national sailing authority. The US is simply stuck on a system that can't solve today's reality.
 

Tcatman

Super Anarchist
1,572
162
Chesapeake Bay
You need a way of dealing with the uncertainty that you will have in the yardstick number. As discussed, this uncertainty arises from low data numbers, small sample sizes, performance differences in different wind and water conditions, simplifications in the yardstick-generating algorithm, etc.
How to cope with this uncertainty? Probability-based results.
Imagine a Thistle and a Lightning race each other, and on yardstick the Thistle wins by 1 second. Can you really say that it definitely won? No, you can't. You can say there is a slightly greater than even chance that it won, but that any small error in the handicap means it may not have. If there is a 51% chance of winning it would score 1.49 points vs 1.51 to the Lightning.
If it wins by 20 seconds maybe there is now a 75% chance it actually won. So it scores 1.25 points vs 1.75 to the Lightning. I.e. the more certain the win, the nearer to 1 point.

This is simple to run, as you still only need to enter the time and yardstick. Behind the scenes a standard deviation is used to generate a probability density function that allows a Monte Carlo analysis of all race result permutations.
Interesting approach. I distinguish the accuracy of the rating table (for the classes in the race) from the precision I have for each rating, and from the resolution I have to declare race winners in tight finishes. Do you have a good way of explaining this concept to a sailor who completely understands a half-boat-length loss in his ILCA but then sees a loss of 30 or more boat lengths in the next race?
 

Prism

New member
46
25
[Attached image: FEFBB26B-2A60-49A4-90D2-D52985CF5C87.png]

Sorry, the formatting isn't great, but this is how it works in essence. If the standard deviation were set to zero, these would just be spikes.
 