Check this post for the beginning of a comparison of the different ways to match to a pair.
I wanted to see what the differences were between the A–>B and B–>A types of matching.
1) A is defined as the line of sight with the BIGGER EW, which means that A is always bigger than B, and therefore the r_W value is always positive for A–>B and always negative for B–>A.
2) Typically, since A is a large EW, it has a more precise measurement (i.e., a small uncertainty), though this is not always the case.
On the fractional values calculated
In prep for the KS testing, I measured the fractional probability for each distribution of simulated EWs compared to the observed EW we were trying to recreate. To do this: build the distribution of simulated EWs, mark the point where the observed value falls, and count how many simulated values are BELOW this point.
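The counting step above amounts to evaluating an empirical CDF at the observed value. A minimal sketch (the function name and the toy Gaussian draw are my own, not from the pipeline):

```python
import numpy as np

def fractional_probability(simulated_ews, observed_ew):
    """Fraction of simulated EW values that fall below the observed EW.

    This is the empirical CDF of the simulated distribution evaluated
    at the observed value: 0 means every sim is above the observation,
    1 means every sim is below it.
    """
    simulated_ews = np.asarray(simulated_ews)
    return np.sum(simulated_ews < observed_ew) / simulated_ews.size

# Toy example only -- a Gaussian stand-in for the simulated EW distribution.
rng = np.random.default_rng(0)
sims = rng.normal(loc=1.0, scale=0.1, size=10_000)
frac = fractional_probability(sims, observed_ew=0.9)
```

An observed value in the lower tail of the simulated distribution gives a small fractional probability, which is exactly the signature discussed below for the A–>B direction.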
The fractional probabilities for A–>B are always LOWER than those for B–>A (equivalently, the B–>A values are always HIGHER than the A–>B values).
So: if you are usually getting a fractional probability that is lower, the program is simulating EWs that are mostly higher than the observed value; if you are getting fractional probabilities that are higher on average, you are simulating EWs that are mostly lower than the observed value.
AgetB case: the program has made a match to A (the larger EW) and now needs to generate a batch of simulated EW_Bs. Since the observed B is, on average, lower than the observed A, the simulated values are more likely to be larger than the observed B. Vice versa when matching to B first.
A–>B (AgetB) results:
0.502000 0.345000 0.0690000 0.251111 0.290000 0.335556 —-positive is 2x
0.972000 0.586000 0.105000 0.197778 0.247778 0.296667 —-positive is 2.5x
1.51000 1.45600 0.244000 0.441111 0.520000 0.580000 —-positive same
2.00000 1.35000 0.400000 0.0255556 0.171111 0.432222 —-positive is 5x
1.07000 0.767000 0.0900000 0.231111 0.278889 0.346667 —-positive is 3x
If the vast majority of the EWs being generated for AgetB are LARGER than the observed EW_B, and the vast majority of the EWs being generated for BgetA are SMALLER than the observed EW_A… what does that mean?
- Uncertainty issue? Most of the time the observed A has a much more precise measurement (don't think so).
- Simply an artifact of the fact that A is BIGGER, and thus more likely to find simulated values smaller than it, whereas B is SMALLER and therefore more likely to find simulated values larger than it.
B–>A (BgetA) results:
0.345000 0.502000 0.0110000 0.551111 0.555556 0.561111
0.586000 0.972000 0.0510000 0.615556 0.633333 0.660000
1.45600 1.51000 0.0680000 0.481111 0.503333 0.526667
1.35000 2.00000 0.400000 0.735556 0.906667 0.974444
0.767000 1.07000 0.190000 0.544444 0.713333 0.813333
By virtue of the nomenclature (i.e., where EW_A > EW_B), the reasoning for the different probabilities for A–>B and B–>A makes sense. Since A is always the larger EW, the B value we generate simulated EWs against will be slightly lower, so on average we find smaller fractional probabilities. The opposite holds as well: since A is larger, when we generate a distribution of EWs for it, we will, on average, find larger fractional probabilities. This is consistent with the model, so you cannot use one side without the other; that would bias the results.
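This asymmetry can be illustrated with a toy Monte Carlo. Everything here is a hypothetical stand-in (the EW values, widths, and the assumption that simulated EWs scatter around the matched line's EW are mine, not the actual program's recipe); the point is only that comparing against the smaller member of the pair pushes the fractional probability down, and vice versa:

```python
import numpy as np

rng = np.random.default_rng(42)

def fractional_probability(sims, observed):
    """Fraction of simulated values below the observed value."""
    return np.mean(np.asarray(sims) < observed)

# Hypothetical observed pair, with EW_A > EW_B by definition.
ew_a, ew_b = 1.0, 0.6

# AgetB: matched to A, so simulated EW_Bs scatter near the larger value
# (toy assumption) and mostly land ABOVE the observed B -> low fraction.
sims_for_b = rng.normal(loc=0.9, scale=0.2, size=10_000)
frac_a_to_b = fractional_probability(sims_for_b, ew_b)

# BgetA: matched to B, so simulated EW_As scatter near the smaller value
# (toy assumption) and mostly land BELOW the observed A -> high fraction.
sims_for_a = rng.normal(loc=0.66, scale=0.2, size=10_000)
frac_b_to_a = fractional_probability(sims_for_a, ew_a)

print(frac_a_to_b, frac_b_to_a)
```

Under these toy assumptions the A–>B fraction comes out well below 0.5 and the B–>A fraction well above it, matching the pattern in the tables above.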
In the cases where we ONLY have the A measurement, and B is a sensitive upper limit, we don't worry, because we CANNOT match to B due to built-in constraints.