Today I practiced insight. Whenever I got a glimpse of a mental image I noted everything I could about it and tried to lengthen its appearance. I don't know if that's a good idea or not, but I would like to see whether paying attention to the little imagery I have might help it arise more often.
Frisbee today was great: we learned to play zone, and we really looked coordinated as a team out on the field. Afterwards we had dinner, and while most of the conversation was about TV shows :S, we did talk a bit about this one dude's upcoming 3-minute talk on perspective changes, which reminded me of http://lesswrong.com/lw/1o6/adaptive_bias/ so I sent him a link.
I tried calibration exercises with 25%/75%, and 25% is very hard. I think I'm actually landing closer to 10%, but it's hard to tell without doing a lot of questions. John reminded me that I had given short shrift to Data Analysis: A Bayesian Tutorial half a year ago, and since it's much less of an energy commitment than Bayesian Data Analysis I'm going to go back to it.
My next math project is to characterize when the following approximation works, when it doesn't, and by how much it fails:
You want to estimate P(A|e1e2e3...), so you get estimates of lnO(A), lnL(e1|A), lnL(e2|A), etc. But e1, e2, e3, ... aren't independent. So discount them with simple, intuitive arithmetic: treat each one as a bundle of independent data points, some of which are shared with another grouping of evidence (or screened off by it), then add the discounted results to lnO(A) to get your posterior estimate.
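The discounting scheme above could be sketched like this. The `overlap` fraction (how much of each piece of evidence is shared with the rest of the pool) is a knob I've invented for illustration; the post doesn't commit to one parameterization:

```python
import math

def discounted_log_odds(prior_log_odds, log_lrs, overlap):
    """Combine correlated log likelihood ratios by discounting.

    Each ln L(e_i|A) is treated as if a fraction `overlap` of its
    information is shared with the other pieces of evidence. The
    unique parts add in full; the shared part is counted once (as
    the average across pieces) rather than once per piece.
    """
    unique = sum((1 - overlap) * llr for llr in log_lrs)
    shared = overlap * sum(log_lrs) / len(log_lrs)
    return prior_log_odds + unique + shared

def posterior_prob(log_odds):
    """Convert ln odds back into a probability."""
    return 1 / (1 + math.exp(-log_odds))
```

With overlap = 0 this reduces to the usual independent-evidence sum; with overlap = 0.5 and two pieces of evidence it gives each lnL a weight of 0.75, matching the dice formula below.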
Example: I have either a d6, d8, or d10, each with probability 1/3. I roll my die and a d8 and report the sum X. I reroll the d8 (keeping my die's roll) and report the new sum Y. What's P(d6|XY)? Answer: lnO(d6|XY) ~= lnO(d6) + (1-0.5)*lnL(X|d6) + (1-0.5)*lnL(Y|d6) + 0.5*(0.5*lnL(X|d6)+0.5*lnL(Y|d6)). And if that just totally doesn't work, are there simple heuristics for combining correlated lnLs that usually work?
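The dice example is small enough to check exactly. Here's a sketch that computes the true posterior P(d6|X,Y) by brute-force enumeration (the correlation comes from the mystery die's roll being fixed across both sums) and compares it with the discounted formula above. The observed sums (9, 12) are an arbitrary choice of mine, not from anything specific:

```python
from fractions import Fraction
import math

DICE = [6, 8, 10]   # the three candidate dice, prior 1/3 each
X, Y = 9, 12        # arbitrary example observations (my choice)

def sum_dist(n):
    """P(roll of dN + roll of d8 = s), marginal over both rolls."""
    dist = {}
    for r in range(1, n + 1):
        for q in range(1, 9):
            dist[r + q] = dist.get(r + q, Fraction(0)) + Fraction(1, 8 * n)
    return dist

def joint_dist(n):
    """P(X=x, Y=y | dN): the dN roll is fixed, only the d8 is rerolled."""
    dist = {}
    for r in range(1, n + 1):
        for q1 in range(1, 9):
            for q2 in range(1, 9):
                k = (r + q1, r + q2)
                dist[k] = dist.get(k, Fraction(0)) + Fraction(1, 64 * n)
    return dist

# Exact posterior P(d6 | X, Y); the uniform 1/3 prior cancels.
joint = {n: joint_dist(n) for n in DICE}
num = joint[6].get((X, Y), Fraction(0))
exact = num / sum(j.get((X, Y), Fraction(0)) for j in joint.values())

# Heuristic from the formula above: lnO(d6) + 0.75*(lnL(X|d6) + lnL(Y|d6)).
marg = {n: sum_dist(n) for n in DICE}
def llr(s):
    """ln likelihood ratio of sum s: d6 vs. the average of d8 and d10."""
    return math.log(marg[6][s] / ((marg[8][s] + marg[10][s]) / 2))

log_odds = math.log(Fraction(1, 2)) + 0.75 * (llr(X) + llr(Y))
approx = 1 / (1 + math.exp(-log_odds))

print(f"exact P(d6|X,Y) = {float(exact):.4f}, heuristic = {approx:.4f}")
```

Using Fractions keeps the exact computation exact; for these particular observations the two answers come out close, which is at least weak evidence the discount heuristic isn't crazy, though a real characterization would sweep over all (X, Y) pairs.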