This is a book by Eliezer Yudkowsky on rationality. I would strongly recommend it to anyone interested in the subject, and below I have put a few notes on the bits I found particularly interesting.
scope insensitivity: what would you pay to save 2,000/20,000/200,000 birds? one idea is that you have a prototype bird in mind, and the emotional arousal you feel for that one bird decides what you pay, so even with an exponential increase in scope your willingness to pay only increases linearly (you are valuing the prototype). another idea is that you purchase moral satisfaction: you spend enough to make yourself feel good, and that amount need not be correlated with the number of birds. I don’t like this question as there are so many caveats that would affect an answer that it seems almost meaningless. the phenomenon was also found with sensitivity to human life, and we get a situation similar to Weber’s law, where the ‘just noticeable difference’ is a constant proportion of the whole.
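to make the Weber-law point concrete, here is a toy sketch (the $80 anchor and the log form are my own illustrative assumptions, not figures from the study): if felt concern scales roughly with the logarithm of scope, willingness to pay rises far more slowly than the scope does.

```python
import math

# Toy model of scope insensitivity: felt concern scales with log(scope), a
# Weber-law-like response. The $80 anchor and the log form are illustrative
# assumptions, not data from the study.
def willingness_to_pay(n_birds, anchor=80, reference=2_000):
    return anchor * (1 + math.log10(n_birds / reference))

for n in (2_000, 20_000, 200_000):
    print(f"{n:>7} birds -> ~${willingness_to_pay(n):.0f}")
# 2,000 -> $80, 20,000 -> $160, 200,000 -> $240: nowhere near the 10x and 100x
# increases that a linear valuation of individual birds would demand.
```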
positive bias: the triplet 2-4-6 fits a rule; you can guess other triplets, I will tell you whether each fits my rule, and you can state the rule at any time. the rule was ‘ascending numbers’, but only 21% guessed it successfully, with most guessing ‘goes up in 2s’. most tried to generate positive rather than negative examples, i.e. triplets they expected to fit rather than triplets that would falsify their guess (see the sketch below).
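a minimal sketch of the 2-4-6 game (the guesses below are my own, chosen for illustration): a hypothesis like ‘goes up in 2s’ is never falsified if you only test triplets it predicts will fit; only a test it predicts should fail can expose it.

```python
# The hidden rule accepts any strictly ascending triplet.
def hidden_rule(a, b, c):
    return a < b < c

# A hypothesis like "goes up in 2s" survives if you only test triplets that it
# predicts will fit (positive examples)...
positive_tests = [(2, 4, 6), (10, 12, 14), (100, 102, 104)]
print([hidden_rule(*t) for t in positive_tests])   # [True, True, True] -> never falsified

# ...but a test your hypothesis says should FAIL is what exposes it.
negative_tests = [(1, 2, 3), (5, 10, 200)]
print([hidden_rule(*t) for t in negative_tests])   # [True, True] -> "goes up in 2s" is wrong
```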
conjunction fallacy: in people’s minds the implausibility of one claim is compensated for by the plausibility of another, e.g. ‘Russia invades Poland following a suspension of diplomatic relations between the USA and the USSR’ versus just ‘Russia invades Poland’. the second has to be more likely, but the narrative of the first seems more convincing. to train against this you must be very sensitive to the use of the word ‘and’.
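the arithmetic behind the fallacy, with made-up numbers purely for illustration:

```python
# P(A and B) = P(A) * P(B|A) <= P(A): the conjunction can never be more
# probable than either conjunct. Both probabilities below are invented.
p_invasion = 0.05                  # P(Russia invades Poland)
p_suspension_given_invasion = 0.4  # P(diplomatic relations suspended | invasion)

p_conjunction = p_invasion * p_suspension_given_invasion
print(p_conjunction)   # 0.02 < 0.05: the more detailed story is strictly less likely,
                       # however much more convincing the added detail makes it feel.
```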
planning fallacy: when students were asked when they would finish their essays, only 45% had finished by the deadline they had given with 99% confidence, which is mad. the same thing happens when people are asked for a ‘realistic’ scenario for a project: most still describe everything going exactly as planned. one way to help is to take the outside view, i.e. ask how long broadly similar projects actually took.
Seeing if explanations are useful: a few ways to check whether an explanation actually tells you anything.
- if someone’s argument can accommodate any event happening then it is not useful, e.g. phlogiston was thought to be what fire was, but it was used to explain any occurrence, and if a theory can explain all occurrences it explains none of them. to be able to explain every possible observation is to explain nothing.
- the second method is to invert a message to see if it contains substance. if the reverse sounds very abnormal, that is likely because the unreversed version is very normal and so carries little new information. try inverting any statement in the following and you will see none of it is useful: “I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realise the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals.”
- thirdly, look for semantic stopsigns: answers that are given to a question but explain very little, and yet people often won’t probe further. for example, what is wrong with a foreign government? well, they don’t have democratic leadership. but why would that solve their issues, and how has it performed in the past? in his own work Eliezer once had someone who wanted to be a research assistant, so he asked him ‘how may an AI discover how to solve a Rubik’s cube?’. the applicant answered using the word ‘complexity’, and Eliezer told him not to use it as it doesn’t explain anything; the applicant replied ‘oh, like “emergence”, so now I have to think about how the phenomenon would actually happen’.
suspending judgement: many people talk about ethics as if all judgements were as plausible as one another. however, taking the side of neutrality is just as attackable a position as taking a particular side, because it means that, having seen what is in front of you, you judge the two sides to be equal. it is also not the case that taking a position necessarily means losing objectivity; judges, for example, take sides without losing their reputation for impartiality. Paulo Freire’s quote on suspending judgement: ‘washing one’s hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral’.
Illusion of transparency in communication: in Keysar’s “The Illusory Transparency of Intention”, subjects were told that Mark visited a restaurant and sent his friend June a message saying the food was “marvellous, just marvellous”, after having had either good or bad food. of those who were told Mark had bad food, 59% thought June would perceive sarcasm; in the group told he had good food, only 3% thought she would. subjects also thought that if Mark found the food bad but wanted to conceal his sarcasm, then June wouldn’t perceive it.
How much evidence do you need: if there is a lottery where the odds of winning are 1:100,000,000 and you have a device whose strength at picking the winning numbers is a likelihood ratio of 1,000,000:1, you would still expect to be wrong 99% of the time; even if its accuracy is 100,000,000:1, your chance of winning with a single ticket is only 50/50. from a Bayesian perspective you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate it, so given the huge number of possible theories, merely being able to find one means you likely already have decent evidence for it. Einstein probably had overwhelmingly more evidence than a Bayesian would require to assign very high confidence to general relativity, since without that much evidence he would likely not have been able to come up with the theory at all. the more complex an explanation is, the more evidence you need just to find it in belief-space. in probability, a mathematical bit is the logarithm base 1/2 of a probability, e.g. something with probability 1/8 carries 3 bits of information since (1/2)^3 = 1/8.
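a quick sketch of the lottery arithmetic in odds form (posterior odds = prior odds × likelihood ratio), using the numbers from the note:

```python
import math

# Lottery: 1 : 100,000,000 prior odds that your ticket wins.
prior_odds = 1 / 100_000_000

posterior_odds = prior_odds * 1_000_000       # device with a 1,000,000:1 likelihood ratio
print(posterior_odds / (1 + posterior_odds))  # ~0.0099 -> still wrong about 99% of the time

posterior_odds = prior_odds * 100_000_000     # device with a 100,000,000:1 likelihood ratio
print(posterior_odds / (1 + posterior_odds))  # 0.5 -> now it's a coin flip

# Bits of evidence: something with probability 1/8 carries log2(8) = 3 bits.
print(-math.log2(1 / 8))                      # 3.0
```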
How to have/update beliefs: a rational belief is only really worthwhile if you could be persuaded to believe otherwise. in probability theory, absence of evidence is evidence of absence, because you use that absence to update your beliefs towards the view that the thing is absent. on average you must expect to be exactly as confident after an experiment as when you started: if you are very confident in a theory then confirming evidence is likely but barely shifts your confidence, while disconfirming evidence is unlikely but would shift your confidence down a lot (see the sketch below). therefore you can only seek evidence to test a theory, not to confirm it. if a hypothesis does not today have a favourable likelihood ratio over ‘I don’t know’, it raises the question of why you today believe anything more than ‘I don’t know’. the strength of a hypothesis is what it can’t explain, not what it can. in his own work, when Eliezer finds a gap where he isn’t fully sure about cause and effect, he uses the word ‘magic’ to highlight that he is unsure of what is happening. the wise would extrapolate from the memory of small hazards to the possibility of large ones.
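a small sketch of that conservation-of-expected-evidence point, with made-up numbers: the expected posterior always equals the prior, so how probable confirming evidence is exactly balances how little it moves you.

```python
# All probabilities below are invented for illustration.
p_h = 0.9             # prior P(hypothesis)
p_e_given_h = 0.8     # P(seeing the evidence | hypothesis true)
p_e_given_not_h = 0.3 # P(seeing the evidence | hypothesis false)

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
post_if_e = p_e_given_h * p_h / p_e                  # confirming: likely, small upward shift
post_if_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)  # disconfirming: unlikely, big downward shift

expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(post_if_e, post_if_not_e, expected_posterior)
# prints ~0.96, ~0.72, and ~0.9 -- the expected posterior is just the prior again.
```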
How not to have/update beliefs: during WWII, California governor Earl Warren testified that because there had been no attack by Japanese-American citizens on American soil, he was even more convinced one was coming, as they were lulling the USA into a false sense of security. this cannot be right: an absence of sabotage is still highly likely in the absence of any fifth column, so observing it should lower, not raise, his confidence that one exists. his words were, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbour was timed . . . I believe we are just being lulled into a false sense of security”. we can also let our beliefs cloud us from how the other side views themselves: to even insinuate that terrorists could be heroic self-sacrificers in anyone’s eyes borders on heresy, even though that is surely how they view themselves, and acknowledging this can be useful in combating them.
Hindsight bias: The historian Arthur Schlesinger, Jr. dismissed scientific studies of World War II soldiers’ experiences as “ponderous demonstrations” of common sense. Taking examples from Lazarsfeld, “The American Soldier—An Expository Review”:
- Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.)
- Southern soldiers coped better with the hot South Sea Island climate than Northern soldiers. (Southerners are more accustomed to hot weather.)
- White privates were more eager to be promoted to noncommissioned officers than Black privates. (Years of oppression take a toll on achievement motivation.)
- Southern Blacks preferred Southern to Northern White officers. (Southern officers were more experienced and skilled in interacting with Blacks.)
- As long as the fighting continued, soldiers were more eager to return home than after the war ended. (During the fighting, soldiers knew they were in mortal danger.)
the answers to these questions were actually the opposite of what was found. How many times did you think your model took a hit? How many times did you admit you would have been wrong? Unless, of course, I reversed the results again. What do you think? the same thing happens elsewhere: people rate the supposed findings of a scientific study as more predictable after being told the results, even when the results they were told were wrong. one large issue with this is that it can lead people to think they have no need for science, since they ‘could have predicted that’. Just as you would expect, right? that last sentence shows how even while reading about hindsight bias you can fall foul of it. hindsight bias is a strong reason to get scientists to write down their predictions before an experiment. it also means that how well people can ‘predict’ things that have already happened is of limited use to you, as they will be very overconfident about how well they would have done.
Inferential distance, or why we used to understand each other: in the ancestral environment you were unlikely to be more than one inferential step away from anyone else; everyone knew the same ideas, e.g. what an oasis was, and the only thing you could know that they didn’t was private information, e.g. where a new oasis was, but they would always know what you were talking about. in the modern day this is not the case, and when explaining something you may forget that your listener is many steps away from you. if a biologist explains evolution and the person still rejects it after being told it is the simplest explanation (a point heavily in favour of evolution), the scientist concludes the non-scientist must be an idiot, forgetting that at one point the argument from simplicity wouldn’t have struck them as convincing either. this reinforces how important establishing prior information is, and that when talking to others you must build arguments from previously established points.
What to do in a random environment: when told that in a deck of cards 70% were blue and 30% red, many subjects guessed blue 70% of the time and red 30% of the time, rather than guessing blue every time as they should. it seems counterintuitive that, given incomplete information, the optimal strategy does not resemble a typical sequence of cards. rationality works well with rational people, but even when faced with an irrational opponent, throwing away reason is not going to help.
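the expected-accuracy arithmetic, assuming independent draws (a toy calculation, not from the book’s text):

```python
p_blue = 0.7  # 70% of the deck is blue

always_blue = p_blue                                       # guess blue every time
matching = p_blue * p_blue + (1 - p_blue) * (1 - p_blue)   # guess blue 70% of the time, red 30%

print(always_blue)  # 0.70
print(matching)     # ~0.58 -> probability matching loses to the "boring" constant guess
```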
mysterious substances: at one point Lord Kelvin believed that figuring out why living matter moves was not just a little beyond science but infinitely beyond it, and so a substance, the élan vital, was invoked as the reason all living matter moves, even though this gives no answers and isn’t testable. to worship a phenomenon because it seems wonderfully mysterious is to worship your own ignorance; vitalism and phlogiston both encapsulate the mystery in a substance, and since the substance could accommodate any observation it did not alleviate the mystery at all.
Science vs History: science is made up of generalisations that apply to many particular instances, which you can verify for yourself; it is the publicly reproducible knowledge of humankind. by this standard historical knowledge is not scientific, as it is not reproducible. when wondering what is possible, though, remember how century after century the world has changed in ways you would not guess and no one predicted.
cognition vs literature: when publications, e.g. Wired magazine, talk about AI and reference things like the Terminator movies, they aren’t speaking from within a model of cognition; they are speaking from a model of literature without realising the difference. this links to the logical fallacy of generalising from fictional evidence, treating fiction as some alternative history when it is not one. a story can never be a truly rational attempt at analysis: things that happen in movies are not implausible, they are unreal.
The obvious answer: a teacher invites a physics class to touch a metal plate where the end close to a radiator is cold and the end far away is hot, and invites suggestions as to why. the class guesses things like heat conduction, but that would equally well explain the side closest to the radiator being hottest, and if you are equally good at explaining any outcome you have zero knowledge. (he had turned the plate around)