According to the standard account, strategies respond to various possible circumstances. I conceive them rather as responding to various possible future decision situations (including all internal factors). This is far more general, since changing decision situations may arise in a foreseeable way not only from new information, as in the standard account, but also from forgetfulness, endogenous changes of preferences, etc.
The main problem, then, is to state an optimality criterion for such strategies. This is a problem because maximization of expected utility becomes either unreasonable or outright meaningless (maximization of which utility?). The problem is serious, as the widely divergent literature on the issue shows. I propose a general solution by essentially referring, in a specific way, to a relation of superiority/inferiority between possible decision situations (which is part of the agent's view). This is the first part of the paper.
The second part suggests how this framework provides new theoretical means for dealing with the iterated prisoner's dilemma, thus allowing for a full rationalization of cooperation. The ideas extend to Newcomb's problem and related cases and are hence relevant to the debate between causal and evidential decision theory.