

Appendix A

We adapted the win-stay-lose-shift (WSLS) model from Worthy & Maddox (2014). The probabilities $p(\mathrm{stay}\mid\mathrm{win})$ and $p(\mathrm{shift}\mid\mathrm{loss})$ are defined as:

$$p(\mathrm{stay}\mid\mathrm{win}) = p\big(a_{t+1} = a \mid \mathrm{choice}_t = a \text{ and } r(t) \ge r(t-1)\big), \tag{1}$$

$$p(\mathrm{shift}\mid\mathrm{loss}) = p\big(a_{t+1} \ne a \mid \mathrm{choice}_t = a \text{ and } r(t) < r(t-1)\big). \tag{2}$$

On every trial, they are updated:

$$p(\mathrm{stay}\mid\mathrm{win})_{t+1} = p(\mathrm{stay}\mid\mathrm{win})_t + \theta_{p(\mathrm{stay}\mid\mathrm{win})} \times \big(p(\mathrm{stay}\mid\mathrm{win})_{\mathrm{final}} - p(\mathrm{stay}\mid\mathrm{win})_t\big), \tag{3}$$

$$p(\mathrm{shift}\mid\mathrm{loss})_{t+1} = p(\mathrm{shift}\mid\mathrm{loss})_t + \theta_{p(\mathrm{shift}\mid\mathrm{loss})} \times \big(p(\mathrm{shift}\mid\mathrm{loss})_{\mathrm{final}} - p(\mathrm{shift}\mid\mathrm{loss})_t\big). \tag{4}$$

$p(\mathrm{stay}\mid\mathrm{win})_{\mathrm{final}}$, $\theta_{p(\mathrm{stay}\mid\mathrm{win})}$, $p(\mathrm{shift}\mid\mathrm{loss})_{\mathrm{final}}$, and $\theta_{p(\mathrm{shift}\mid\mathrm{loss})}$ are free parameters.
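For concreteness, a minimal sketch of the trial-wise updates in Equations 3-4, assuming the probabilities are stored as plain floats; the variable names (p_stay_win, theta_stay_win, etc.) are illustrative and not taken from the fitting code.

```python
# Illustrative sketch of Eqs. 3-4: each probability moves toward its "final"
# value at a rate given by its own theta parameter. Names are hypothetical.

def update_wsls_probabilities(p_stay_win, p_shift_loss,
                              p_stay_win_final, p_shift_loss_final,
                              theta_stay_win, theta_shift_loss):
    p_stay_win += theta_stay_win * (p_stay_win_final - p_stay_win)
    p_shift_loss += theta_shift_loss * (p_shift_loss_final - p_shift_loss)
    return p_stay_win, p_shift_loss
```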

At the same time, the Q value of each state-action pair is updated with reinforcement learning:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha\big(r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t)\big). \tag{5}$$

The win-stay-lose-shift and reinforcement-learning assumptions are combined through:

$$V(s_t, a_t) \leftarrow K_{\mathrm{wsls}} \times p(\mathrm{stay}\mid\mathrm{win})_t + (1 - K_{\mathrm{wsls}}) \times Q(s_t, a_t) \quad \text{if } r(t) \ge r(t-1), \tag{6}$$

$$V(s_t, a_t) \leftarrow K_{\mathrm{wsls}} \times p(\mathrm{shift}\mid\mathrm{loss})_t + (1 - K_{\mathrm{wsls}}) \times Q(s_t, a_t) \quad \text{if } r(t) < r(t-1). \tag{7}$$

Finally, we assume softmax action selection:

$$P(s_t, a_t) = \frac{\exp\big(\beta \cdot V(s_t, a_t)\big)}{\sum_{a'=0}^{3} \exp\big(\beta \cdot V(s_t, a')\big)}. \tag{8}$$
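For concreteness, a minimal sketch of one trial of the combined WSLS-Q rule (Eqs. 5-8), assuming tabular Q and V arrays and four actions per state as in Eq. 8; all names are illustrative rather than taken from the fitting code.

```python
import numpy as np

def wsls_q_trial(q, v, s, a, r, s_next, r_prev,
                 p_stay_win, p_shift_loss, k_wsls, alpha, gamma, beta):
    """One trial of the hybrid WSLS-Q sketch; q and v are (n_states, n_actions) arrays."""
    # Eq. 5: Q-learning update for the visited state-action pair.
    q[s, a] += alpha * (r + gamma * np.max(q[s_next]) - q[s, a])

    # Eqs. 6-7: blend the WSLS probability with the Q value, depending on win/loss.
    wsls_term = p_stay_win if r >= r_prev else p_shift_loss
    v[s, a] = k_wsls * wsls_term + (1 - k_wsls) * q[s, a]

    # Eq. 8: softmax over the values of the four actions available in state s
    # (subtracting the max only for numerical stability).
    logits = beta * v[s]
    choice_probs = np.exp(logits - logits.max())
    choice_probs /= choice_probs.sum()
    return q, v, choice_probs
```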

This model has 10 free parameters and yielded worse BIC scores than our FPEQ model (see Table 1).


Table S1. WSLS Model Parameters

| WSLS-Q Parameter | Young Mean (SD) | Elderly Mean (SD) |
|---|---|---|
| lr | 0.08 (.06) | 0.04 (.05) |
| PStayWin_initial | 0.50 (.02) | 0.50 (.01) |
| PStayWin_final | 0.41 (.40) | 0.50 (.39) |
| PShiftLoss_initial | 0.46 (.47) | 0.37 (.48) |
| PShiftLoss_final | 0.68 (.33) | 0.61 (.35) |
| lr_StayWin | 0.64 (.42) | 0.76 (.35) |
| lr_ShiftLoss | 0.74 (.30) | 0.81 (.25) |
| Decay | 0.65 (.46) | 0.56 (.49) |
| Kwsls | 0.11 (.08) | 0.12 (.08) |
| Exploration | 13.21 (5.76) | 12.12 (6.35) |

The model-based Q model

We also implemented a model-based Q learner, which uses experience with state transitions to estimate the probabilities $T(s, a, s')$ of transitioning from state $s$ to state $s'$ after taking action $a$:

𝑇(𝑠, π‘Ž, 𝑠

′ ) ← 𝑇(𝑠, π‘Ž, 𝑠 ′ ) + 𝛼

1

(𝛿 𝑠,𝑠

− 𝑇(𝑠, π‘Ž, 𝑠′)).

(9) 𝛿 𝑠,𝑠

∈ {0,1} is a binary indicator that 𝛿 𝑠,𝑠

= 1 for the observed transition and 𝛿 𝑠,𝑠

= 0 for all the states not arrived.
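A minimal sketch of the transition-model update in Eq. 9, assuming $T$ is stored as an array indexed by (state, action, next state); the names are illustrative.

```python
import numpy as np

def update_transition_model(T, s, a, s_next, alpha1):
    """Eq. 9: move T(s, a, .) toward a one-hot vector for the observed next state."""
    delta = np.zeros(T.shape[2])  # delta_{s,s'}: 1 for the observed transition, 0 otherwise
    delta[s_next] = 1.0
    T[s, a] += alpha1 * (delta - T[s, a])
    return T
```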

The reward at each state is estimated as:

$$R(s, a) \leftarrow R(s, a) + \alpha_2\big(r - R(s, a)\big). \tag{10}$$

Then, the value function can be calculated from the transition and reward functions:

𝑄(𝑠, π‘Ž) ← 𝑅(𝑠, π‘Ž) + 𝛾 ∑ 𝑇(𝑠, π‘Ž, 𝑠′) max π‘Ž′

𝑄(𝑠

, π‘Ž′).

𝑠′

(11)
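A companion sketch for Eqs. 10-11, again with illustrative names: the reward estimate is a running average, and $Q(s, a)$ is recomputed by a one-step Bellman backup through the learned transition model.

```python
import numpy as np

def update_reward_and_value(R, Q, T, s, a, r, alpha2, gamma):
    # Eq. 10: running estimate of the immediate reward for (s, a).
    R[s, a] += alpha2 * (r - R[s, a])

    # Eq. 11: Q(s, a) = R(s, a) + gamma * sum_{s'} T(s, a, s') * max_{a'} Q(s', a').
    Q[s, a] = R[s, a] + gamma * np.sum(T[s, a] * Q.max(axis=1))
    return R, Q
```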

Having learned the value function $Q(s, a)$, an action can be selected at state $s$ according to the values of the actions available in that state. Here we use a softmax distribution:

𝑃(𝑠, π‘Ž) = exp (𝛽 ⋅ 𝑄(𝑠, π‘Ž))

∑ 3 π‘Ž′=0 exp (𝛽 ⋅ 𝑄(𝑠, π‘Ž ′ ))

.

(12)


There are four free parameters in this model: the learning rate for estimating the transition model, $\alpha_1$; the learning rate for estimating the reward function, $\alpha_2$; the discount factor in the value-function update, $\gamma$; and the exploration parameter, $\beta$. The model-based Q model yielded worse BIC scores than our FPEQ model (see Table 1).

Table S2. Model-based Q Parameters

| Model-Based-Q Parameter | Young Mean (SD) | Elderly Mean (SD) |
|---|---|---|
| lr_R | 0.25 (.27) | 0.09 (.07) |
| lr_S | 0.32 (.34) | 0.33 (.28) |
| decay | 0.85 (.26) | 0.76 (.37) |
| exploration | 7.07 (6.22) | 3.22 (3.69) |

Table S3. Correlations between measures of intelligence and FPEQ parameters in the elderly group

|  | TD Learn Rate | FPE+ | FPE- | Decay | Exploitation | QSA | Abs.TD |
|---|---|---|---|---|---|---|---|
| LPS3 | -.23 | .02 | -.13 | 0 | .05 | -.24 | .35 |
| LPS4 | -.01 | -.08 | .22 | 0 | -.35 | .17 | .001 |

All p’s > .2.


Appendix B

FPEQ generative model.

To examine the effect of the empirically observed model parameters on choice behavior, we simulated the FPEQ model with parameters fixed to either the young or elderly group’s best fitting parameters. The simulation gives a series of states in exactly the same format as the participants’ data.
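For concreteness, a minimal sketch of the kind of simulation loop described above; `fpeq_choice_probs` and `environment_step` are hypothetical stand-ins for the FPEQ choice rule and the task's transition/reward structure, neither of which is specified in this appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_agent(params, n_trials, fpeq_choice_probs, environment_step, start_state=0):
    """Generate trial-by-trial (trial, state, action, reward) records in the data's format."""
    s = start_state
    history = []
    for t in range(n_trials):
        probs = fpeq_choice_probs(s, params)   # hypothetical FPEQ choice probabilities
        a = rng.choice(len(probs), p=probs)    # sample an action
        s_next, r = environment_step(s, a)     # hypothetical task dynamics
        history.append((t, s, a, r))
        s = s_next
    return history
```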

The figure below shows a plot of the fraction of choosing each path for the observed (left) and simulated (right) data for the young (top) and elderly (bottom) age groups. Paths 1-4 end in states 4-7, respectively, so that path 1 is the most lucrative path and path 4 is the least lucrative path. It shows that the young group increasingly chose the most lucrative path 1 (blue line), whereas the elderly group increasingly chose the least lucrative path 4 (purple line). Model simulations yoked to the young and elderly groups’ best fitting parameters, respectively, reproduced their respective preferences for paths 1 and 4.

Figure S1.

