Show simple item record

dc.contributor.author: IANNI, Antonella
dc.date.accessioned: 2007-10-13T13:13:30Z
dc.date.available: 2007-10-13T13:13:30Z
dc.date.issued: 2007
dc.identifier.issn: 1725-6704
dc.identifier.uri: https://hdl.handle.net/1814/7155
dc.description.abstract: This paper studies the analytical properties of the reinforcement learning model proposed in Erev and Roth (1998), also termed cumulative reinforcement learning in Laslier et al. (2001). The stochastic model of learning accounts for two main elements: the Law of Effect (positive reinforcement of actions that perform well) and the Power Law of Practice (learning curves tend to be steeper initially). The paper establishes a relation between the learning process and the underlying deterministic replicator equation. The main results show that, if the solution trajectories of the latter converge sufficiently fast, then the probability that all the realizations of the learning process over a given spell of time, possibly infinite, remain close to the corresponding deterministic trajectory becomes arbitrarily close to one, from some time on. In particular, the paper shows that the property of fast convergence is always satisfied in proximity of a strict Nash equilibrium. The results also provide an explicit estimate of the approximation error that could prove useful in empirical analysis.
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: European University Institute
dc.relation.ispartofseries: EUI ECO
dc.relation.ispartofseries: 2007/21
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: C72
dc.subject: C92
dc.subject: D83
dc.title: Learning Strict Nash Equilibria through Reinforcement
dc.type: Working Paper
dc.neeo.contributor: IANNI|Antonella|aut|
eui.subscribe.skip: true
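
The abstract above describes the cumulative reinforcement learning rule of Erev and Roth (1998): each action carries a propensity, actions are chosen with probability proportional to their propensities, and the chosen action's propensity is incremented by the realized payoff. The following Python sketch is an illustrative assumption, not code from the paper; the 2x2 payoff matrix is hypothetical, chosen so that both diagonal profiles are strict Nash equilibria.

```python
# Minimal sketch of cumulative reinforcement learning (Erev and Roth, 1998):
# propensities grow by realized payoffs (Law of Effect), and because the total
# propensity keeps growing, each new payoff shifts the choice probabilities
# less and less (Power Law of Practice).
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric 2x2 game, indexed [row action, column action].
# Both (0, 0) and (1, 1) are strict Nash equilibria.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # row player's payoffs (positive, as the model requires)
B = A.T                      # column player's payoffs, same indexing

def choose(propensities):
    """Sample an action with probability proportional to its propensity."""
    p = propensities / propensities.sum()
    return rng.choice(len(propensities), p=p)

# Initial propensities: any positive values; their scale sets the initial inertia.
q_row = np.ones(2)
q_col = np.ones(2)

for t in range(20000):
    i = choose(q_row)
    j = choose(q_col)
    # Law of Effect: reinforce only the actions actually played, by their payoffs.
    q_row[i] += A[i, j]
    q_col[j] += B[i, j]

print("row choice probabilities:", q_row / q_row.sum())
print("col choice probabilities:", q_col / q_col.sum())
```

In runs of this kind, play typically concentrates on one of the two strict Nash equilibria, which is the behaviour the abstract's convergence result formalizes via the underlying replicator equation.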

