Cardinal Contests
Arpita Ghosh∗
Patrick Hummel†
April 3, 2014
Abstract
We model and analyze cardinal contests, where a principal running a rank-order tournament
has access to an absolute measure of the quality of agents’ submissions in addition to their relative
rankings. A simple mechanism that compares each agent’s output quality against a threshold to
decide whether or not to award her the prize corresponding to her rank improves equilibrium effort
relative to the rank-order tournament with the same prizes. Our main result is that such simple
threshold mechanisms are also optimal amongst the set of all mixed cardinal-ordinal mechanisms
where the j th -ranked submission receives a fraction of the j th prize, where the fraction can be an
arbitrary non-decreasing function of the submission’s quality. Furthermore, the optimal threshold
mechanism uses exactly the same threshold for each rank.
JEL classifications: C72; D82
Keywords: Contests; Optimal contest design; Crowdsourcing; Game theory
∗ Cornell University, Ithaca, NY, USA. E-mail address: arpitaghosh.work@gmail.com
† Google Inc., Mountain View, CA, USA. E-mail address: phummel@google.com.
1 Introduction
Contests have a long history as a means for the procurement of innovations, with government-sponsored
contests for research and development dating back to at least 1714, when the British parliament ran a
contest with a 20,000-pound prize for a method for determining longitude at sea to within half a degree.
Contests provide an effective incentive structure for eliciting effort in settings where the quality of an
agent’s output as well as her effort input are unverifiable or difficult to measure, so that conventional
contracts based on input or output-dependent reward are infeasible—either by virtue of being too costly
for the sponsor to implement, or because they are not credible by virtue of being unenforceable due
to unverifiability of output (Che and Gale 2003; Taylor 1995). In several of these situations where a
principal cannot verifiably or cheaply measure the absolute quality of an agent’s output, making an
ordinal comparison, i.e., identifying a relative ranking of agents’ submissions, might nonetheless be
feasible, allowing the principal to commit to an enforceable contract that awards rank-based prizes to
some subset of entrants in a contest. This has consequently led to a large literature on the optimal
design of contests, structured as rank-order tournaments, as an incentive mechanism for effort in such
settings (e.g., Che and Gale 2003; Glazer and Hassin 1988; Moldovanu and Sela 2001; Taylor 1995; see
§1.1).
In contrast with more traditional settings, however, an increasing number of contests are intended to
procure innovations whose ‘goodness’ is evaluated via verifiable cardinal measurements. For instance,
the Netflix contest is a well-known example of innovation procurement in this category—the contest
solicited recommender systems that automatically make personalized recommendations of movies to
users based on their, as well as other users’, existing ratings of movies. The Netflix contest evaluated
submissions by measuring their ability to match users’ preferences on a test subset of its database
of users and their movie preferences, with the algorithm that obtained the highest score being the
winner. In a different context, contests for designing mobile apps for health or education[1] might score
submissions based on their performance on metrics of efficacy in randomized trials. As another example,
the website Kaggle hosts data mining contests, where the innovation being procured is a data-mining
algorithm, and submissions are typically evaluated on some test dataset and ranked according to the
score they obtain on that dataset. In short, there is now an increasingly large family of contests where
each submission in the contest can be assigned a meaningful numerical quality score that in some way
reflects its value to the principal.
[1] See, for one example, http://www.robinhood.org/prize
Suppose a principal running a contest also has access to such cardinal measurements qi of the quality
of agents’ submissions, in addition to the ordinal ranking of their outputs. Can using this cardinal
information to determine agents’ rewards improve incentives for effort? What effect does making
rewards contingent on absolute performance, in addition to relative ranking, have on participants’
incentives for effort, especially amongst risk-averse contestants[2]? And if indeed cardinal information
can help, what is the ‘optimal’ way to incorporate such cardinal information into a reward mechanism?
[2] For instance, decreasing prizes, and more subtly, increasing the number of contestants, typically both lead to decreases in equilibrium effort; arguably, making rewards depend on absolute performance could potentially cause a similar effect.
Our contributions. We model and analyze cardinal contests, where a principal can evaluate
the quality of each submission via a real-valued score in addition to observing the rank-ordering of
contestants’ outputs. Consider a principal whose objective is to maximize some arbitrary function of
the quality of the submissions, such that higher quality submissions lead to higher objective values
than lower quality submissions. We ask whether and how such a principal can improve her utility by
incorporating such cardinal information into agents’ rewards.
Specifically, we consider a rank-order tournament with prizes (A1 , A2 , . . . , An ) for ranks 1, . . . , n, in
a model where contestants are strategic and have a cost to effort. We ask whether the principal running
the tournament can improve equilibrium effort relative to (A1 , A2 , . . . , An ) by using an agent’s absolute
quality score qj in addition to her rank j to decide what portion of the prize Aj to actually award to
her, and if so, how the function gj (qj ) that determines the fraction of the prize Aj to be awarded to the
j th -ranked agent should be chosen among all nondecreasing functions gj with 0 ≤ gj (q) ≤ 1 to create
the optimal incentives for the agents.
We begin by first showing that cardinal information can always be used to improve the principal’s
utility regardless of whether the agents are risk-averse or risk-neutral as long as an agent’s cost of effort
is sufficiently convex: a simple threshold mechanism that compares each agent’s output quality against
a threshold and awards her the prize Aj corresponding to her rank j if and only if her output meets or
exceeds the threshold, can always improve equilibrium effort relative to (A1 , A2 , . . . , An ) (Theorem 3.2),
and thus increase the principal’s utility. Indeed, real-world contests where prizes are made contingent
on absolute performance seem to commonly use such threshold-based prize structures.[3]
[3] For example, one of the largest contests hosted on Kaggle awarded prizes to the top three entries, provided their scores were above a minimum baseline.
An immediate question that follows is whether there is a justification, beyond simplicity, to restrict
oneself to using absolute quality scores as coarsely as in a threshold mechanism, and more generally,
what function of absolute quality (such as fractional rewards that grow linearly or as a convex function
of output quality) creates the strongest incentives for effort. Our main result is that these simple threshold mechanisms are, in fact, optimal amongst all mechanisms induced by the functions gj (including
mechanisms which use a different function of quality to determine the prize for each possible rank)—we
show that for any given rank-order mechanism M(A1 , A2 , . . . , An ), the functions gj (q) that incentivize
the highest effort are precisely step functions that increase from 0 to 1 at a threshold score tj (Theorem
4.1). Additionally, if the noise density that stochastically maps an agent’s effort into her output quality
is single-peaked at 0, the optimal threshold mechanism also uses exactly the same threshold for each
rank, i.e., t∗j = t∗ for all j. We derive comparative statics for the optimal threshold and equilibrium
effort in such mechanisms in §5.1, and conclude by extending our results to models with endogenous
entry in §6.
We note that we ask the question of how to optimally modify a given rank-order mechanism (A1 , A2 , . . . , An ), rather than the question of choosing the overall optimal reward structure
(V1 (q1 , . . . , qn ), . . . , Vn (q1 , . . . , qn )) that incorporates all available cardinal information. There are a
number of reasons for this choice—a principal announcing a contest might choose, or be committed
to, a certain rank-based prize structure for practical reasons beyond equilibrium incentives for effort,
such as sponsorship constraints, publicity, simplicity, cost of precise evaluations of a full rank-order due
to scale, and so on. However, she might still want, and be able, to incorporate the absolute quality in
determining which, and how much, of the announced rank-based prizes to actually award (for instance,
she may wish to award no prize if the highest-ranked submission performs worse than the current
state-of-the-art innovation). A number of other factors also contribute to motivating this choice; see
§2 and §7 for a more complete discussion.
1.1 Related Work
There is now a very large body of work on contest design in the economics literature. In addition to
general work on the theory of contests (Green and Stokey 1983; Liu et al. 2013a; Minor 2013; Moldovanu
and Sela 2001; 2006; Moldovanu et al. 2007; Parreiras and Rubinchik 2010; Riis 2010; Siegel 2009;
2010; 2012; Szymanski and Valletti 2005), there has also been a variety of work motivated by specific
applications. Glazer and Hassin (1988), Lazear (1989), Lazear and Rosen (1981), and Nalebuff and
Stiglitz (1983) address the design of rank-order tournaments for the purpose of incentivizing employees
to work hard. Che and Gale (2003) and Taylor (1995) study contest design in the context of research
tournaments. And there is a growing literature motivated by online crowdsourcing contests (Archak
and Sundararajan 2009; Cavallo and Jain 2013; Chawla et al. 2012; DiPalantino and Vojnović 2009;
Ghosh and McAfee 2012). There is also an extensive empirical literature analyzing observed strategic
behavior by real subjects in contests and other competitive incentive schemes in a variety of settings
(Archak 2010; Boudreau et al. 2011; Cadsby et al. 2007; Carpenter et al. 2010; Chen et al. 2010;
Dohmen and Falk 2011; Eriksson and Villeval 2008; Lazear 2000; Liu et al. 2013b; Morgan et al. 2012).
This large body of work addresses a variety of questions relating to the economics of contests such
as comparing tournaments to schemes for incentivizing effort that are based solely on an individual’s
personal performance (Green and Stokey 1983; Lazear and Rosen 1981; Nalebuff and Stiglitz 1983;
Riis 2010), relationships between contests and all-pay auctions (Chawla et al. 2012; Che and Gale
2003; DiPalantino and Vojnović 2009), taxing entry to improve the quality of contributions (Ghosh
and McAfee 2012; Taylor 1995), dynamic contests in which agents dynamically decide how much effort
to exert when they continuously get information about how they are doing in the contest compared
to their competitors (Fu and Lu 2012; Konrad 2010), and incentives for agents to work hard in teams
(Holmstrom 1982). The most relevant subset of this literature to our paper is that relating to optimal
contest design (Archak and Sundararajan 2009; Glazer and Hassin 1988; Liu et al. 2013a; Minor 2013;
Moldovanu and Sela 2001; 2006; Moldovanu et al. 2007; Szymanski and Valletti 2005), which asks how
to choose the rank-based rewards Aj for each rank 1 ≤ j ≤ n to optimize various objective functions
of the principal, under various models of effort and constraints on the rewards.
The key difference between this literature, especially that focusing on the optimal design of contests,
and our work is that this literature almost exclusively studies contests that are structured as rank-order
mechanisms, where the announced prizes depend only on the rank of an agent’s output relative to that
of her competitors—that is, it largely studies purely ordinal mechanisms, whereas we consider mixed
cardinal-ordinal mechanisms of the form described in §2. Specifically, rather than ask which ordinal
rewards (A1 , A2 , . . . , An ) incentivize optimal outcomes, we ask how cardinal information about an
agent’s output can be optimally incorporated into a given ordinal reward mechanism (A1 , A2 , . . . , An )
to incentivize the highest effort, a question that has not been addressed previously in this literature.
The only relevant exception, to the best of our knowledge, is the work in Chawla et al. (2012) which
does address the question of optimal contest design using both cardinal and ordinal information, albeit
in a completely different model of output and for risk-neutral agents only. Interestingly, although the
models of output are completely different, the optimal mechanism in that model also turns out to use
cardinal information only via a threshold.
In addition to this key difference, we also note that we do not make any of a variety of simplifying
assumptions used in several papers in this literature, such as risk-neutrality on the part of the agents,
linearity of agents’ cost functions, the assumption that there is no stochastic component that affects
the final quality of an agent’s contribution, or assuming a particular functional form for the principal’s
objective function.
2 Model
We consider a setting where a pool of agents strategically choose their effort level, which stochastically
determines their submission qualities, in a contest which awards prizes to each agent based on the
relative rank of her output as well as possibly its absolute quality. We first analyze such contests with
a fixed, exogenously given, number n of participants who only make strategic effort choices, and extend
our analysis to an endogenous entry model, where agents also decide whether or not to enter the contest
in addition to choosing their effort levels, in §6. The model is described formally below.
There are n agents who compete in a contest. Each agent simultaneously chooses a level of effort
ei ≥ 0 to exert on her entry. The quality qi of agent i’s output is determined both by her effort, and a
random noise term εi

qi = ei + εi ,

where each εi is an independent and identically distributed draw from a cumulative distribution function F (ε). We will assume throughout that (i) the probability density function f (ε) corresponding to F is a bounded continuously differentiable function of ε with a bounded first derivative, and (ii) that the distribution F , as well as the number of agents n, is common knowledge to all agents as well as the principal designing the contest.
The noise εi has a number of possible interpretations—most simply, εi could model the fact that a given amount of effort (for example, the time spent working on a research question) does not deterministically guarantee a certain level of output, but rather only influences its expected value; as another interpretation, the noise ε could also model randomness in the measurement, or perception, of an agent’s output quality by the principal. Most interestingly, εi could be thought of as modeling
heterogeneity amongst agents in terms of their abilities to solve the specific problem or execute the
specific task required for the contest—the pool of contestants a priori all have similar skills or ability
for the subject of the contest, modeled by the fact that their εi are all drawn from the same distribution
F (for example, programmers with some level of expertise in a particular programming language, or
graphic designers with similar skill levels), but agents each come up with solutions of different qualities
for the specific task posed by a particular contest. That is, while agents are a priori homogeneous in the
sense that their noise terms are drawn from the same distribution F , their final outputs differ based on
the specific contest-dependent problem. We note here that the contest design literature that models
heterogeneity amongst agents often does so by modeling output qi as ability-scaled effort ai ei , where
agents’ abilities are all randomly drawn from a single distribution F . A logarithmic transformation of
variables from qi = ai ei in those models yields exactly our model where qi = ei + εi —the key difference
between the two models is that agents observe their random draws of ai before making their strategic
effort choices in those models, whereas agents do not observe their draws of εi prior to making their
effort choices in our model (i.e., an agent does not know how well she will do on the task until she
actually does it).
Utilities. An agent’s utility is the difference between her effort cost and her benefit from any prize
she wins in the contest. An agent who receives a prize Pi derives a value, or benefit, of v(Pi ), where
v(·) is a strictly increasing and concave function satisfying v(P ) ≥ 0 for all P ≥ 0; we also assume
(without loss of generality) that v(0) = 0. Exerting effort ei incurs a cost c(ei ), where c(·) is a strictly
increasing and convex function satisfying c(e) ≥ 0 for all e ≥ 0 and c′(0) = 0. We begin our analysis by
considering the case where c(0) = 0, so that all agents have an incentive to participate in equilibrium;
in §6 we extend our analysis to consider the case where c(0) > 0 and agents make endogenous decisions
about whether to participate in equilibrium.
An agent who exerts effort ei and receives prize Ai obtains utility ui = v(Ai ) − c(ei ). We assume
that each agent chooses ei to maximize her expected utility E[v(Ai )] − c(ei ), where the expectation is
over the n random draws of εi′ that determine each agent’s output quality given her effort. We note
that our formulation of an agent’s utility captures both risk-neutral and risk-averse agents, as v(P ) may
either be a linear function of P (in which case agents are risk-neutral) or a strictly concave function of
P (corresponding to risk-averse agents).
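For concreteness, the following minimal Python sketch gives one pair of benefit and cost functions satisfying the assumptions above (strictly increasing concave v with v(0) = 0; strictly increasing convex c with c(0) = 0 and c′(0) = 0). The particular functional forms are illustrative choices, not part of the paper’s model.

    def v(P, rho=0.5):
        # Strictly increasing, concave benefit with v(0) = 0;
        # rho = 1 would correspond to a risk-neutral agent.
        return P ** rho

    def c(e):
        # Strictly increasing, convex effort cost with c(0) = 0 and c'(0) = 0.
        return e ** 2

    def utility(prize, effort):
        # u_i = v(A_i) - c(e_i), as in the text.
        return v(prize) - c(effort)

    print(utility(10.0, 1.5))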
Mechanisms. We suppose that the principal running the contest can observe the quality qi of an
agent’s output, and therefore also the relative rankings of all agents (our results extend immediately
to a model where the principal’s observations of agent’s output qualities are noisy, as long as the noise
in observation is drawn IID from the same distribution for each agent). We use (A1 , A2 , . . . , An ) to
denote a rank-order mechanism which assigns a reward Aj to the agent with the j th -highest quality
output, regardless of its absolute quality, and assume throughout that A1 ≥ . . . ≥ An ≥ 0.
Let qj denote the quality of the j th -ranked submission. We consider mixed cardinal-ordinal modifications of a rank-order mechanism (A1 , A2 , . . . , An ) of the form (g1 (q1 )A1 , . . . , gn (qn )An ), which awards the
agent with the j th -ranked submission of quality qj a prize Pj = gj (qj )Aj , where gj (q) is a non-decreasing
function of q satisfying 0 ≤ gj (q) ≤ 1. That is, gj (q) represents the fraction of the ‘maximum’ prize Aj
for achieving rank j that an agent obtains if she produces a contribution with absolute quality q. Note
that the rank-order mechanism (A1 , A2 , . . . , An ) corresponds to setting gj to be the constant function
gj (q) = 1 for all j. Note also that if the same function gj = g is used to determine the fraction of the
prize Aj to be awarded for each rank, then lower-ranked agents win smaller fractions of their maximum
possible rewards, since qj decreases with rank and g is a non-decreasing function.
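To make the mechanism class concrete, the following Python sketch simulates one round of a mixed cardinal-ordinal mechanism: efforts map to qualities through additive noise, agents are ranked, and the j th -ranked agent receives gj (qj )Aj . The standard normal noise, the prize vector, and the two example g functions (the constant function recovering the pure rank-order mechanism, and a step function giving a threshold mechanism) are illustrative assumptions, not quantities taken from the paper.

    import numpy as np

    def run_cardinal_contest(efforts, prizes, g_funcs, rng):
        # Simulate q_i = e_i + eps_i, rank the agents, and pay the j-th ranked
        # agent g_j(q_j) * A_j.  Noise is assumed standard normal here.
        efforts = np.asarray(efforts, dtype=float)
        qualities = efforts + rng.standard_normal(len(efforts))
        order = np.argsort(-qualities)           # index of the rank-1 agent first
        payoffs = np.zeros(len(efforts))
        for rank, agent in enumerate(order):
            payoffs[agent] = g_funcs[rank](qualities[agent]) * prizes[rank]
        return qualities, payoffs

    rng = np.random.default_rng(0)
    prizes = [10.0, 5.0, 0.0]
    rank_order = [lambda q: 1.0] * 3                  # g_j(q) = 1: pure rank-order mechanism
    threshold = [lambda q, t=1.0: float(q >= t)] * 3  # step at t_j = 1: threshold mechanism
    print(run_cardinal_contest([1.2, 1.0, 0.8], prizes, rank_order, rng))
    print(run_cardinal_contest([1.2, 1.0, 0.8], prizes, threshold, rng))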
We note that a principal with access to cardinal measurements qi of the qualities of each submission
could conceivably use more general mechanisms even when modifying an existing rank-order mechanism, by allowing the function gj to depend not only on the quality of the corresponding j th -ranked
submission, but rather on the entire vector of qualities (q1 , . . . , qn ). We restrict ourselves to mechanisms
that use functions gj (qj ) for simplicity, both of analysis and implementation—in addition to leading to
simpler mechanisms, a principal might, practically speaking, prefer to announce a contest where the
prize awarded to a winner is contingent on the absolute quality only of her own submission, and not
on the absolute qualities of the submissions produced by her competitors.
Principal’s Objective. We assume that the principal’s objective is to maximize some utility function
W (q1 , . . . , qn ) such that if qi′ ≥ qi for all i, then W (q1′ , . . . , qn′ ) ≥ W (q1 , . . . , qn ); i.e., W (·) is non-decreasing in the quality of the agents’ contributions. A consequence of this is that if agents all use
the same level of effort in equilibrium, then the principal’s expected utility is non-decreasing in the
symmetric pure strategy effort choice of the agents. Therefore, all such objectives are simultaneously
improved by eliciting higher equilibrium effort from the agents, assuming that agents follow a symmetric
equilibrium.
Homogeneity. Our model of costs ci (·) = c(·) assumes homogeneity amongst all potential contributors, corresponding to assuming that agents do not differ in their abilities but simply in the effort
that they choose to put in. While there are indeed settings where potential contributors may differ
widely in their abilities, there are also settings where it is effort, rather than ability, which dominates
the quality of the outcome produced; more significantly, the set of potential contestants may be self-selected to have rather similar abilities or expertise levels, and therefore have similar costs to producing
a particular quality. Furthermore, if agents do have heterogeneous abilities that affect the final qualities
of their output, but are unsure of these true abilities prior to making their strategic effort choices, each
agent’s incentives will be identical to those in our model where abilities are randomly drawn from the
cumulative distribution function F (·) after the agents choose their effort levels. Therefore, our model
does capture many situations in which there is heterogeneity in the agents’ abilities, as long as agents
do not learn their abilities prior to choosing their effort levels.
3 Eliciting Higher Effort with Cardinal Information
Consider a contest structured as a rank-order tournament M(A1 , A2 , . . . , An ) where the j th -ranked
agent receives a (purely rank-based) prize Aj , and suppose that the principal can also measure the
cardinal qualities qi of agents’ submissions. In this section, we address the basic question of whether the
principal can use her knowledge of the qualities qi to provide stronger incentives for effort, and thereby
increase her equilibrium utility: we show in Theorem 3.2 that for any mechanism M(A1 , A2 , . . . , An ),
simply comparing each agent’s output quality against a threshold to decide whether or not to award
her the prize corresponding to her rank improves equilibrium effort relative to M(A1 , A2 , . . . , An ). We
first formally define threshold mechanisms.
Definition 3.1. A threshold mechanism M(A1 , t1 , A2 , t2 , . . . , An , tn ) is characterized by a set of prizes
Aj and thresholds tj such that the j th -ranked agent receives the prize Aj if the quality qj of her submission
satisfies qj ≥ tj , and receives no prize otherwise.
Note that threshold mechanisms are a special case of the family of mixed cardinal-ordinal mechanisms (g1 (q1 )A1 , . . . , gn (qn )An ) defined in §2, corresponding to setting each of the functions gj to be a
step function with a step at tj , i.e., gj (q) is the function gj (q) = 0 if q < tj and gj (q) = 1 for all q ≥ tj .
To prove Theorem 3.2, we will first need to characterize equilibrium effort in threshold mechanisms. In
Appendix A.1, we show that a necessary condition for an effort level e to constitute an equilibrium in
a threshold mechanism M(A1 , t1 , A2 , t2 , . . . , An , tn ) is:
c'(e) = \sum_{j=1}^{n} v(A_j) \Biggl[ \binom{n-1}{j-1} \bigl(1 - F(t_j - e)\bigr)^{j-1} F(t_j - e)^{n-j} f(t_j - e)
        + \int_{t_j - e}^{\infty} \biggl( \frac{(n-1)!}{(j-1)!\,(n-j-1)!} \bigl(1 - F(\epsilon_i)\bigr)^{j-1} F(\epsilon_i)^{n-j-1}
        - \frac{(n-1)!}{(j-2)!\,(n-j)!} \bigl(1 - F(\epsilon_i)\bigr)^{j-2} F(\epsilon_i)^{n-j} \biggr) f(\epsilon_i)^2 \, d\epsilon_i \Biggr]          (1)

where we abuse notation by letting \frac{(n-1)!}{(j-2)!(n-j)!} ≡ 0 when j = 1. The following theorem says that this necessary condition is also sufficient for e to be an equilibrium level of effort when the cost function c is adequately convex.
Theorem 3.1. Suppose c′′(·) > C where C = C(n, Aj , F ) is a constant independent of the thresholds tj .
Then, for any threshold mechanism M(A1 , t1 , A2 , t2 , . . . , An , tn ), a symmetric pure strategy equilibrium
exists, and is unique, with equilibrium effort given by the solution to the first-order condition (1).
All proofs are in the appendix. We note that the assumption that a player’s cost of effort is
sufficiently convex is a fairly mild one that has been widely used in various settings in prior literature.
Throughout the remainder of the paper, we will therefore assume that this condition holds so that
there is a unique symmetric pure strategy equilibrium described by the solution to (1).
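As a concrete numerical illustration (not from the paper), the following Python sketch evaluates the right-hand side of the first-order condition (1) for equal thresholds tj = t and solves c′(e) = RHS for the symmetric equilibrium effort. Standard normal noise, risk-neutral agents with v(A) = A, quadratic cost c(e) = e², and the prize vector are all illustrative assumptions.

    import numpy as np
    from math import comb, factorial
    from scipy.stats import norm
    from scipy.integrate import quad
    from scipy.optimize import brentq

    def foc_rhs(e, t, prizes, v=lambda A: A, F=norm.cdf, f=norm.pdf):
        # Right-hand side of the first-order condition (1) with equal thresholds
        # t_j = t.  Coefficients with a negative factorial argument are taken to
        # be zero, following the paper's convention for j = 1 (and treating the
        # analogous j = n case the same way).
        n = len(prizes)
        total = 0.0
        for j in range(1, n + 1):
            term = comb(n - 1, j - 1) * (1 - F(t - e))**(j - 1) * F(t - e)**(n - j) * f(t - e)
            c1 = factorial(n - 1) / (factorial(j - 1) * factorial(n - j - 1)) if n - j - 1 >= 0 else 0.0
            c2 = factorial(n - 1) / (factorial(j - 2) * factorial(n - j)) if j - 2 >= 0 else 0.0

            def integrand(x):
                val = 0.0
                if c1 > 0:
                    val += c1 * (1 - F(x))**(j - 1) * F(x)**(n - j - 1)
                if c2 > 0:
                    val -= c2 * (1 - F(x))**(j - 2) * F(x)**(n - j)
                return val * f(x)**2

            integral, _ = quad(integrand, t - e, np.inf)
            total += v(prizes[j - 1]) * (term + integral)
        return total

    # Quadratic cost c(e) = e^2 (so c'(e) = 2e) is an illustrative choice; the
    # convexity condition of Theorem 3.1 is assumed to hold.
    prizes, t = [10.0, 5.0, 0.0], 0.5
    e_star = brentq(lambda e: 2 * e - foc_rhs(e, t, prizes), 0.0, 50.0)
    print("equilibrium effort at threshold t = 0.5:", e_star)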
We now use this characterization of equilibrium effort in threshold mechanisms to answer the question of whether using cardinal information to determine rewards can improve equilibrium outcomes.
Note that a rank-order mechanism M(A1 , A2 , . . . , An ) which rewards agents solely on the basis of their
relative performance is equivalent to a threshold mechanism in which the thresholds tj are all set to
−∞, since all agents’ output qualities always exceed this threshold and so every agent always receives
the prize Aj associated with her rank. If rewards based only on relative performance were optimal, then
a mechanism in which all the thresholds were equal to tj = −∞ would elicit higher equilibrium effort
than one which used thresholds bounded away from tj = −∞. We will show that slightly perturbing
all the thresholds away from −∞—setting some minimum required standard for agents’ output, even
if very low—strictly improves incentives for effort; this means that a threshold of −∞ corresponding
to pure rank-order mechanisms cannot be optimal. Before stating the theorem, we introduce notation
for the equilibrium effort in a threshold mechanism with equal thresholds t.
Definition 3.2 (e∗ (t)). Consider any rank-order mechanism (A1 , A2 , . . . , An ). We use e∗ (t, n) to denote
the equilibrium effort chosen by an agent in a threshold mechanism M(A1 , t1 , A2 , t2 , . . . , An , tn ) with
thresholds tj = t and rank-order rewards (A1 , A2 , . . . , An ), when there are n − 1 other agents who
compete in the contest4 . We drop the dependence on n, writing only e∗ (t), when the number of agents
is clear from the context.
Theorem 3.2. Suppose the noise density f (ε) is increasing in ε for sufficiently negative values of ε.
Consider any rank-order mechanism M(A1 , A2 , . . . , An ), and the associated family of threshold mechanisms M(A1 , t1 , A2 , t2 , . . . , An , tn ) with threshold tj = t for all j. Then the derivative of the equilibrium
effort e∗ (t) with respect to the threshold t is positive at t = −∞, so that incorporating cardinal information into agents’ rewards locally improves equilibrium effort in any rank-order mechanism.
Theorem 3.2 says that as long as the noise εi which determines the observed qualities qi = ei + εi
satisfies very mild conditions—that its density is increasing for adequately negative εi < 0, a condition
satisfied, for example, by any noise density that is single-peaked at 0—there is a local improvement
to equilibrium effort e∗ (t) at t = −∞. This means that cardinal information can always be used to
improve the level of effort chosen by agents in equilibrium relative to a pure rank-order mechanism.
4 Optimality of Threshold Mechanisms
A natural question following the result in Theorem 3.2 on local improvements in effort at the threshold t = −∞ is how to choose the optimal threshold t∗ that maximizes equilibrium effort. But before
addressing this question, we ask a more basic question—why threshold mechanisms (beyond their simplicity)? Theorem 3.2 shows that using cardinal information about outputs, even as coarsely as by
comparing output scores against a single threshold, can improve equilibrium effort. But cardinal measures of quality can be incorporated into a mechanism in a number of more informative ways—for
example, the reward (for each rank) might increase linearly with an agent’s output, or as a convex
function of her quality score (up to some maximum), possibly creating stronger incentives for effort by
making reward more strongly dependent on absolute quality qi = ei + εi , whose mean increases with
effort. In general, given any set of prizes (A1 , A2 , . . . , An ), the principal could apply any reasonable
function gj (qj ) to the quality score qj of the j th -ranked agent to decide how much of the prize Aj she will
receive. What choice of functions gj (·) creates the strongest incentives for effort amongst all possible
non-decreasing functions gj ?
Our main result, Theorem 4.1, shows that there is a compelling reason beyond simplicity to use
threshold mechanisms—for any given rank-order mechanism M(A1 , A2 , . . . , An ), no other functions
gj (q) can incentivize higher equilibrium effort than the optimal step functions that increase from 0 to 1
at some threshold score, corresponding precisely to threshold mechanisms. Therefore, threshold mechanisms constitute an optimal modification of rank-order contests to incorporate cardinal information
about agents’ output qualities.
In analyzing this question, throughout we restrict attention to functions gj (q) for which there is some
small δj > 0 such that gj (q) may only assume values that are integral multiples of δj . This assumption
is realistic because in any practical application there will be some minimum unit of a currency that
represents the smallest possible amount by which one can change the value of an agent’s prize. For
example, if prizes were paid in US dollars, any prize would necessarily have to be some integral multiple
of some small fraction of a penny, as a principal would not be able to divide an agent’s prize further
than this. Thus in our analysis in Theorem 4.1 we assume throughout that gj (q) must be an integral
multiple of some small fraction δj . Under these conditions we prove the following result:
Theorem 4.1. Consider a rank-order tournament in which the agent who finishes in j th place is
awarded a prize gj (q)Aj , where gj (q) is a non-decreasing function satisfying 0 ≤ gj (q) ≤ 1 for all q.
There exist functions gj (q) of the form gj (q) = 0 for q < qj∗ and gj (q) = 1 for q ≥ qj∗ for some constants
qj∗ that incentivize the highest equilibrium effort amongst all possible mechanisms characterized by some
functions gj (q).
While there are potentially a variety of far more nuanced ways to incorporate cardinal information
in determining agents’ prizes in a mixed cardinal-ordinal mechanism, Theorem 4.1 says that there is
always an optimal mechanism with an exceedingly simple form—it awards the entire value of the j th
prize to the agent who finishes in j th place if this agent’s quality meets some threshold, and awards her
no prize at all otherwise. This result is quite surprising, since it would seem far more natural to use
a mechanism where an agent’s prize varies smoothly with the quality of her output than one where it
exhibits a sharp discontinuity with respect to quality, especially if agents are risk-averse—risk-averse
agents are likely to prefer a prize structure in which they have a good chance of receiving a moderate
prize over one in which they have a high chance of receiving nothing and a high chance of receiving
a large prize. Nonetheless, Theorem 4.1 shows that such threshold mechanisms can always incentivize
agents to choose optimal effort levels.
5 Optimal Thresholds
In the previous section, we saw that threshold mechanisms not only provide local improvements to
rank-order mechanisms, as shown in Theorem 3.2, but are in fact optimal amongst the class of all
modifications to rank-order mechanisms that use an agent’s absolute quality in addition to her rank to
determine her rewards (Theorem 4.1). We now return to the immediate question that followed Theorem
3.2—while slightly increasing the threshold above −∞ uniformly for all ranks improved equilibrium
effort over M(A1 , A2 , . . . , An ), what are the thresholds t∗j for each rank j = 1, . . . , n that maximize
equilibrium effort, and how do they depend on j and n?
We first address the question of how the optimal threshold t∗j for each rank j varies with j. The
proof of optimality of thresholds amongst all mechanisms of the form gj (qj )Aj did not imply that the
function gj must be the same for all j, and thus allowed for the possibility that the thresholds t∗j that
are optimal for each rank might be substantially different for each rank. However, while it could seem
intuitive that the optimal threshold might need to either consistently increase or consistently decrease
across ranks, this turns out not to be the case at all—we show below in Theorem 5.1 that the optimal
threshold for each rank is in fact identical for all possible ranks 1, . . . , n.
Theorem 5.1. Suppose the noise density f (·) is single-peaked at 0. Then, the optimal threshold
mechanism (A1 , t∗1 , . . . , An , t∗n ) applies the same threshold to each rank j for any monotone mechanism
M(A1 , A2 , . . . , An ), i.e., t∗j = t∗ for j = 1, . . . , n.
Theorem 5.1 says that optimally incorporating cardinal scores into a rank-order contest to maximize
effort is, in fact, even simpler than the method suggested by Theorem 4.1—a mechanism designer only
needs to compare all agents’ outputs against the same baseline, i.e., consider only threshold mechanisms
with equal thresholds, reducing the problem of finding the optimal modification of M(A1 , A2 , . . . , An )
to one of choosing a single optimal threshold t∗ for M(A1 , A2 , . . . , An ). We use the following notation
henceforth.
Definition 5.1. Consider any given rank-order mechanism (A1 , A2 , . . . , An ). We use t∗ (n) to denote
the optimal threshold that maximizes equilibrium effort e∗ (t, n) in M(A1 , t1 , A2 , t2 , . . . , An , tn ) with n
contestants. We suppress the dependence on n to simply use t∗ to denote the optimal threshold when n
is clear from the context.
Again, the optimal threshold t∗ (n) of course depends on the rank-order mechanism (A1 , A2 , . . . , An )
that the thresholds are applied to; we again suppress this dependence for brevity. The following
corollary, which is central to proving many of our subsequent results, follows immediately from the
proof of Theorem 5.1.
Corollary 5.1. Suppose that f (·) is single-peaked at 0. Then the optimal threshold that induces the
highest equilibrium effort in the threshold mechanism is equal to the equilibrium level of effort at that
threshold: e∗ (t∗ (n)) = t∗ (n).
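Continuing the numerical sketch given after equation (1) (same illustrative assumptions: standard normal noise, quadratic cost c(e) = e², risk-neutral v, example prize vector), Corollary 5.1 reduces the search for the optimal threshold to a one-dimensional fixed-point problem t = e∗ (t):

    from scipy.optimize import brentq

    prizes = [10.0, 5.0, 0.0]

    def equilibrium_effort(t, prizes):
        # Symmetric equilibrium effort e*(t): solve c'(e) = RHS of (1) with c(e) = e^2.
        # foc_rhs is the function defined in the sketch following equation (1).
        return brentq(lambda e: 2 * e - foc_rhs(e, t, prizes), 0.0, 50.0)

    # Corollary 5.1: the effort-maximizing threshold satisfies e*(t*) = t*.
    t_star = brentq(lambda t: equilibrium_effort(t, prizes) - t, -3.0, 5.0)
    print("t* =", t_star, " e*(t*) =", equilibrium_effort(t_star, prizes))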
While equilibrium effort is maximized at the optimal threshold t∗ , a mechanism designer without
access to precise information about the parameters of her population of contestants may not be able to
compute and use the optimal threshold t∗ for this population. Our next result addresses the question
of how the equilibrium effort e∗ (t) varies as a function of the threshold t used to modify a mechanism M, for arbitrary, possibly non-optimal, thresholds t.
Theorem 5.2. Suppose that f (·) is single-peaked at 0. Then the equilibrium level of effort is greater
than the threshold and increasing in the threshold for t ≤ t∗ and lower than the threshold and decreasing
in the threshold for t > t∗ .
This result is potentially relevant to the question of identifying the optimal threshold t without
complete knowledge of the contestant population in situations where multiple iterations of a contest
will be held, as follows. Suppose a mechanism designer uses some particular threshold t, and observes
the qualities of the submissions elicited with that threshold. When the number of contestants n is large,
the mean of the elicited qualities qi = e∗ (t) + εi provides a reasonable estimate of the equilibrium effort
e∗ (t), so that the mechanism designer can draw inferences about the likely levels of equilibrium effort
that agents chose. In particular, a principal who repeatedly runs such a contest would be able to make
probabilistic inferences as to whether equilibrium effort e∗ (t) was lower or higher than the threshold t
that was used. The theorem above indicates how the mechanism designer can use her estimate of e∗ (t)
to update the threshold for the next iteration of the contest: since equilibrium effort is both greater
than the threshold and increasing in the threshold when t < t∗ , if equilibrium effort was likely greater than the threshold, then the threshold t is probably below the optimal threshold t∗ , so the principal should increase the threshold, and vice versa.
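A minimal sketch of this adjustment rule follows; the step size and the use of the sample mean as the estimate of e∗ (t) are illustrative choices, not prescriptions from the paper.

    def update_threshold(t, observed_qualities, step=0.25):
        # Mean observed quality estimates e*(t) when n is large (Theorem 5.2):
        # raise the threshold if effort likely exceeded it, lower it otherwise.
        effort_estimate = sum(observed_qualities) / len(observed_qualities)
        return t + step if effort_estimate > t else t - step

    print(update_threshold(0.5, [0.8, 1.1, 0.7]))   # estimate 0.87 > 0.5, so raise to 0.75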
5.1 Comparative Statics
We now address the question of how the optimal threshold t∗ varies with changes in the problem
parameters, specifically, with the number of contestants and changes in the prize structure. First we
consider comparative statics with respect to the number of agents. Our mechanisms M(A1 , A2 , . . . , An )
so far have been specified in terms of the prizes for each of the n ranks, where n is the number of players.
Since we want to now vary n, we assume that there is a fixed number k of prizes, A1 , A2 , . . . , Ak , and
the number of players n ≥ k. In this scenario, we prove the following result:
Theorem 5.3. Consider any given rank-order mechanism M with prizes A1 , . . . , Ak , and let t∗ (n)
denote the optimal threshold for M when there are n agents in the contest. If the noise density f (·)
is single-peaked at 0, then the optimal threshold t∗ (n) is decreasing in the number of players n for all
n ≥ k. Further, the equilibrium effort e∗ (t∗ (n)) in the optimal threshold mechanism also decreases with
n.
Next we ask how the optimal threshold varies with the number of prizes awarded. To formulate
this question meaningfully, we consider contests where the top k participants who meet the threshold
all receive the same prize, for some k, and consider two simple ways that the total number of prizes
might increase—first, where the total prize pool stays the same, but the prizes are split amongst a
larger number of players, and second, where the value of each prize stays the same, but more prizes of
this value are awarded (contingent on meeting the threshold). The optimal threshold varies predictably
with these changes in the prize structure, as the following theorem illustrates:
Theorem 5.4. Suppose that f (·) is single-peaked at zero and the number of prizes is less than the
number of players.
1. The optimal threshold t∗ in a contest with k equal prizes of value A increases with k.
2. The optimal threshold t∗ in a contest with k equal prizes of value A/k each also increases with k
if players are sufficiently risk-averse in the sense that the coefficient of absolute risk aversion of
v(·) is sufficiently large.
The intuition for this result has to do with how an agent’s incentives to try to meet the threshold
vary with the number of prizes. As the number of prizes increases in either of the two manners
considered in Theorem 5.4, the expected value of the prize that an agent obtains for meeting the
threshold unambiguously increases. Thus agents have a stronger incentive to try to meet the threshold
when the number of prizes increases, and agents will thus exert more effort in equilibrium when there
are a larger number of prizes. Since the optimal threshold is equal to the equilibrium level of effort, it
then follows that increasing the number of prizes also increases the optimal threshold.
6 Endogenous Entry
We have so far focused on situations where participation is exogenous, assuming that there is a set of
n contestants who strategically choose their effort levels (knowing n) to maximize their utility. While
this is a natural starting point for our analysis, there are many real-world situations in which entry is
an endogenous, strategic choice—that is, agents can also choose whether or not to participate at all in
addition to how much effort to exert, so that the number of competing contestants is not exogenously
fixed but rather is endogenously determined by agents’ strategic choices. In this section, we investigate
cardinal contests with endogenous entry.
We extend our model in §2 to incorporate endogenous entry as follows. Agents first simultaneously
decide whether or not to participate in the contest; an agent who does not participate incurs no cost
but also receives no prize, obtaining a utility of zero. An agent who does participate incurs a cost
depending on the effort level she chooses, and an expected benefit depending on her performance in
the contest; her final utility is exactly as in the model with exogenous entry[5] in §2. After agents make
their simultaneous participation decisions, each agent observes how many other agents have chosen to
participate in the contest, and the agents then simultaneously decide how much effort to exert in the
contest. We note that in this model with endogenous entry, agents’ cost functions c(·) can be nonzero
at 0 (i.e., c(0) > 0), meaning that participating in the contest is allowed (though not required) to be
strictly more costly than not participating at all, even if one exerts almost no effort.
[5] Of course, note that this utility depends on the number of actual competitors (which was n in the exogenous entry model since all agents participated); here the number of competitors will be determined endogenously as explained next.
A set of participation decisions is an equilibrium if no agent can profitably deviate by making a
different participation decision, when taking into account how this deviation would affect her final
utility given that the remaining participants will update their level of effort to account for this change
in participation. Specifically, suppose k agents decide to participate in a given threshold mechanism
with threshold t, and recall the definition of equilibrium effort e∗ (t, n) from Definition 3.2. For k to
constitute an equilibrium level of participation, (i) the expected utility to each of these k agents when
they participate with effort e∗ (t, k) and the remaining agents do not participate must be nonnegative,
and (ii) none of the n − k non-participants can profitably deviate by participating and choosing any
effort level e ≥ 0 when these k agents are participating with effort e∗ (t, k + 1). Throughout we let n
denote the number of agents who must decide whether to participate in the contest and k denote the
number of agents who actually choose to participate in the contest, i.e., n is the number of potential
contestants and k is the number of actual participants. We first note that there indeed exists an
equilibrium to the endogenous entry game under threshold mechanisms:
Theorem 6.1. Suppose entry is endogenous. For every threshold mechanism, there is a pure strategy
equilibrium in which some agents participate with certainty and the remaining agents do not participate;
further, all participating agents choose the same level of effort.
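The two equilibrium-entry conditions (i) and (ii) above can be checked mechanically once the model primitives are specified. The following schematic Python sketch takes the equilibrium-effort and expected-utility maps as user-supplied callables; the names and signatures are hypothetical, chosen only to mirror the conditions in the text.

    def is_entry_equilibrium(k, n, t, equilibrium_effort, participant_utility, best_entry_utility):
        # equilibrium_effort(t, m): e*(t, m) for m participants.
        # participant_utility(t, m, e): a participant's expected prize value minus
        #   effort cost when m agents enter and all exert effort e.
        # best_entry_utility(t, m, e): an entrant's best achievable expected utility
        #   against m incumbents who each exert effort e.
        e_k = equilibrium_effort(t, k)
        # (i) each of the k participants earns nonnegative expected utility
        if participant_utility(t, k, e_k) < 0:
            return False
        # (ii) no outsider can profitably enter against incumbents playing e*(t, k+1)
        if k < n and best_entry_utility(t, k, equilibrium_effort(t, k + 1)) > 0:
            return False
        return True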
Having determined that pure strategy equilibria of the above form continue to exist under endogenous entry in threshold mechanisms, we now investigate the nature of optimal thresholds. In general,
different choices of the threshold may result in different levels of participation in equilibrium, and in
general, the principal will face a tradeoff between the equilibrium number of participants and the effort
these participants choose to exert. In the following theorem, we address the question of how a principal
who desires a certain level of participation can choose the optimal threshold that maximizes effort from
these participants. Recall the definition of t∗ (n) from Definition 5.1.
Theorem 6.2. Suppose that f (·) is single-peaked at zero, and suppose that k is such that there exists
some threshold t(k) such that it is an equilibrium for exactly k agents to participate in the mechanism
(A1 , A2 , . . . , An ) modified by the threshold t(k). To maximize equilibrium effort subject to the constraint
that exactly k agents participate, the principal chooses the largest threshold less than or equal to t∗ (k)
at which there is an equilibrium in which exactly k agents participate.
Theorem 6.2 indicates that it will never be optimal for the principal to use a threshold greater than
t∗ (k) to maximize equilibrium effort if she wants exactly k agents to participate in equilibrium. We now
illustrate how equilibrium participation levels vary with the threshold t for such t ≤ t∗ (k).
Theorem 6.3. Suppose that f (·) is single-peaked at zero.
Fix a rank-order mechanism
M(A1 , A2 , . . . , An ), and restrict attention to thresholds t ≤ t∗ (k), where k denotes the equilibrium
number of participants. Suppose that at threshold t1 , there exists an equilibrium in which exactly k1
agents participate. Then for all thresholds t2 ≤ t1 , there exists some equilibrium in which exactly k2
agents participate for some k2 ≥ k1 .
Participation-effort tradeoffs. In general, different choices of threshold could lead to different equilibrium participation levels, since Theorem 6.3 indicates that equilibrium participation may increase as
a result of decreasing the threshold. These equilibria with lower thresholds and higher participation
will also lead to lower levels of effort—from Theorem 5.3, equilibrium effort decreases in the number
of contestants, and from Theorem 5.2, equilibrium effort is increasing in the threshold for thresholds
t ≤ t∗ (k) (by Theorem 6.2, these are the only thresholds that the principal should consider under
endogenous entry). Thus, while the principal will generally have several feasible levels of participation
that she can induce by appropriately choosing the threshold, she will also face a participation-effort
tradeoff—she can induce higher levels of participation by choosing a lower threshold, but this will come
at the cost of lower equilibrium effort. The choice of which point to choose in the participation-effort
tradeoff curve, and consequently the value of the optimal threshold, will be determined by how the
principal values participation versus effort, which in turn can depend strongly on the context in which
the contest is being held.
7 Conclusion
In this paper, we addressed the problem of how a principal running a contest might optimally incorporate cardinal information regarding the absolute qualities of contestants’ entries into an existing
rank-order prize structure, motivated by the observation that an increasing number of contests today
evaluate entries according to some numerical metric, capturing, for example, the performance of an
algorithm on a dataset, or of an intervention in a randomized trial. We found, surprisingly, that threshold mechanisms which simply compare a submission’s score against an absolute threshold—in fact, the
same threshold for each prize—are optimal amongst the class of mixed cardinal-ordinal mechanisms
where the j th -ranked submission receives a quality-dependent reward gj (qj )Aj with 0 ≤ gj (qj ) ≤ 1. This
result shows that using cardinal information as coarsely as by comparison against a single threshold
provides the optimal modification of a rank-order mechanism in terms of incentives for effort.
There are a number of practical considerations, beyond the equilibrium effort levels predicted by
a game-theoretic analysis, that might cause a principal to choose a particular prize structure for her
contest, such as simplicity, sponsorships of various prize levels, media or publicity considerations, and
so on. At the same time, the sponsor might wish to make the actual award of the promised prizes
(typically a small number of relatively large prizes) in such contests contingent on the absolute, rather
than merely the relative, performance of their submissions; this motivates the specific question we
ask about modifying given rank-order mechanisms. However, while we only considered the class of
mechanisms which modify the prize for the j th rank based on the output qj of the agent ranked in the
j th position, it is also very interesting to study the more general optimal contest design problem in such
contests with access to absolute measurements of quality—what mechanism M(q1 , . . . , qn ) incentivizes
the highest effort over all mechanisms with access to cardinal, and not just ordinal, information about outputs[6]?
[6] Chawla et al. (2012) addresses this question in a specific (but different) model; see §1.1.
The answer to this question, of course, depends on the specifics of the model and the environment in
question—it is well-known that even when using only ordinal information, the structure of the optimal
contest depends quite strongly on the environment, such as agents’ risk preferences, constraints on
the prizes, and the objective function of the designer. For instance, the optimal contest when agents
are risk-averse might distribute multiple prizes while the optimal contest in the same model, except
with risk-neutral rather than risk-averse agents, is a winner-take-all contest; similarly the optimal prize
structure might vary depending on whether the principal wants to maximize the sum of all submissions’
qualities, or the quality of the best submission.
An interesting open question relevant to this question about overall optimal contests is the following.
Suppose a rank-order mechanism M(v1 , . . . , vn ) elicits higher equilibrium effort than another rank-order
mechanism M̂(v̂1 , . . . , v̂n ). We already know that the optimal cardinal contest for both M and M̂
modifies the rank-order mechanism via a threshold; call the corresponding optimal thresholds t∗ and
t̂∗ respectively. Does M(t∗ ) also elicit higher effort than M̂(t̂∗ )? An answer in the affirmative can
immediately address the problem of the overall optimal cardinal contest, which would then be the
optimal rank-order mechanism for the setting modified by the optimal threshold for that rank-order
mechanism.
A number of questions also arise from practical considerations regarding uncertainty—the optimal
threshold t∗ for a given rank-order contest depends on the parameters of the population, which the
designer might typically not have access to. While our comparative statics results in §5.1 provide
guidance on adjusting the threshold based on the elicited responses in an iteration of the contest,
how badly off can a principal be when choosing a suboptimal threshold? And if the designer were
to run several iterations of a contest, what is the optimal policy of exploring thresholds in terms of
overall discounted utility, especially in settings with endogenous entry? The robustness of threshold
mechanisms under uncertainty, and learning optimal thresholds, forms an interesting family of directions
for further exploration in this setting.
We conclude with an intriguing theoretical question. The results we obtain about the optimality of
threshold mechanisms, and the optimality of a single threshold for each rank, are strongly reminiscent
of results from the theory of optimal auctions regarding the optimality of a single reserve price—a
threshold quality that an agent’s submission must exceed to receive the reward for a given rank is
essentially analogous to a reserve that an agent must beat in addition to beating a number of other
contestants. Is there a formal relation between the two results? For instance, it has been noted in the
context of position auctions for sponsored search advertising that, under certain regularity conditions,
the optimal mechanism is an auction in which the same reserve price is used for all positions (Ostrovsky
and Schwarz 2009). Various papers on contests have demonstrated tight relationships between the
optimal allocation of prizes in contests and all-pay auctions (Chawla et al. 2012; Che and Gale 2003;
DiPalantino and Vojnović 2009). What relationships, if any, exist between the nature of the optimal
thresholds in these rank-order mechanisms and the optimal reserve prices in position auctions? A
deeper understanding of the connection between threshold mechanisms and optimal auctions is an
interesting open direction for further research.
References
Archak, Nikolay. 2010. “Money, Glory and Cheap Talk: Analyzing Strategic Behavior of Contestants in Simultaneous Crowdsourcing Contests on TopCoder.com.” Proceedings of the Nineteenth
International Conference on the World Wide Web (WWW) 19: 21-30.
Archak, Nikolay and Arun Sundararajan. 2009. “Optimal Design of Crowdsourcing Contests.” Proceedings of the Thirtieth International Conference on Information Systems (ICIS) 30: 200.
Boudreau, Kevin J., Nicola Lacetera, and Karim R. Lakhani. 2011. “Incentives and Problem
Uncertainty in Innovation Contests: An Empirical Analysis.” Management Science 57(5): 843-863.
Cadsby, C. Bram, Fei Song, and Francis Tapon. 2007. “Sorting and Incentive Effects of Pay-forPerformance: An Experimental Investigation.” Academy of Management Journal 50(2): 387-405.
Carpenter, Jeffrey, Peter Hans Matthews and John Schirm. 2010. “Sorting and Incentive Effects of
Pay-for-Performance: An Experimental Investigation.” American Economic Review 100(1): 504-517.
Cavallo, Ruggiero and Shaili Jain. 2013. “Winner-Take-All Crowdsourcing Contests with Stochastic
Predictions.” Proceedings of the First AAAI Conference on Human Computation and Crowdsourcing
(HCOMP) 1.
Chawla, Shuchi, Jason D. Hartline, and Balasubramanian Sivan. 2012. “Optimal Crowdsourcing
Contests.” Proceedings of the Twenty-Third ACM-SIAM Symposium on Discrete Algorithms (SODA)
13: 856-868.
Che, Yeon-Koo and Ian Gale. 2003. “Optimal Design of Research Tournaments.” American
Economic Review 93(3): 646-671.
Chen, Yan, Teck-Hua Ho, and Yong-Mi Kim. 2010. “Knowledge Market Design: A Field Experiment at Google Answers.” Journal of Public Economic Theory 12(4): 641-664.
DiPalantino, Dominic and Milan Vojnović. 2009. “Crowdsourcing and All-Pay Contests.” Proceedings of the Tenth ACM Conference on Electronic Commerce (EC) 10: 119-128.
Dohmen, Thomas and Armin Falk. 2011. “Performance Pay and Multidimensional Sorting: Productivity, Preferences, and Gender.” American Economic Review 101(2): 556-590.
Eriksson, Tor and Marie Claire Villeval. 2008. “Performance Pay, Sorting, and Social Motivation.”
Journal of Economic Behavior and Organization 68(2): 412-421.
Fu, Qiang and Jingfeng Lu. 2012. “The Optimal Multi-Stage Contest.” Economic Theory 51(2):
351-382.
Ghosh, Arpita and R. Preston McAfee. 2012. “Crowdsourcing with Endogenous Entry.” Proceedings of the Twentieth International World Wide Web Conference (WWW) 20: 137-146.
Glazer, Amihai and Refael Hassin. 1988. “Optimal Contests.” Economic Inquiry 26(1): 133-143.
Green, Jerry R. and Nancy L. Stokey. 1983. “A Comparison of Tournaments and Contracts.”
Journal of Political Economy 91(3): 349-364.
Holmstrom, Bengt. 1982. “Moral Hazard in Teams.” Bell Journal of Economics 13(2): 324-340.
Konrad, Kai A. 2010. “Dynamic Contests.” Max Planck Institute for Intellectual Property Typescript.
Lazear, Edward P. 1989. “Pay Equality and Industrial Politics.” Journal of Political Economy
97(3): 561-580.
Lazear, Edward P. 2000. “Performance Pay and Productivity.” American Economic Review 90(5):
1346-1361.
Lazear, Edward P. and Sherwin Rosen. 1981. “Rank-Order Tournaments as Optimum Labor
Contracts.” Journal of Political Economy 89(5): 841-864.
Liu, Bin, Jingfeng Liu, Ruqu Wang, and Jun Zhang. 2013a. “Prize and Punishment: Optimal
Contest Design with Incomplete Information.” National University of Singapore Typescript.
Liu, Tracy Xiao, Jiang Yang, Lada A. Adamic, and Yan Chen. 2013b. “Crowdsourcing with All-Pay
Auctions: A Field Experiment on Taskcn.” University of Michigan Typescript.
Minor, Dylan B. 2013. “Increasing Revenue through Rewarding the Best (or not at all).” Northwestern University Typescript.
Moldovanu, Benny and Aner Sela. 2001. “The Optimal Allocation of Prizes in Contests.” American
Economic Review 91(3): 542-558.
Moldovanu, Benny and Aner Sela. 2006. “Contest Architecture.” Journal of Economic Theory
126(1): 70-96.
Moldovanu, Benny, Aner Sela, and Xianwen Shi. 2007. “Contests for Status.” Journal of Political
Economy 115(2): 338-363.
Morgan, John, Henrik Orzen and Martin Sefton. 2012. “Endogenous Entry in Contests.” Economic
Theory 51(2): 435-463.
Nalebuff, Barry J. and Joseph E. Stiglitz. 1983. “Prizes and Incentives: Towards a General Theory
of Compensation and Competition.” Bell Journal of Economics 14(1): 21-43.
Ostrovsky, Michael and Michael Schwarz. 2009. “Reserve Prices in Internet Advertising Auctions:
A Field Experiment.” Stanford Graduate School of Business Typescript.
Parreiras, Sérgio and Anna Rubinchik. 2010. “Contests with Three or More Heterogeneous Agents.”
Games and Economic Behavior 68(2): 703-715.
Riis, Christian. 2010. “Efficient Contests.” Journal of Economics and Management Strategy 19(3):
643-665.
Siegel, Ron. 2009. “All-Pay Contests.” Econometrica 77(1): 71-92.
Siegel, Ron. 2010. “Asymmetric Contests with Conditional Investments.” American Economic
Review 100(5): 2230-2260.
Siegel, Ron. 2012. “Participation in Deterministic Contests.” Economics Letters 116(3): 588-592.
Szymanski, Stefan and Tommaso M. Valletti. 2005. “Incentive Effects of Second Prizes.” European
Journal of Political Economy 21(2): 467-481.
Taylor, Curtis R. 1995. "Digging for Golden Carrots: An Analysis of Research Tournaments." American Economic Review 85(4): 872-890.
A Appendix: Proofs of Main Results
A.1 Equilibrium Analysis of Threshold Mechanisms
We start by analyzing equilibria in threshold mechanisms, in order to understand equilibrium effort,
and therefore outcomes, in such modified rank-order mechanisms using cardinal information solely
via comparison against a threshold. We will begin by characterizing the effort levels that agents must
choose in any symmetric pure-strategy equilibrium, and then show that such a symmetric pure strategy
equilibrium in which all agents choose an effort level ei = e exists, and is unique, under mild conditions
on the shape of an agent’s cost of effort c(e).
Consider a given threshold mechanism M(A1, t1, A2, t2, . . . , An, tn). Agent i's expected utility in M(A1, t1, A2, t2, . . . , An, tn) from choosing effort ei, when all other agents are exerting effort e, is the difference between i's expected prize and the cost of her effort c(ei), where i's prize depends both on the rank of the quality of her output qi = ei + εi relative to the qualities of the other agents' outputs qi′ = ei′ + εi′, and on whether her output qi exceeds the threshold tj specified for her rank j. In order for it to be an equilibrium for all agents to choose effort level ei = e for some e, it must be a best response for a particular agent i to choose effort ei = e when all the remaining n − 1 agents choose ej = e. We thus begin by considering the probability that agent i's output has rank j when she chooses effort ei and all other agents choose effort e. If ε[j] denotes the jth-largest of the noise terms ε drawn by the n − 1 remaining agents, then agent i will have the jth-highest quality output if and only if
\[
e + \epsilon_{[j]} \le e_i + \epsilon_i \le e + \epsilon_{[j-1]},
\]
that is, if and only if
\[
\epsilon_{[j]} \le e_i + \epsilon_i - e \le \epsilon_{[j-1]}.
\]
The probability of this event, for a given draw of εi, is the probability that the number ei + εi − e lies between the random values of ε[j] and ε[j−1]. Thus the probability of this event is equal to the probability that exactly j − 1 of the n − 1 randomly drawn values from the cumulative distribution function F(·) exceed ei + εi − e. Using standard expressions for binomial probabilities, we know that this event takes place with probability
\[
\binom{n-1}{j-1}\bigl(1 - F(e_i - e + \epsilon_i)\bigr)^{j-1} F(e_i - e + \epsilon_i)^{n-j}. \tag{2}
\]
Now suppose agent i produces output qi = ei + εi. To actually receive the prize Aj in the threshold mechanism M(A1, t1, A2, t2, . . . , An, tn), i's output must also exceed the threshold tj specified by M(A1, t1, A2, t2, . . . , An, tn), i.e., satisfy ei + εi ≥ tj. Therefore, the probability that agent i finally receives the jth prize with value v(Aj), unconditional on the realization of εi, is the integral of the probability in equation (2) over εi ≥ tj − ei:
\[
\Pr(A_j \mid e_i, e) = \int_{t_j - e_i}^{\infty} \binom{n-1}{j-1}\bigl(1 - F(e_i - e + \epsilon_i)\bigr)^{j-1} F(e_i - e + \epsilon_i)^{n-j} f(\epsilon_i)\, d\epsilon_i.
\]
Thus an agent's total expected utility from exerting effort ei, when other agents choose e, is the sum of her expected benefit over all ranks j minus her cost c(ei):
\[
E[u(e_i, e)] = \sum_{j=1}^{n} v(A_j) \Pr(A_j \mid e_i, e) - c(e_i)
= \sum_{j=1}^{n} v(A_j) \int_{t_j - e_i}^{\infty} \binom{n-1}{j-1}\bigl(1 - F(e_i - e + \epsilon_i)\bigr)^{j-1} F(e_i - e + \epsilon_i)^{n-j} f(\epsilon_i)\, d\epsilon_i - c(e_i).
\]
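To make the expression above concrete, the following sketch evaluates E[u(ei, e)] numerically. It is purely illustrative and not part of the paper's analysis: it assumes standard normal noise (F = Φ), risk-neutral prize valuations v(A) = A, and a quadratic effort cost c(e) = (COST/2)e²; the names expected_utility, COST, prizes, and thresholds are invented for this example.

```python
# Illustrative evaluation of E[u(e_i, e)] for a threshold mechanism.
# Assumptions (not from the paper): standard normal noise, v(A) = A,
# and quadratic effort cost c(e) = 0.5 * COST * e**2.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.stats import norm

COST = 5.0  # curvature of the quadratic effort cost; "sufficiently convex"


def expected_utility(ei, e, prizes, thresholds):
    """Expected utility of agent i exerting effort ei while every rival exerts e."""
    n = len(prizes)
    total = -0.5 * COST * ei ** 2
    for j, (A_j, t_j) in enumerate(zip(prizes, thresholds), start=1):
        def integrand(eps_i):
            x = ei - e + eps_i  # agent i's margin over a rival's noise draw
            # probability of finishing exactly j-th, weighted by the noise density
            return (comb(n - 1, j - 1) * (1 - norm.cdf(x)) ** (j - 1)
                    * norm.cdf(x) ** (n - j) * norm.pdf(eps_i))
        prob_j, _ = quad(integrand, t_j - ei, np.inf)  # only counts q_i >= t_j
        total += A_j * prob_j
    return total


# Example: four agents, a single prize of 1 for the top rank, common threshold 0.3.
print(expected_utility(0.4, 0.4, prizes=[1.0, 0.0, 0.0, 0.0], thresholds=[0.3] * 4))
```

Sweeping ei for a fixed rival effort e in this sketch traces out exactly the best-response problem analyzed next.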
For an effort level e to be a symmetric pure-strategy equilibrium, it must be a best response for agent i to choose effort ei = e when all the remaining n − 1 agents choose effort e, i.e., the expected utility E[u(ei, e)] above must be maximized at ei = e. A necessary condition for ei = e to maximize E[u(ei, e)] is that the derivative of i's expected utility, evaluated at ei = e, is zero:
\[
\frac{\partial E[u(e_i, e)]}{\partial e_i}\bigg|_{e_i = e}
= \sum_{j=1}^{n} v(A_j)\Biggl[\binom{n-1}{j-1}\bigl(1 - F(t_j - e)\bigr)^{j-1} F(t_j - e)^{n-j} f(t_j - e)
+ \int_{t_j - e}^{\infty} \biggl[\frac{(n-1)!}{(j-1)!\,(n-j-1)!}\bigl(1 - F(\epsilon_i)\bigr)^{j-1} F(\epsilon_i)^{n-j-1}
- \frac{(n-1)!}{(j-2)!\,(n-j)!}\bigl(1 - F(\epsilon_i)\bigr)^{j-2} F(\epsilon_i)^{n-j}\biggr] f(\epsilon_i)^2\, d\epsilon_i\Biggr] - c'(e) = 0, \tag{3}
\]
where we abuse notation by defining (n − 1)!/[(j − 2)!(n − j)!] to be zero when j = 1. The first order condition above is
a necessary condition for effort ei = e to be a best response for agent i to the effort choice e of the remaining agents; however, in order for the effort level e to constitute a symmetric pure-strategy equilibrium, we also need these conditions to be sufficient. We show below that this first-order condition is indeed sufficient for ei = e to maximize E[u(ei, e)] as long as a player's cost of effort c(·) is sufficiently convex. Furthermore, under these conditions, any such symmetric pure-strategy equilibrium is unique.
Theorem A.1. Suppose c′′(·) > C, where C = C(n, Aj, F) is a constant independent of the thresholds tj. Then, for any threshold mechanism M(A1, t1, A2, t2, . . . , An, tn), a symmetric pure-strategy equilibrium e exists, and is unique, where e is the solution to the first-order condition (1).
Proof. If the mechanism designer uses a threshold mechanism in which the agent who finishes in j th
place receives a prize if and only if this agent’s observed quality exceeds tj , then an agent i’s expected
utility from exerting effort ei when all other agents are exerting effort e is given by:
\[
E[u_i] = \sum_{j=1}^{n} v(A_j) \int_{t_j - e_i}^{\infty} \binom{n-1}{j-1}\bigl(1 - F(e_i - e + \epsilon_i)\bigr)^{j-1} F(e_i - e + \epsilon_i)^{n-j} f(\epsilon_i)\, d\epsilon_i - c(e_i).
\]
If we let yj(e, ei, εi) ≡ $\binom{n-1}{j-1}(1 - F(e_i - e + \epsilon_i))^{j-1} F(e_i - e + \epsilon_i)^{n-j}$ denote the probability that agent i finishes in jth place given that all other agents exert effort e, agent i exerts effort ei, and the value of agent i's noise term is εi, then we can alternatively rewrite agent i's expected utility from exerting effort ei when all other agents are exerting effort e as:
\[
E[u_i] = \sum_{j=1}^{n} \int_{t_j - e_i}^{\infty} v(A_j)\, y_j(e, e_i, \epsilon_i) f(\epsilon_i)\, d\epsilon_i - c(e_i).
\]
From this it follows that the derivative of the agent's utility with respect to ei is given by the following expression:
\[
\sum_{j=1}^{n} v(A_j)\biggl[y_j(e, e_i, t_j - e_i) f(t_j - e_i) + \int_{t_j - e_i}^{\infty} \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i} f(\epsilon_i)\, d\epsilon_i\biggr] - c'(e_i).
\]
Similarly, the second derivative of the agent's utility with respect to ei is given by the following expression:
\[
\sum_{j=1}^{n} v(A_j)\biggl[-y_j(e, e_i, t_j - e_i) f'(t_j - e_i) + \frac{\partial [y_j(e, e_i, t_j - e_i)]}{\partial e_i} f(t_j - e_i) + \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{\epsilon_i = t_j - e_i} f(t_j - e_i)
+ \int_{t_j - e_i}^{\infty} \frac{\partial^2 y_j(e, e_i, \epsilon_i)}{\partial e_i^2} f(\epsilon_i)\, d\epsilon_i\biggr] - c''(e_i),
\]
where ∂[yj(e, ei, tj − ei)]/∂ei denotes the partial derivative of yj(e, ei, tj − ei) with respect to ei, and ∂yj(e, ei, εi)/∂ei |εi=tj−ei denotes the partial derivative of yj(e, ei, εi) with respect to ei evaluated at εi = tj − ei. But yj(e, ei, εi) is just the probability that an agent finishes in jth place for given values of e, ei, and εi, and is therefore a continuous and bounded function with bounded first and second derivatives (for fixed and finite n). Similarly, f(·) is just a continuous and bounded density with bounded first derivative. From this it follows that there exists some bound C independent of e, ei, and all the thresholds tj such that
\[
\sum_{j=1}^{n} v(A_j)\biggl[-y_j(e, e_i, t_j - e_i) f'(t_j - e_i) + \frac{\partial [y_j(e, e_i, t_j - e_i)]}{\partial e_i} f(t_j - e_i) + \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{\epsilon_i = t_j - e_i} f(t_j - e_i)
+ \int_{t_j - e_i}^{\infty} \frac{\partial^2 y_j(e, e_i, \epsilon_i)}{\partial e_i^2} f(\epsilon_i)\, d\epsilon_i\biggr] < C
\]
always holds. Thus if c′′(e) > C for all e, then the second derivative of the agent's utility with respect to effort is always negative, meaning that if the first-order conditions for a given level of effort ei to be a local optimum are satisfied, then this level of effort ei is also a global optimum. From this it follows that if c(·) is a sufficiently convex function, then any level of effort e that satisfies equation (1) is indeed a level of effort that can be sustained in a symmetric pure strategy equilibrium.
To prove that a symmetric pure strategy equilibrium exists, and is unique, it thus suffices to show that there exists a unique value of e that satisfies equation (1). To see that such a value exists, note that when e = 0, we have c′(e) = 0 while the derivative of a player's expected prize with respect to effort is positive, so the derivative in equation (1) is positive. In the limit as e → ∞, c′(e) → ∞ while the derivative of a player's expected prize with respect to effort must remain bounded, so the derivative in equation (1) becomes negative. Since the expression in equation (1) is a continuous function of e, it then follows from the intermediate value theorem that there exists some value of e for which this equation is satisfied with equality.
To see that this equilibrium is unique for sufficiently convex c(·), note that we can rewrite the condition that must be satisfied for e to be an equilibrium as
\[
\sum_{j=1}^{n} v(A_j)\biggl[y_j(e, e, t_j - e) f(t_j - e) + \int_{t_j - e}^{\infty} \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e} f(\epsilon_i)\, d\epsilon_i\biggr] - c'(e) = 0. \tag{4}
\]
The derivative of the left-hand side of this equation with respect to e is then equal to
\[
\sum_{j=1}^{n} v(A_j)\biggl[-y_j(e, e, t_j - e) f'(t_j - e) + \frac{\partial [y_j(e, e, t_j - e)]}{\partial e} f(t_j - e) + \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e,\, \epsilon_i = t_j - e} f(t_j - e)
+ \int_{t_j - e}^{\infty} \frac{\partial}{\partial e}\biggl[\frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e}\biggr] f(\epsilon_i)\, d\epsilon_i\biggr] - c''(e).
\]
Again, yj(e, ei, εi) is just the probability that an agent finishes in jth place for given values of e, ei, and εi, and is therefore a continuous and bounded function with bounded first and second derivatives. Similarly, f(·) is just a continuous and bounded density with bounded first derivative. From this it follows that there exists some bound C independent of e, ei, and all the thresholds tj such that
\[
\sum_{j=1}^{n} v(A_j)\biggl[-y_j(e, e, t_j - e) f'(t_j - e) + \frac{\partial [y_j(e, e, t_j - e)]}{\partial e} f(t_j - e) + \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e,\, \epsilon_i = t_j - e} f(t_j - e)
+ \int_{t_j - e}^{\infty} \frac{\partial}{\partial e}\biggl[\frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e}\biggr] f(\epsilon_i)\, d\epsilon_i\biggr] < C
\]
Thus if c′′(e) > C for all e, then the left-hand side of equation (4) is a decreasing function of e, meaning there is at most one solution to this equation. From this it follows that if c(·) is a sufficiently convex function, then there exists a unique pure strategy equilibrium.
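Computationally, Theorem A.1 can be read as a fixed-point statement: the symmetric equilibrium is the effort level e that is itself a best response when every rival chooses e. The sketch below, which is illustrative only and for self-containment repeats the expected-utility routine from the earlier sketch, relies on the same hypothetical assumptions (standard normal noise, v(A) = A, quadratic cost with large curvature COST); the names best_response and symmetric_effort are invented here.

```python
# Illustrative fixed-point computation of the symmetric equilibrium of Theorem A.1.
# Assumptions (not from the paper): standard normal noise, v(A) = A, and
# c(e) = 0.5 * COST * e**2 with COST large enough for uniqueness.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import norm

COST = 5.0


def expected_utility(ei, e, prizes, thresholds):
    n = len(prizes)
    total = -0.5 * COST * ei ** 2
    for j, (A_j, t_j) in enumerate(zip(prizes, thresholds), start=1):
        def integrand(eps_i):
            x = ei - e + eps_i
            return (comb(n - 1, j - 1) * (1 - norm.cdf(x)) ** (j - 1)
                    * norm.cdf(x) ** (n - j) * norm.pdf(eps_i))
        prob_j, _ = quad(integrand, t_j - ei, np.inf)
        total += A_j * prob_j
    return total


def best_response(e, prizes, thresholds, e_max=2.0):
    """Agent i's optimal effort when every rival exerts e."""
    res = minimize_scalar(lambda ei: -expected_utility(ei, e, prizes, thresholds),
                          bounds=(0.0, e_max), method="bounded")
    return res.x


def symmetric_effort(prizes, thresholds, e_max=2.0):
    """The effort e solving best_response(e) = e (unique when c is convex enough)."""
    return brentq(lambda e: best_response(e, prizes, thresholds) - e, 1e-4, e_max)


prizes, thresholds = [1.0, 0.0, 0.0, 0.0], [0.3] * 4
print("equilibrium effort:", round(symmetric_effort(prizes, thresholds), 3))
```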
A.2 Eliciting Higher Effort with Cardinal Information
Proof of Theorem 3.2: Suppose the mechanism designer uses a threshold mechanism in which the
agent who finishes in j th place receives a prize if and only if this agent’s observed quality exceeds t.
We know from equation (4) applied to the special case where tj = t for all j that in order for it to be a
symmetric pure strategy equilibrium for all agents to exert a level of effort e, it must be the case that
\[
\sum_{j=1}^{n} v(A_j)\biggl[y_j(e, e, t - e) f(t - e) + \int_{t - e}^{\infty} \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e} f(\epsilon_i)\, d\epsilon_i\biggr] = c'(e), \tag{5}
\]
where yj(e, ei, εi) ≡ $\binom{n-1}{j-1}(1 - F(e_i - e + \epsilon_i))^{j-1} F(e_i - e + \epsilon_i)^{n-j}$. If increasing the threshold t would increase the left-hand side of this equation, then equilibrium effort increases. Differentiating the left-hand side of this equation with respect to t gives
\[
\sum_{j=1}^{n} v(A_j)\, y_j(e, e, t - e) f'(t - e),
\]
which is positive for sufficiently negative values of t: since equilibrium effort is certainly bounded and finite, t − e → −∞ in equilibrium as t becomes arbitrarily negative, which implies f′(t − e) > 0 for sufficiently negative values of t. Thus increasing the value of the threshold from t = −∞ to some greater number increases the left-hand side of equation (5) and therefore also increases equilibrium effort. ¤
A.3 Optimality of Threshold Mechanisms
Proof of Theorem 4.1: Consider the set of ladder functions lj(q) that are characterized by a set of mj cutoffs q∗1,j < q∗2,j < . . . < q∗mj,j for each prize j such that lj(q) = r0,j for q < q∗1,j, lj(q) = rm,j for all q ∈ [q∗m,j, q∗m+1,j) (for 1 ≤ m ≤ mj − 1), and lj(q) = rmj,j for q ≥ q∗mj,j, where 0 ≤ r0,j < r1,j < . . . < rmj,j ≤ 1. Note that any non-decreasing function gj(q) such that gj(q) is an integral multiple of δj and 0 ≤ gj(q) ≤ 1 for all q can be written in this form. Thus in order to prove that threshold mechanisms are optimal amongst the set of all mechanisms, it suffices to show that threshold mechanisms are also optimal amongst the set of all mechanisms induced by ladder functions.
Once again let yj(e, ei, εi) denote the probability that agent i finishes in jth place for a given realization of εi given that agent i exerts effort ei and all other agents exert effort e. If the prize for being ranked in the jth place with a contribution of quality q is lj(q)Aj for some ladder function lj(q), then an agent i's expected utility from exerting effort ei when all other agents are exerting effort e is
\[
E[u_i] = \sum_{j=1}^{n} \int_{-\infty}^{\infty} v\bigl(l_j(e_i + \epsilon_i) A_j\bigr)\, y_j(e, e_i, \epsilon_i) f(\epsilon_i)\, d\epsilon_i - c(e_i)
= \sum_{j=1}^{n} \sum_{k=0}^{m_j} \int_{q^*_{k,j} - e_i}^{q^*_{k+1,j} - e_i} v(r_{k,j} A_j)\, y_j(e, e_i, \epsilon_i) f(\epsilon_i)\, d\epsilon_i - c(e_i),
\]
where we abuse notation by letting q∗0,j ≡ −∞ and q∗mj+1,j ≡ ∞. At a symmetric pure-strategy equilibrium, all agents, and specifically agent i, choose effort e, and this is a best response, i.e., the derivative of ui with respect to ei must be zero at ei = e. That is, the equilibrium effort e must satisfy the first order conditions given below:
\[
c'(e) = \sum_{j=1}^{n} \sum_{k=0}^{m_j} v(r_{k,j} A_j)\biggl[\Bigl(y_j(e, e, q^*_{k,j} - e) f(q^*_{k,j} - e) - y_j(e, e, q^*_{k+1,j} - e) f(q^*_{k+1,j} - e)\Bigr)
+ \int_{q^*_{k,j} - e}^{q^*_{k+1,j} - e} \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e} f(\epsilon_i)\, d\epsilon_i\biggr]. \tag{6}
\]
But note that the right-hand side of this equation is a linear function of v(rk,j Aj) for all k and j. From this it follows that for all k ≤ mj, the right-hand side of this equation is either non-decreasing in rk,j or non-increasing in rk,j. Thus if mj ≥ 2, then one can instead set the value of r1,j to be equal to either r0,j or r2,j without decreasing the right-hand side of equation (6), meaning that one can make this change without decreasing the equilibrium level of effort.
But setting the value of r1,j equal to r0,j or r2,j would be equivalent to replacing the ladder function lj(q) with a ladder function that has mj − 1 points of discontinuity rather than mj points of
discontinuity. From this it follows that if one is using a mechanism based on ladder functions lj (q) such
that some lj (q) has mj ≥ 2 points of discontinuity, then the mechanism designer can induce at least
as large a level of effort by instead using some mechanism based on ladder functions such that lj (q)
has mj − 1 points of discontinuity. By induction, it then follows that the mechanism designer can also
induce at least as large a level of effort by instead using some mechanism based on ladder functions
such that each lj (q) has no more than one point of discontinuity.
To complete the proof, it suffices to show that if the mechanism designer is using a mechanism
based on ladder functions lj (q) that each have no more than one point of discontinuity, i.e., mj ≤ 1
for all j, then these single-step ladder functions must correspond to threshold mechanisms, i.e., that
r0,j ∈ {0, 1} and r1,j ∈ {0, 1}. Since equilibrium effort is again given by the solution to equation (6), and the right-hand side of this equation is a linear function of v(rk,j Aj) for all k and j, we have again that the right-hand side of this equation is either non-decreasing in r0,j or non-increasing in r0,j. Thus
if lj (q) has exactly one point of discontinuity, then one can set the value of r0,j to either be equal to 0
or r1,j without decreasing the right-hand side of equation (6), meaning that one can make this change
without decreasing the equilibrium level of effort.
Now if r0,j = 0, then the same argument illustrates that one can set the value of r1,j to either be
equal to 0 or 1 without decreasing the right-hand side of equation (6), meaning that one can make this
change without decreasing the equilibrium level of effort. And if r0,j = r1,j , then lj (q) has no points of
discontinuity, and the same argument again illustrates that one can set the value of r0,j = r1,j to be
either 0 or 1 without decreasing the right-hand side of equation (6), meaning that one can make this
change without decreasing the equilibrium level of effort. By combining these facts, it follows that the
mechanism designer can induce the agents to exert at least as much effort in equilibrium by using a
threshold mechanism in which r0,j ∈ {0, 1} and r1,j ∈ {0, 1}. The result then follows. ¤
A.4 Optimal Thresholds
Proof of Theorem 5.1: We know from equation (4) in the proof of Theorem 3.1 that if we define yj(e, ei, εi) ≡ $\binom{n-1}{j-1}(1 - F(e_i - e + \epsilon_i))^{j-1} F(e_i - e + \epsilon_i)^{n-j}$, then the effort level e in a symmetric pure-strategy equilibrium must satisfy
\[
\sum_{j=1}^{n} v(A_j)\biggl[y_j(e, e, t_j - e) f(t_j - e) + \int_{t_j - e}^{\infty} \frac{\partial y_j(e, e_i, \epsilon_i)}{\partial e_i}\bigg|_{e_i = e} f(\epsilon_i)\, d\epsilon_i\biggr] = c'(e). \tag{7}
\]
In the optimal threshold mechanism, the thresholds tj must be chosen in such a way as to make the
left-hand side of this equation as large as possible for all e. A necessary condition for this is that the
derivative of the left-hand side of this equation with respect to tj must be zero for all j in the optimal
mechanism. When we differentiate the left-hand side of this equation with respect to tj , the only term
that remains is
\[
v(A_j)\, y_j(e, e, t_j - e) f'(t_j - e). \tag{8}
\]
Now note that the principal would never have an incentive to choose a value of tj equal to −∞ or ∞ for any j. To see this, note that for values of tj that are arbitrarily negative, it must be the case that tj − e < 0 in equilibrium, meaning f′(tj − e) > 0 and the derivative of the left-hand side of equation (7) with respect to tj is positive. Thus for any sufficiently negative values of tj, the principal can always increase the left-hand side of equation (7) by increasing tj, and thereby increase the equilibrium level of effort. From this it follows that a threshold of tj = −∞ can never be optimal for any j. A similar argument shows that a threshold of tj = ∞ can also never be optimal for any j. Thus there is some large finite value of T > 0 such that, for every j, any threshold tj ∉ [−T, T] is dominated by some threshold tj ∈ [−T, T]. Since this set of feasible thresholds is compact, and the equilibrium
level of effort varies continuously with the thresholds (from the fact that the equation characterizing
equilibrium effort as a function of the thresholds varies continuously with effort and the thresholds), it
then follows that some set of optimal thresholds exists.
Now if the optimal threshold tj is not equal to −∞ or ∞, then the expression in equation (8) must be zero at the optimal tj. Since f(·) is single-peaked at 0, when tj is not equal to −∞ or ∞ this expression is zero only when tj = e. Thus the optimal thresholds satisfy tj = e for all j, and the optimal thresholds are always equal for all prizes. ¤
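Theorem 5.1 suggests a simple numerical recipe for the optimal common threshold: repeatedly replace the current threshold with the equilibrium effort it induces until the two coincide; by Theorem 5.2 this map moves any starting threshold monotonically toward the unique fixed point t∗ = e∗(t∗). The sketch below is illustrative only, under the hypothetical assumptions of standard normal noise, v(A) = A, and quadratic cost; the prize vector and all function names are invented.

```python
# Illustrative search for the optimal common threshold via the fixed-point
# property t* = e*(t*) of Theorem 5.1.  Assumptions (not from the paper):
# standard normal noise, v(A) = A, c(e) = 0.5 * COST * e**2.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar
from scipy.stats import norm

COST = 5.0
PRIZES = [1.0, 0.5, 0.0, 0.0]  # hypothetical prize vector, n = 4 agents


def expected_utility(ei, e, t):
    n = len(PRIZES)
    total = -0.5 * COST * ei ** 2
    for j, A_j in enumerate(PRIZES, start=1):
        def integrand(eps_i):
            x = ei - e + eps_i
            return (comb(n - 1, j - 1) * (1 - norm.cdf(x)) ** (j - 1)
                    * norm.cdf(x) ** (n - j) * norm.pdf(eps_i))
        prob_j, _ = quad(integrand, t - ei, np.inf)
        total += A_j * prob_j
    return total


def equilibrium_effort(t, e_max=2.0):
    br = lambda e: minimize_scalar(lambda ei: -expected_utility(ei, e, t),
                                   bounds=(0.0, e_max), method="bounded").x
    return brentq(lambda e: br(e) - e, 1e-4, e_max)


# Iterate t <- e*(t); each step moves the threshold toward t* = e*(t*).
t = 0.0
for _ in range(25):
    t_next = equilibrium_effort(t)
    if abs(t_next - t) < 1e-4:
        break
    t = t_next
print("optimal common threshold (approx.):", round(t, 3))
```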
Proof of Theorem 5.2: Note that a player i's expected value for her prize from exerting effort ei when all other players are exerting effort e is just equal to the sum, over all j = 1, . . . , n − 1, of the difference between the value the player obtains from receiving the jth prize Aj and her value from the (j + 1)th prize Aj+1, multiplied by the probability that she finishes in at least jth place and meets the threshold
t. Now let Gj(q) denote the probability that no more than j − 1 of the n − 1 values of εk are greater than q. Then the probability that agent i finishes in at least jth place for any given realization of εi when she exerts effort ei and all other players are exerting effort e is Gj(ei − e + εi). From this it follows that the probability that agent i finishes in at least jth place and meets the threshold t unconditional on the realization of εi is
\[
\int_{t - e_i}^{\infty} G_j(e_i - e + \epsilon_i) f(\epsilon_i)\, d\epsilon_i.
\]
By using the insights in the previous paragraph, it follows that player i’s expected utility from
exerting effort ei when all other players are exerting effort e is
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr) \int_{t - e_i}^{\infty} G_j(e_i - e + \epsilon_i) f(\epsilon_i)\, d\epsilon_i - c(e_i).
\]
By differentiating this expression with respect to ei , setting the derivative equal to zero, and using
the fact that ei = e must hold in any symmetric pure strategy equilibrium, it follows that the following
relationship must be satisfied by the equilibrium level of e:
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr)\biggl[G_j(t - e) f(t - e) + \int_{t - e}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e). \tag{9}
\]
Now the derivative of the left-hand side of this equation with respect to t − e is
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr) G_j(t - e) f'(t - e),
\]
which is positive when t−e < 0. Note that if t < t∗ , where t∗ denotes the optimal threshold, then it must
be the case that the equilibrium effort corresponding to that threshold, e∗ (t), satisfies t < e∗ (t) < t∗
for the following reason. If we had e = t, then the value of the left-hand side of equation (9) would
be the same at t as it is at t∗ but the value of the right-hand side would be strictly lower, meaning
the left-hand side of equation (9) would be greater than the right-hand side of equation (9). Similarly,
if we had e = t∗ , then the right-hand side of equation (9) would be the same at t as it is at t∗ , but
the left-hand side would be strictly lower at t than it is at t∗ (since the derivative of the left-hand side
with respect to t − e is positive when t − e < 0), meaning the left-hand side of equation (9) would be
smaller than its right-hand side. By combining these facts, the continuity of f(·), f′(·), and Gj(·), and
the intermediate value theorem, it follows that there must exist some e∗ (t) satisfying t < e∗ (t) < t∗
such that the left-hand side of equation (9) and the right-hand side of equation (9) are equal. Since
the solution to this equation is unique, it then follows that if t < t∗ , then equilibrium effort satisfies
t < e∗ (t) < t∗ .
Next we seek to show that equilibrium effort is increasing in t when t < t∗ . To prove this, consider
two thresholds t1 and t2 such that t1 < t2 < t∗ . It suffices to show that e(t1 ) < e(t2 ) in equilibrium. To
see that this holds, note that if e(t2 ) is the equilibrium level of effort for a given t2 , then it must be the
case that equation (9) is satisfied with equality when t = t2 and e = e(t2 ). Now if e(t1 ) ≥ e(t2 ), then it
must be the case that the left-hand side of equation (9) is lower when t = t1 and e = e(t1 ) than when
t = t2 and e = e(t2 ) since the derivative of the left-hand side of this equation with respect to t − e is
positive when t − e < 0. And it must also be the case that the right-hand side of equation (9) is no
smaller when e = e(t1 ) than when e = e(t2 ) since the right-hand side of this equation is increasing in e
(recall that c is convex). But this would imply that the left-hand side of equation (9) is lower than the
right-hand side of this equation when t = t1 and e = e(t1 ), contradicting the possibility that e(t1 ) is an
equilibrium level of effort. Thus e(t1 ) ≥ e(t2 ) cannot hold and e(t1 ) < e(t2 ) must hold in equilibrium.
Next we show that if t > t∗ , then equilibrium effort satisfies e∗ (t) < t∗ < t. To see this, note
that the definition of t∗ is that t∗ is the threshold that results in the highest equilibrium effort. Thus
e∗(t) < e∗(t∗) for any t ≠ t∗. Since e∗(t∗) = t∗ from Corollary 5.1, it then follows that e∗(t) < t∗ < t.
Finally, we show that if t > t∗ , then equilibrium effort is decreasing in t. To see this, consider some
thresholds t1 and t2 satisfying t∗ < t1 < t2 . Note that when t = t1 and e = e(t1 ), the equilibrium
condition equation (9) is satisfied. Now t1 > e(t1 ), so t1 − e(t1 ) > 0, and we know that the derivative
of the left-hand side of equation (9) with respect to t is negative. Thus when t = t2 and e = e(t1 ), the
left-hand side of equation (9) is less than the right-hand side of equation (9). But when t = t2 and
e = 0, then the left-hand side of equation (9) is greater than the right-hand side of equation (9) (since
c′(0) = 0). By the intermediate value theorem, it then follows that there exists some e(t2) ∈ (0, e(t1))
such that equation (9) is satisfied with equality when t = t2 and e = e(t2 ). Thus t∗ < t1 < t2 implies
e(t2 ) < e(t1 ), meaning that if t > t∗ , then equilibrium effort is decreasing in t. ¤
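The single-peaked shape of equilibrium effort established in Theorem 5.2 (increasing in t below t∗, decreasing above) can be seen numerically by solving equation (9) for e at a grid of thresholds. The sketch below is illustrative only; it assumes standard normal noise, v(A) = A, a quadratic cost c(e) = (COST/2)e², and a hypothetical prize vector with k = 2 positive prizes among n = 4 agents.

```python
# Illustrative sweep of equilibrium effort e*(t) against the threshold t,
# using the characterization in equation (9).  All parameters are hypothetical.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import binom, norm

COST, N = 5.0, 4
DIFFS = [0.5, 0.5]  # v(A_j) - v(A_{j+1}) for the hypothetical prizes (1.0, 0.5)


def G(j, x):
    """Probability that no more than j-1 of the n-1 rival noise draws exceed x."""
    return binom.cdf(j - 1, N - 1, 1 - norm.cdf(x))


def g(j, x):
    """Density of the j-th largest rival noise draw (the derivative of G)."""
    return ((N - 1) * comb(N - 2, j - 1) * norm.cdf(x) ** (N - 1 - j)
            * (1 - norm.cdf(x)) ** (j - 1) * norm.pdf(x))


def marginal_benefit(e, t):
    """Left-hand side of equation (9)."""
    total = 0.0
    for j, d in enumerate(DIFFS, start=1):
        tail, _ = quad(lambda eps: g(j, eps) * norm.pdf(eps), t - e, np.inf)
        total += d * (G(j, t - e) * norm.pdf(t - e) + tail)
    return total


def equilibrium_effort(t):
    # Solve marginal benefit = c'(e) = COST * e for e.
    return brentq(lambda e: COST * e - marginal_benefit(e, t), 0.0, 2.0)


for t in [-1.0, -0.5, 0.0, 0.1, 0.2, 0.4, 0.8]:
    print(f"t = {t:+.1f}  ->  e*(t) = {equilibrium_effort(t):.3f}")
```

Under these assumptions the printed effort levels rise and then fall as t increases, peaking where the threshold roughly matches the effort it induces.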
Proof of Theorem 5.3: Recall from equation (9) that if Gj(q) denotes the probability that no more than j − 1 of the n − 1 values of εk are greater than q, then the following equality must hold for the equilibrium effort e when the principal uses the threshold t:
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr)\biggl[G_j(t - e) f(t - e) + \int_{t - e}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e).
\]
From Corollary 5.1, the equilibrium effort e(t) at the optimal threshold t = t∗ will satisfy t∗ = e(t∗ ),
so that t − e = 0 in the equation above. Substituting, we have that equilibrium effort in the optimal
threshold mechanism is the solution to
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr)\biggl[G_j(0) f(0) + \int_{0}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e). \tag{10}
\]
Now, Gj(ε) is the probability that no more than j − 1 of the n − 1 random draws of εk are greater than ε, and dGj(ε)/dε represents the density corresponding to this distribution. Write Gj(ε; n) to denote the dependence of Gj on n. Then the distribution Gj(ε; n′) first-order stochastically dominates Gj(ε; n) for all n′ > n. Further, f(ε) is non-increasing in ε for all ε ≥ 0, since we assumed that f(·) is single-peaked at 0. From this it follows that increasing n decreases the value of the expression
\[
G_j(0) f(0) + \int_{0}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon
\]
for all j, since Gj(0; n) is smaller for larger n, and the second term is the expected value of the non-increasing function f(ε) with respect to the density dGj(ε; n)/dε, which also decreases with increasing n (by stochastic dominance). Therefore, for equality to hold in equation (10), the equilibrium effort e(n) must be such that c′(e(n)) also decreases with n, i.e., e(n) decreases with n. Since the equilibrium effort in the optimal threshold mechanism decreases with the number of players, and the optimal threshold equals equilibrium effort in the optimal threshold mechanism by Corollary 5.1, the optimal threshold t∗(n) is also decreasing in n. ¤
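Theorem 5.3 can be checked numerically from equation (10): with a quadratic cost, c′(e) = COST · e, so the equilibrium effort, and hence the optimal threshold t∗(n) = e∗(n), is obtained by dividing the left-hand side of (10) by COST. The sketch below is illustrative only and assumes standard normal noise and v(A) = A; the prize differences in DIFFS are hypothetical.

```python
# Illustrative check that the optimal threshold t*(n) = e*(n) from equation (10)
# falls as the number of contestants n grows, as stated in Theorem 5.3.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.stats import binom, norm

COST = 5.0
DIFFS = [0.5, 0.5]  # v(A_j) - v(A_{j+1}) for the hypothetical prizes (1.0, 0.5)


def G(j, x, n):
    return binom.cdf(j - 1, n - 1, 1 - norm.cdf(x))


def g(j, x, n):
    return ((n - 1) * comb(n - 2, j - 1) * norm.cdf(x) ** (n - 1 - j)
            * (1 - norm.cdf(x)) ** (j - 1) * norm.pdf(x))


def optimal_threshold(n):
    lhs = 0.0
    for j, d in enumerate(DIFFS, start=1):
        tail, _ = quad(lambda eps: g(j, eps, n) * norm.pdf(eps), 0.0, np.inf)
        lhs += d * (G(j, 0.0, n) * norm.pdf(0.0) + tail)
    return lhs / COST  # with c'(e) = COST * e and t* = e* by Corollary 5.1


for n in [3, 4, 6, 10, 20]:
    print(f"n = {n:2d}  ->  t*(n) = {optimal_threshold(n):.3f}")
```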
Proof of Theorem 5.4: We know from equation (10) in the proof of the previous theorem that if Gj(q) denotes the probability that no more than j − 1 of the n − 1 other values of εk are greater than q, then the following condition must be satisfied by the equilibrium level of effort e in any pure strategy equilibrium of the optimal threshold mechanism:
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr)\biggl[G_j(0) f(0) + \int_{0}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e).
\]
Substituting in the fact that Aj = Ak for j ≤ k and Aj = 0 for j > k, it then follows that the
equilibrium level of effort e satisfies the following equation in any pure strategy equilibrium:
\[
v(A_k)\biggl[G_k(0) f(0) + \int_{0}^{\infty} \frac{dG_k(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e),
\]
which can be rewritten as
\[
v(A_k)\, E_{\epsilon \sim G_k}\bigl[f(\max\{0, \epsilon\})\bigr] = c'(e). \tag{11}
\]
Now if one increases the number of prizes by awarding additional prizes that are the same as those originally awarded to agents who finished in the top k and met the threshold, then v(Ak) is independent of k. And if one increases the number of prizes by splitting the same total prize pool amongst a larger number of players, then in the limit as the coefficient of absolute risk aversion on v(·) becomes arbitrarily large, v(Ak)/v(A) becomes arbitrarily close to 1, and v(Ak) approaches a function that is also independent of k. We also know that Gj first-order stochastically dominates Gk for all j < k, since Gk(ε) is the distribution corresponding to the probability that no more than k − 1 of the n − 1 values of εj are greater than ε. And we know that f(max{0, ε}) is non-increasing in ε, and strictly decreasing in ε for all ε > 0. By combining these facts, it follows that E_{ε∼Gk}[f(max{0, ε})] is increasing in k. Thus under either of the conditions of the theorem, it must be the case that v(Ak) E_{ε∼Gk}[f(max{0, ε})] is increasing in k. By combining this fact with equation (11), it then follows that under either of these conditions, equilibrium effort in the optimal threshold mechanism is increasing in k. Since the optimal threshold is equal to equilibrium effort in the optimal threshold mechanism, it then follows that the optimal threshold is also increasing in k under the conditions of the theorem. ¤
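The monotonicity in k of the term E_{ε∼Gk}[f(max{0, ε})] appearing in equation (11) can likewise be verified numerically. The sketch below is illustrative only; it assumes standard normal noise and a hypothetical value of n.

```python
# Illustrative check that E_{eps ~ G_k}[f(max{0, eps})] from equation (11)
# increases with k, as claimed in the proof of Theorem 5.4.
import numpy as np
from math import comb
from scipy.integrate import quad
from scipy.stats import binom, norm

N = 8  # hypothetical number of contestants


def g(k, x):
    """Density of the k-th largest of the n-1 rival noise draws."""
    return ((N - 1) * comb(N - 2, k - 1) * norm.cdf(x) ** (N - 1 - k)
            * (1 - norm.cdf(x)) ** (k - 1) * norm.pdf(x))


def expected_f_of_max(k):
    """E_{eps ~ G_k}[f(max{0, eps})] with F standard normal."""
    mass_below_zero = binom.cdf(k - 1, N - 1, 0.5)  # G_k(0), since F(0) = 1/2
    tail, _ = quad(lambda eps: g(k, eps) * norm.pdf(eps), 0.0, np.inf)
    return mass_below_zero * norm.pdf(0.0) + tail


for k in range(1, N):
    print(f"k = {k}  ->  E[f(max(0, eps))] = {expected_f_of_max(k):.4f}")
```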
A.5 Endogenous Entry
Proof of Theorem 6.1: We know from our results on exogenous entry in Theorem 3.1 that when
exactly k agents participate in a threshold mechanism with rank-order rewards (A1 , A2 , . . . , An ) and
threshold t, there is a unique pure strategy equilibrium in which all these k agents exert the same level
of effort e∗ (t, k) (under appropriate conditions on c(·)). Since each agent observes the number of other
participants prior to choosing her effort level, we will assume henceforth that if k agents participate
(for any 0 ≤ k ≤ n), each of these agents chooses this equilibrium level of effort e∗ (t, k).
Now let k ∗ = k ∗ (t) denote the largest non-negative integer less than or equal to n such that if
exactly k ∗ agents participate with effort e∗ (t, k ∗ ), then no participating agent can profitably deviate
by not participating in the contest. Some such k ∗ exists because if k ∗ = 0, then this condition is
vacuously satisfied. In this case, it is an equilibrium for exactly k ∗ agents to participate in this contest,
for the following reason. When k ∗ agents participate, then no participant can profitably deviate by not
participating in the contest, by the definition of k ∗ . Also, no non-participating agent can profitably
deviate by participating in the contest—if an additional agent participated in the contest, then there
would be exactly k ∗ + 1 agents participating, and each of these agents would choose effort e∗ (t, k ∗ + 1).
But by the definition of k∗, some participating agent could then profitably deviate by not participating, and since all agents are symmetric, every one of these k∗ + 1 agents, including the agent who just entered, would prefer not to participate, meaning it was never profitable for this non-participating agent to deviate in the first place. Thus there exists an equilibrium in which exactly k∗ agents participate
and all of these agents exert the same level of effort upon participating. ¤
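The construction of k∗ in the proof above is essentially algorithmic: scan k = 1, . . . , n and keep the largest k at which no participant prefers to drop out. The sketch below is illustrative only; it assumes that a non-participant earns a payoff of zero (an assumption made here for illustration, not taken from the paper's entry model) and takes the participants' equilibrium payoff as a user-supplied function. All names, including participant_utility, are invented.

```python
# Illustrative computation of the equilibrium participation level k*.
# Assumption (not from the paper): a non-participant's payoff is zero, so a
# participant prefers to stay in whenever her equilibrium payoff is >= 0.
def equilibrium_participation(n, participant_utility):
    """Largest k <= n at which no participant prefers to drop out; k = 0 is
    vacuously such a level, so the answer is always well defined."""
    k_star = 0
    for k in range(1, n + 1):
        if participant_utility(k) >= 0.0:
            k_star = k
    return k_star


# Toy example: each participant's equilibrium payoff shrinks as more agents
# enter (here 1/(2k) minus a hypothetical entry cost of 0.1), so only the
# first five entrants break even.
print(equilibrium_participation(10, lambda k: 1.0 / (2 * k) - 0.1))  # prints 5
```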
Proof of Theorem 6.2: If setting the threshold to t∗ (k) would not create an incentive for one of the k
participating agents not to participate, then we know from our results on exogenous entry that such a
threshold would be optimal. If not, then some participating agent has an incentive not to participate at
threshold t∗ (k). But we have seen in the proof of Theorem 5.2 that for any fixed level of participation,
equilibrium effort e∗ (t) is increasing in the threshold t for all thresholds t ≤ t∗ (k). From this it follows
that if the principal wants to maximize equilibrium effort subject to the constraint that exactly k
agents participate, and there is no equilibrium in which k agents participate at threshold t∗ (k), then
the maximum equilibrium effort with k participants occurs at the largest threshold less than t∗(k) at which there is an equilibrium in which exactly k agents participate. ¤
Proof of Theorem 6.3: To prove this result, we first show that for any fixed level of participation k, the expected utility of the agents in equilibrium is decreasing in the principal's choice of threshold for values of the threshold below t∗, the threshold at which equilibrium effort and the threshold are equal.
To see this, recall from equation (9) that the equilibrium level of effort e for a given threshold mechanism
with threshold t and prizes A1 , . . . , Ak is given by the following equation:
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr)\biggl[G_j(t - e) f(t - e) + \int_{t - e}^{\infty} \frac{dG_j(\epsilon)}{d\epsilon} f(\epsilon)\, d\epsilon\biggr] = c'(e), \tag{12}
\]
where Gj(q) denotes the probability that no more than j − 1 of the n − 1 values of εk are greater than q. Now the derivative of the left-hand side of this equation with respect to t − e is
\[
\sum_{j=1}^{k} \bigl(v(A_j) - v(A_{j+1})\bigr) G_j(t - e) f'(t - e),
\]
which is positive when t − e < 0. Now note that when t < t∗, both e∗(t) and t − e∗(t) are increasing in t, for the following reason. We have already seen in Theorem 5.2 that when t < t∗ we have both e∗(t) > t and that e∗(t) is increasing in t. Thus to prove this claim, we only need to show that t − e∗(t) is increasing in t when t < t∗.
To see this, consider two thresholds t1 and t2 satisfying t1 < t2 < t∗. Let e∗(t) denote the equilibrium level of effort when the mechanism designer uses a threshold t, and suppose by means of contradiction that t1 − e∗(t1) ≥ t2 − e∗(t2); note that, together with t1 < t2, this hypothesis implies e∗(t1) < e∗(t2). Since we have seen that the left-hand side of equation (12) is increasing in t − e when t − e < 0, it follows that the left-hand side of equation (12) is at least as large when t = t1 and e = e∗(t1) as when t = t2 and e = e∗(t2). But since the right-hand side of equation (12) is increasing in e, we also know that the right-hand side of equation (12) is lower when e = e∗(t1) than when e = e∗(t2). This implies that (12) cannot be simultaneously satisfied at t = t1 and e = e∗(t1) as well as at t = t2 and e = e∗(t2). This contradicts our assumption that e∗(t1) and e∗(t2) are equilibrium levels of effort corresponding to the thresholds t1 and t2, and illustrates that t1 − e∗(t1) ≥ t2 − e∗(t2) cannot hold. Thus t − e∗(t) is increasing in t when t < t∗.
But this implies that both the equilibrium level of effort is increasing in t for t ≤ t∗ and the
probability that the agents will meet the threshold in equilibrium is decreasing in t for t ≤ t∗ (since
this probability is decreasing in t − e∗ (t)). Thus the expected utility of the agents is decreasing in the
principal’s choice of threshold for thresholds t ≤ t∗ (since the expected value of an agent’s prize is an
increasing function of the agent’s probability of meeting the threshold in equilibrium). Now we know
that if there exists an equilibrium in which exactly k1 agents participate when the threshold is t1 , then
these k1 agents all obtain non-negative utility in equilibrium when the threshold is t1 . And from the
previous result in this paragraph, it follows that if the threshold is some t2 ≤ t1 , then all k1 of these
agents would obtain non-negative utility in equilibrium if exactly k1 agents participated. Thus if it is
not an equilibrium for exactly k1 agents to participate when the threshold is t2 , then it must be the
case that some non-participating agent can profitably deviate by participating when exactly k1 agents
participate. If we then let k2 > k1 denote the largest integer such that no non-participating agent can
profitably deviate by participating when the threshold is t2 and exactly k2 agents participate, then it
is an equilibrium for exactly k2 agents to participate when the threshold is t2 . The result then follows.
¤