Crowdsourcing is emerging as an alternative outsourcing strategy that is gaining increasing attention in the software engineering community. However, crowdsourced software development involves complex tasks that differ significantly from the micro-tasks found on crowdsourcing platforms such as Amazon Mechanical Turk, which are much shorter in duration, typically very simple, and involve no task interdependencies. To achieve the potential benefits of crowdsourcing in the software development context, companies need to understand how this strategy works and what factors might affect crowd participation. We
present a multi-method qualitative and quantitative theory-building research study. Firstly, we derive a set of key concerns from the
crowdsourcing literature as an initial analytical framework for an exploratory case study in a Fortune 500 company. We complement the
case study findings with an analysis of 13,602 crowdsourcing competitions run over a ten-year period on the popular Topcoder crowdsourcing platform. Drawing on our empirical findings and the crowdsourcing literature, we propose a theoretical model of crowd interest and actual participation in crowdsourcing competitions. We evaluate this model using Structural Equation Modeling. Among the findings is that neither the prize level nor the duration of a competition significantly increases crowd interest.
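To make the evaluation step concrete, the following is a minimal sketch of how a structural model relating prize, duration, crowd interest, and participation could be specified and fitted in Python with the semopy package. The variable names, synthetic data, and tooling are assumptions introduced purely for illustration; they are not the paper's actual constructs, measures, model, or results.

```python
# Minimal SEM sketch (illustrative only; invented column names and synthetic
# data, not the study's actual model or dataset).
import numpy as np
import pandas as pd
from semopy import Model

rng = np.random.default_rng(0)
n = 500
prize = rng.normal(size=n)
duration = rng.normal(size=n)
# Hypothetical data-generating process: weak effects of prize and duration
# on interest, and interest driving actual participation.
interest = 0.1 * prize + 0.05 * duration + rng.normal(size=n)
participation = 0.8 * interest + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"prize": prize, "duration": duration,
                     "interest": interest, "participation": participation})

# Structural part: interest regressed on prize and duration;
# participation regressed on interest.
desc = """
interest ~ prize + duration
participation ~ interest
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```

Under this kind of specification, "evaluating the model" amounts to checking whether the estimated paths (e.g., prize to interest, duration to interest) are statistically significant and whether overall model fit is acceptable.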