Although only about 50 years old, the software domain has already endured one well-documented crisis, identified early in its evolution in the 1960s. Simply summarised, this initial software crisis (Software Crisis 1.0, as it is termed here) referred to the fact that software took longer to develop than estimated, cost more to develop than estimated, and did not work very well when eventually delivered. Many studies over the years have confirmed the persistence of Software Crisis 1.0. Nevertheless, software has been one of the key success stories of the past 50 years and has truly revolutionised modern life.

Over the same period, however, there have been enormous advances in hardware capability: dramatic reductions in hardware costs allied to dramatic increases in processing power, and a proliferation of devices. Vast amounts of data are now available through ubiquitous sensors and through applications such as Google. Complementing these ‘push’ factors is a significant ‘pull’ factor: the emergence of ‘digital native’ technology consumers who have never known life without technology. The opportunities for individuals, business and society afforded by these hardware advances and the data potentially available, allied to the insatiable appetite of digital natives, are truly enormous.

Unfortunately, there have been no comparable advances in our software development capability, and thus the critical limiting factor in realising this potential is again software: the Software Crisis 2.0, as I label it here. Individual efforts (data analytics, parallel processing, new development methods, cloud services) are seeking to address this crisis, but they are disjointed and unlikely to deliver the software development capacity needed.