Forecasting using judgment is common in practice. From planning a project to investing in a house, we routinely make important decisions based on how we expect the future to unfold. In much the same way, governments and businesses rely on forecasts of political and economic events to plan appropriately for future challenges and opportunities. The cybersecurity industry is no different: each year, many security organizations release ``annual predictions'' of anticipated cybersecurity issues. The premise behind such predictions is that experts, by virtue of their knowledge and experience, are better positioned to anticipate future cybersecurity-related events. This premise, however, remains untested and motivates this research.
This master's thesis examines the primary question of whether security professionals' predictions about future cybersecurity-related questions are systematically distinct from those of other Information Technology professionals who lack the same specialized experience. In particular, this research examines the results of 20 security-related surveys. The surveys vary in question category and format, permitting analysis across a variety of forecasting question types. Using a combination of measurement techniques, I determine whether any significant patterns distinguish security professionals from non-security professionals. Additionally, I analyze two cohorts within the security professionals to determine whether there are measurable differences between them with regard to judgment forecasts. From this study, I conclude that the data do not support the claim that security professionals offer distinct judgment relative to non-security professionals.