5C.1 NSSL'S PROTOTYPE ENHANCED SEVERE THUNDERSTORM DATABASE

Kevin Scharfenberg*, Travis Smith, Carey Legett, Kevin Manross, Kiel Ortega, and Angelyn Kolodziej

Cooperative Institute for Mesoscale Meteorological Studies, The University of Oklahoma, and NOAA/National Severe Storms Laboratory, Norman, Oklahoma

1. INTRODUCTION

With the transition to "storm-based" warnings in the United States National Weather Service (NWS) (Ferree et al. 2006), severe local storm forecasts are being issued at increasing temporal and spatial precision. The imminent introduction of new model guidance, new observation platforms, and enhanced applications will allow forecasters to further increase that precision and to introduce uncertainty information (i.e., probabilities) into their severe weather warnings. Verification techniques and the associated severe thunderstorm database, however, remain relatively low-resolution. The existing database also remains largely analog and text-based, despite the increased availability of multimedia and digital resources.

To address this emerging gap between the validation data set and modern warning techniques and applications, the National Severe Storms Laboratory (NSSL) conducted the Severe Hazards Analysis and Verification Experiment (SHAVE) in 2006-2007. Through the use of emerging internet applications, SHAVE-like data sets may be combined with multimedia resources (e.g., photographs of damage or video of the severe weather) and Geographic Information Systems (GIS) data sets to form a comprehensive digital database of a severe weather event at much higher resolution than previously available. NSSL is currently developing a prototype internet-based collaborative portal to manage and host this information.

This paper describes the most effective methods found during SHAVE for creating a dense verification data set, and describes the effort at NSSL to create a database to store such enhanced data sets.
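To make the "comprehensive digital database" concept concrete, the sketch below shows one possible shape for a single database record that ties a point report to GIS coordinates and multimedia resources. This is purely illustrative: the field names, types, and the `SevereWeatherReport` class itself are assumptions for the sketch, not NSSL's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class SevereWeatherReport:
    """Hypothetical record in an enhanced severe-weather database (illustrative only)."""
    event_time: datetime                 # UTC time of the observed hazard
    latitude: float                      # decimal degrees
    longitude: float                     # decimal degrees
    hazard_type: str                     # e.g. "hail", "wind", "tornado"
    magnitude: Optional[float] = None    # e.g. hail diameter (in) or gust speed (kt)
    source: str = "phone_survey"         # e.g. "phone_survey", "field_survey"
    media_urls: List[str] = field(default_factory=list)  # damage photos/video

# Example: a single hail report with an attached damage photo
report = SevereWeatherReport(
    event_time=datetime(2007, 5, 4, 21, 30, tzinfo=timezone.utc),
    latitude=37.18,
    longitude=-98.98,
    hazard_type="hail",
    magnitude=1.75,
    media_urls=["https://example.org/damage_photo.jpg"],
)
```

Storing reports as structured, geo-referenced records like this (rather than free text) is what would allow swath-scale analysis and GIS overlay of the kind described below.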
* Corresponding author address: Kevin Scharfenberg, NSSL/WRDD, 120 David L. Boren Blvd., Norman, OK, 73072.

2. INCREASING REPORT DENSITY: LESSONS LEARNED DURING SHAVE 2006-2007

To verify their success, efforts to increase the temporal and spatial precision of severe local storm warnings must be accompanied by a corresponding increase in the density (both temporal and spatial) of the validation data set. The SHAVE experiment was conducted in 2006-2007 to address this need for increased verification data density (Smith et al. 2007).

The SHAVE project identified a number of methods for increasing the density of hail and wind reports, enough to analyze "swaths" of hail and wind damage. The most important principle is to have multiple methods for locating phone numbers in sparsely populated areas. SHAVE used rural county telephone directories and geo-located databases of residences, businesses, and property tax-payer records. After identifying a storm of interest, SHAVE students found nearby phone numbers and monitored the nearest radar, calling those numbers immediately after the thunderstorm passed. This required the real-time overlay of radar information on top of mapping software.

According to the SHAVE data collection team, reports of hail occurrence and size were easier to obtain by phone than information about wind events. This is because the general public tends to express less confidence in wind gust estimates, and because communication disruptions frequently occurred in the vicinity of major wind events. Students collecting data during SHAVE also found that rural residents were more likely than urban residents to be observant of severe weather conditions (i.e., the time and magnitude of the event).

The robustness of the SHAVE hail data set is described in Ortega et al. (2006). Equally dense data sets for damaging wind and tornado events were attempted during the 2007 collection period. The SHAVE team found that field surveys are the most effective in compiling a complete wind