DATA VALIDATION & CLEANING PRACTICES

Errors can occur despite careful study design, conduct, and error-prevention strategies. Research For Good’s best practices, outlined below, identify and correct errors and minimize their impact on study results.

Soft Launch Data Check…

… is performed upon collecting 10-15% of the total sample size. It ensures that the survey logic flow follows the questionnaire document. We also check for any abnormalities in the data that might have been caused by a browser that does not support certain software requirements (Java version, etc.).
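A logic-flow check of this kind can be automated. The sketch below is a minimal illustration, assuming hypothetical question IDs (Q1, Q2) and a simple skip rule; real questionnaires would encode many such rules.

```python
# Minimal soft-launch logic check: respondents who answered "No" to
# Q1 should have skipped Q2. Question IDs and the rule are assumptions
# for illustration, not part of any specific questionnaire.

def flag_logic_violations(records):
    """Return ids of respondents whose answers violate the Q1 -> Q2 skip rule."""
    flagged = []
    for r in records:
        if r.get("Q1") == "No" and r.get("Q2") is not None:
            flagged.append(r["id"])
    return flagged

sample = [
    {"id": 1, "Q1": "Yes", "Q2": "Brand A"},
    {"id": 2, "Q1": "No", "Q2": None},       # correctly skipped Q2
    {"id": 3, "Q1": "No", "Q2": "Brand B"},  # saw Q2 despite answering "No"
]
print(flag_logic_violations(sample))  # [3]
```

Running such checks on the first 10-15% of completes catches programming errors before the bulk of the sample is collected.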

Checking of Open Ends Data…

… screens for “gibberish” responses, numbers, and any other responses that are not meaningful. We remove these records from the final data set.
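A simple heuristic filter can catch most of these cases. The sketch below uses illustrative rules and thresholds (minimum length, numeric-only, single repeated character, no vowels); production cleaning would tune or extend these.

```python
import re

# Heuristic open-end cleaner. The rules and thresholds here are
# illustrative assumptions, not a definitive gibberish detector.

def is_gibberish(text):
    t = text.strip().lower()
    if len(t) < 3:                    # too short to be meaningful
        return True
    if t.isdigit():                   # numbers typed into a text box
        return True
    if len(set(t)) == 1:              # e.g. "aaaaaa"
        return True
    if not re.search(r"[aeiou]", t):  # keyboard mash, e.g. "sdfgh"
        return True
    return False

responses = ["sdfgh", "12345", "The packaging feels cheap", "aaa"]
clean = [r for r in responses if not is_gibberish(r)]
print(clean)  # ['The packaging feels cheap']
```

Flagged records would be reviewed or dropped before the final data set is delivered.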

Straight Liner Checks…

… are made across grid questions to detect “straight liners”. Depending on the client’s requirements, we can flag respondents when 100% or a lower share (say 80%) of the rows in a grid question have been answered with the same scale value.

Digital Fingerprint Security Checks…

… examine respondent attributes such as IP address, browser type and version, and screen resolution to confirm that each respondent is a unique, genuine user.
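One common use of such a fingerprint is duplicate detection: hash the device attributes and flag any respondent whose fingerprint has already been seen. The attribute set and hashing scheme below are illustrative assumptions.

```python
import hashlib

# Device-fingerprint duplicate check (sketch). Real fingerprinting
# typically combines many more signals than the three assumed here.

def fingerprint(ip, browser, screen):
    """Hash a few device attributes into a stable fingerprint string."""
    raw = f"{ip}|{browser}|{screen}".encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

seen = set()

def is_duplicate(ip, browser, screen):
    """Return True if this device fingerprint was already recorded."""
    fp = fingerprint(ip, browser, screen)
    if fp in seen:
        return True
    seen.add(fp)
    return False

print(is_duplicate("198.51.100.7", "Chrome 124", "1920x1080"))  # False: first visit
print(is_duplicate("198.51.100.7", "Chrome 124", "1920x1080"))  # True: same device again
```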

Speedster Checks…

… compare each individual length of interview (LOI) against the median LOI and flag respondents who completed the survey under the pre-defined minimum time.
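A minimal sketch of this comparison, assuming the cutoff is defined as a fraction of the median LOI (one third here, purely as an illustrative choice):

```python
import statistics

# Speedster check: flag respondents whose LOI falls below a cutoff
# derived from the median. The 1/3 fraction is an assumed example of
# a pre-defined minimum, not a fixed standard.

def flag_speedsters(lois, fraction=1 / 3):
    """lois: dict of respondent id -> completion time in seconds."""
    median_loi = statistics.median(lois.values())
    cutoff = median_loi * fraction
    return [rid for rid, loi in lois.items() if loi < cutoff]

lois = {"r1": 620, "r2": 540, "r3": 95, "r4": 610}  # seconds
print(flag_speedsters(lois))  # ['r3']
```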

Trap Questions…

… are used to assess the quality of respondents’ answers. These can be programmed so that respondents are flagged immediately upon completion or at the final data check.
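Trap questions have a single known correct answer (e.g. “Select ‘Strongly agree’ for this row”), so flagging reduces to a lookup. The trap IDs and answers below are hypothetical.

```python
# Trap-question flagging sketch. TRAPS maps each hypothetical trap
# question id to its required answer; respondents who miss any trap
# can be flagged in real time or during the final data check.

TRAPS = {"T1": "Strongly agree", "T2": "Blue"}

def failed_traps(answers):
    """Return the trap question ids this respondent answered incorrectly."""
    return [q for q, expected in TRAPS.items() if answers.get(q) != expected]

respondent = {"T1": "Strongly agree", "T2": "Red", "Q5": "Brand A"}
print(failed_traps(respondent))  # ['T2']
```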

The Final Data Check…

… is always performed at the end of fieldwork. It follows the same steps as the Soft Launch Data Check, applied to the full data set.