On Device Research Ltd

ESOMAR 28

  1. What experience does your company have in providing online samples for market research?
    1. On Device Research Limited is a research company that has been operating since July 2010, providing end-to-end service from questionnaire design and sampling to analysis and reporting. We have operated with a particular focus on reaching consumers via mobile devices and have conducted online fieldwork in over 90 countries.
  2. Please describe and explain the type(s) of online sample sources from which you get respondents. Are these databases? Actively managed research panels? Direct marketing lists? Social networks? Web intercept (also known as river) samples?
    1. We source respondents both from our actively managed panels and from dynamic, or 'river', sampling, where recruitment occurs via ad banners and videos on various websites and apps.
    2. All members of our research panels have downloaded our 'Curious Cat App' to their mobile devices. The 'Curious Cat App' allows us to serve our own surveys as 'tasks' to be completed in a browser, and it also operates as a supply-side platform for surveys from other 'exchanges'.
    3. Where dynamic sampling is employed, respondents are invited at the end of the survey to join our research panels.
  3. If you provide samples from more than one source: How are the different sample sources blended together to ensure validity? How can this be replicated over time to provide reliability? How do you deal with the possibility of duplication of respondents across sources?
    1. We may blend sample sourced from our panels with dynamically sourced sample in order to meet particular quota specifications.
    2. Where a study comprises multiple waves and any blending is employed, we keep sample composition as consistent as possible across all waves, as part of sample design and planning throughout the life of the project.
    3. We implement proprietary controls to identify potential duplication between sources, as well as widely accepted industry technologies that gather numerous data points from a respondent's device and pass them through deterministic algorithms to create a digital fingerprint.
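    4. As an illustration, the sketch below (in Python) shows the general idea of deterministic fingerprinting: a stable hash over a canonicalised set of device data points, so the same device always yields the same identifier. The attribute names and the choice of SHA-256 are assumptions for illustration, not a description of our production systems.

       import hashlib

       def device_fingerprint(attributes: dict) -> str:
           # Illustrative only: hash a sorted set of device data points
           # into a stable identifier usable for duplicate detection.
           canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
           return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

       # Hypothetical data points; real systems gather many more.
       fp_a = device_fingerprint({"os": "Android 14", "model": "Pixel 8",
                                  "screen": "1080x2400", "tz": "Europe/London"})
       fp_b = device_fingerprint({"model": "Pixel 8", "os": "Android 14",
                                  "tz": "Europe/London", "screen": "1080x2400"})
       assert fp_a == fp_b  # deterministic: same device data, same fingerprint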
  4. Are your sample source(s) used solely for market research? If not, what other purposes are they used for?
    1. Panelists are recruited exclusively for market research purposes. We never share individual details with any other party.
  5. How do you source groups that may be hard-to-reach on the internet?
    1. Because we primarily ask respondents to complete surveys on their mobile devices, our potential to reach audiences is significantly greater than that of panels and sources historically built for survey completion on desktop or laptop computers. Consumers are increasingly unlikely to own larger, non-mobile devices, which is especially pertinent for certain markets and demographic groups.
    2. As we are market leaders in rendering surveys with the lightest page load on mobile browsers, publishers and ad tech companies are willing to work with us on recruiting panelists and respondents in a way that they may not be with less mobile-centric research companies.
    3. Furthermore, we may blend sample sourced from our panels with dynamically sourced sample.
  6. If, on a particular project, you need to supplement your sample(s) with sample(s) from other providers, how do you select those partners? Is it your policy to notify a client in advance when using a third party provider?
    1. The size, responsiveness and representativeness of our research panels mean that there is rarely a requirement to employ external sample providers.
    2. External sample providers would be selected based on their reach, the client's specifications and the budget.
    3. Any external sample providers are carefully vetted and approved beforehand; within any given project, the quality of their respondents' answers is also closely monitored for speeding, straight-lining and poor open-ended responses.
    4. The client would typically be notified, both in the spirit of transparency and, pragmatically, because budgets may be affected.
  7. What steps do you take to achieve a representative sample of the target population?
    1. The user acquisition strategy employed to build our 'Curious Cat App' panels is very much focused on a broad range of recruitment campaigns to attain the most diverse demographic and behavioural representation possible.
    2. This requirement for representation can be further supported in the design of any supplementary dynamic sampling strategy.
  8. Do you employ a survey router?
    1. No.
  9. If you use a router: Please describe the allocation process within your router. How do you decide which surveys might be considered for a respondent? On what priority basis are respondents allocated to surveys?
    1. N/A
  10. If you use a router: What measures do you take to guard against, or mitigate, any bias arising from employing a router? How do you measure and report any bias?
    1. N/A
  11. If you use a router: Who in your company sets the parameters of the router? Is it a dedicated team or individual project managers?
    1. N/A
  12. What profiling data is held on respondents? How is it done? How does this differ across sample sources? How is it kept up-to-date? If no relevant profiling data is held, how are low incidence projects dealt with?
    1. Upon registering, panelists are invited to tell us more about themselves in a 'profiler' survey, in order to assist with selecting them for future surveys that are relevant to their personal profile. This includes questions on age, gender, region, education level, employment status, occupation, family composition, and role and influence in purchasing decisions.
    2. To increase survey opportunities relevant to their interests and lifestyle, panelists are encouraged to answer questions on a range of topics, including their hobbies, entertainment interests, car ownership and preferences, technology ownership, etc.
    3. Panelists are encouraged to keep their profile data up to date in order to keep sample feasibility and selection optimal.
  13. Please describe your survey invitation process. What is the proposition that people are offered to take part in individual surveys? What information about the project itself is given in the process? Apart from direct invitations to specific surveys (or to a router), what other means of invitation to surveys are respondents exposed to? You should note that not all invitations to participate take the form of emails.
    1. Panelists are notified in the 'Curious Cat App' installed on their mobile device that a survey 'task' is available for them to complete. The initial view of the task shows the number of reward points available for its completion. Once the task is clicked, the opening page of the survey often informs the panelist of the topic of the survey and the expected average duration to complete it.
  14. Please describe the incentives that respondents are offered for taking part in your surveys. How does this differ by sample source, by interview length, by respondent characteristics?
    1. Respondents are incentivised with points, which represent a cash reward that can be redeemed via PayPal upon reaching a redemption threshold set by us.
    2. The number of points is typically driven by the complexity and expected average duration to complete the survey.
    3. The exact number of reward points is clearly stated before the survey is started.
  15. What information about a project do you need in order to give an accurate estimate of feasibility using your own resources?
    1. sample size
    2. respondent profile - demographic/behavioural
    3. incidence rate / screening criteria
    4. quota targets / representation
    5. expected average duration to complete the survey
    6. category / subject matter
  16. Do you measure respondent satisfaction? Is this information made available to clients?
    1. We measure respondent satisfaction in at least three different ways:

      1. Flagged tasks: Every user has the opportunity to provide their feedback by flagging surveys for review and sharing their comments through the app. We also use this data to help clients (Survey Suppliers) understand the performance of their projects and to help identify any technical issues.
      2. Survey Conversion: We measure survey conversion rates and automatically deactivate low-converting tasks. This information is not directly available to clients, but we are happy to help with any requests of this type.
      3. Panel Feedback Survey: We ask members to provide feedback about the overall app experience. This data focuses particularly on the experience of being a user and is not directly relevant for clients. Internally, this allows us to understand overall satisfaction and identify opportunities for improvement.
    2. We also communicate with our panelists through our help desk and Social Media channels to resolve any issues.
  17. What information do you provide to debrief your client after the project has finished?
    1. When clients require such feedback after completion of a project, information can be provided relating to the actual incidence rate, the number of responses, their outcome status (starts, completes, screen-outs, quota-outs) and any apparent causes of drop-off.
  18. Who is responsible for data quality checks? If it is you, do you have in place procedures to reduce or eliminate undesired within survey behaviours, such as (a) random responding, (b) Illogical or inconsistent responding, (c) overuse of item non-response (e.g. “Don’t Know”) or (d) speeding (too rapid survey completion)? Please describe these procedures.
    1. The quality of the data collected from any of the surveys that we host is our responsibility. The main points of quality that we observe are listed below; a combined, illustrative sketch of these checks is given at the end of this answer:

      1. Contradictions. It is standard in all our surveys to ask for age and year of birth at different places in the survey; if these do not reconcile, the survey response is flagged and removed from the data to be delivered. We also flag any other nonsensical or contradictory answers that emerge from the survey design (e.g. a 19-year-old with a 15-year-old child).
      2. Speeding is checked in post-processing: we take the completion times of the middle eight deciles (the middle 80% of the sample) of completed surveys and calculate a mean from them, which excludes the outliers at the extremes from the mean calculation. If a survey is completed in less than a certain fraction of this trimmed mean, we remove the response from the data.

        A number of factors may come into play:

        • routing
        • device type (i.e. some types of devices will be slower than others)
        • connection speed (varies by market and networks)
        • the market in which the survey is fielded (which will have certain connection speeds available, and a certain mix of devices)
      3. Straight-lining. The propensity to give identical consecutive answers more frequently than might be considered believable, across consecutive scale questions and other non-randomised, non-demographic questions. Although we flag this, we also encourage clients to adhere to survey design standards such as avoiding large batteries of scale or drop-down questions, and our proprietary survey authoring platform offers a question type that works as a good alternative to grids.
      4. Poor open-ended answers. We flag nonsensical answers (e.g. 'asdf'), non-contextual answers (e.g. brand answers that bear no relation to the category being asked about), and answers matching a blacklist of swear words in various languages.
    2. If we find that a panelist persists in providing such poor-quality data, they are no longer invited to surveys.
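    3. To make the checks above concrete, below is a minimal, illustrative sketch (in Python) of how such post-processing rules might be applied to completed responses. The field names, the threshold fraction, the straight-lining run length and the sample data are assumptions for illustration, not our production values.

       import statistics

       SPEED_FRACTION = 0.33   # assumed fraction of the trimmed mean
       STRAIGHTLINE_RUN = 8    # assumed run length considered implausible
       BLACKLIST = {"asdf", "qwerty"}  # stand-in for the multilingual list

       def trimmed_mean_duration(durations):
           # Mean of the middle 8 deciles (middle 80%), excluding extremes.
           ordered = sorted(durations)
           k = len(ordered) // 10
           middle = ordered[k:len(ordered) - k] if k else ordered
           return statistics.mean(middle)

       def flag_response(resp, mean_duration, survey_year=2024):
           flags = []
           # 1. Contradictions: stated age vs. year of birth.
           if abs((survey_year - resp["year_of_birth"]) - resp["age"]) > 1:
               flags.append("contradiction")
           # 2. Speeding: completed in under a fraction of the trimmed mean.
           if resp["duration_sec"] < SPEED_FRACTION * mean_duration:
               flags.append("speeding")
           # 3. Straight-lining: a long run of identical consecutive answers.
           run = longest = 1
           answers = resp["scale_answers"]
           for prev, cur in zip(answers, answers[1:]):
               run = run + 1 if cur == prev else 1
               longest = max(longest, run)
           if longest >= STRAIGHTLINE_RUN:
               flags.append("straightlining")
           # 4. Poor open-ends: blacklisted or near-empty nonsense text.
           for text in resp["open_ends"]:
               cleaned = text.strip().lower()
               if cleaned in BLACKLIST or len(set(cleaned)) < 3:
                   flags.append("poor_open_end")
                   break
           return flags

       responses = [
           {"age": 19, "year_of_birth": 2005, "duration_sec": 95,
            "scale_answers": [3, 4, 2, 5, 3, 4],
            "open_ends": ["good value for money"]},
           {"age": 34, "year_of_birth": 1990, "duration_sec": 15,
            "scale_answers": [5] * 10, "open_ends": ["asdf"]},
       ]
       mean_dur = trimmed_mean_duration([r["duration_sec"] for r in responses])
       for r in responses:
           # prints [] then ['speeding', 'straightlining', 'poor_open_end']
           print(flag_response(r, mean_dur))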
  19. How often can the same individual be contacted to take part in a survey within a specified period whether they respond to the contact or not? How does this vary across your sample sources?
    1. We monitor the participation and responsiveness of all panel members. We hold data for each panelist on which surveys they have already taken part in; hence, a panelist can easily be included in or excluded from a subsequent survey with reference to this. We deliberately use this mechanism to ensure that panelists are not invited to too many surveys within a given period, and also to ensure there is sufficient time between surveys on a similar topic or brand. Typically, we allow for one invitation to a survey task, and occasionally a further reminder notification.
  20. How often can the same individual take part in a survey within a specified period? How does this vary across your sample sources? How do you manage this within categories and/or time periods?
    1. Exclusion periods are set on a per-project basis, while we closely monitor the volume and topic of the surveys each panelist takes part in within a given period. For longitudinal/tracker/multi-wave studies, the default exclusion period for each respondent taking part is a minimum of 3 months. Additionally, we implement a quarantine period for any respondents displaying inappropriate behaviour, which limits the functionality of the app and its ability to pay out.
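    2. As a minimal sketch (in Python), assuming a 3-month default window and illustrative field names, a per-project eligibility check along these lines might look like this:

       from datetime import date, timedelta

       DEFAULT_EXCLUSION = timedelta(days=90)  # ~3 months, per the default above

       def is_eligible(last_participation, today,
                       exclusion=DEFAULT_EXCLUSION, quarantined=False):
           # A panelist is eligible for a subsequent wave only if they are
           # outside the exclusion window and not quarantined.
           if quarantined:
               return False
           if last_participation is None:  # never took part in this study
               return True
           return today - last_participation >= exclusion

       print(is_eligible(date(2024, 1, 10), today=date(2024, 2, 1)))  # False: within the window
       print(is_eligible(date(2024, 1, 10), today=date(2024, 5, 1)))  # True: window has elapsed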
  21. Do you maintain individual level data such as recent participation history, date of entry, source, etc., on your survey respondents? Are you able to supply your client with a project analysis of such individual level data?
    1. We maintain all survey participation information at an individual panelist level. This data includes: date of joining panel, last participation date, transaction history, reward points transactions, and source of panelist.
    2. This data informs quality assurance and, if required, can be reported to clients in relation to their projects in an anonymised and/or aggregated fashion. This data is deleted for any panelist who relinquishes their membership.
  22. Do you have a confirmation of respondent identity procedure? Do you have procedures to detect fraudulent respondents? Please describe these procedures as they are implemented at sample source registration and/or at the point of entry to a survey or router. If you offer B2B samples what are the procedures there, if any?
    1. We have a number of data integrity mechanisms to verify that panelists/respondents are real and are who they say they are, and to ensure that we deliver good quality survey results which demonstrate attentiveness.
    2. The quality of survey data is scored for every survey during data processing, with regard to speeding, straight-lining, nonsensical/offensive open-ended answers, and attempts to re-enter once screened out. This helps us to eliminate bad data from delivered survey results, and it also allows us to record, flag and block panelists from future survey participation.
    3. Anti-fraud detection systems are in place at the point of panel registration to detect and flag suspicious activity, and because certain demographic questions are asked by default in every survey, profile data can be checked against survey responses.
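    4. As a minimal sketch (in Python) of the profile-versus-survey cross-check described above, with illustrative field names and an assumed tolerance for age drift between profile updates:

       def profile_matches_survey(profile, answers,
                                  checked_keys=("gender", "region"),
                                  max_age_drift=1):
           # Compare stored profile data against the demographic questions
           # asked by default in every survey; small age drift is allowed
           # because profiles age between updates.
           if abs(profile["age"] - answers["age"]) > max_age_drift:
               return False
           return all(profile[k] == answers[k] for k in checked_keys)

       profile = {"age": 29, "gender": "female", "region": "London"}
       print(profile_matches_survey(profile,
             {"age": 29, "gender": "female", "region": "London"}))  # True
       print(profile_matches_survey(profile,
             {"age": 44, "gender": "male", "region": "Leeds"}))     # False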
  23. Please describe the ‘opt-in for market research’ processes for all your online sample sources.
    1. The panel membership registration process is only completed following:

      • acceptance of our terms and conditions
      • answering of key profiling questions
      • confirmation of consent to join the panel
      • a double opt-in verification of the registrant's mobile device using a validation code
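    2. As a minimal sketch (in Python) of the double opt-in step above, assuming a short numeric validation code with a 10-minute validity window (both illustrative choices; delivery of the code to the device is stubbed out):

       import secrets
       import time

       CODE_TTL_SEC = 600  # assumed 10-minute validity window

       def issue_code():
           # Generate a 6-digit validation code and record when it was issued.
           return f"{secrets.randbelow(10**6):06d}", time.time()

       def verify_code(submitted, issued_code, issued_at):
           # Second opt-in step: the registrant enters the code received on
           # their device, proving they control it.
           if time.time() - issued_at > CODE_TTL_SEC:
               return False  # expired: the registrant must request a new code
           return secrets.compare_digest(submitted, issued_code)

       code, ts = issue_code()
       # ...the code is delivered to the registrant's mobile device...
       print(verify_code(code, code, ts))      # True: correct code in time
       print(verify_code("000000", code, ts))  # almost certainly False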
  24. Please provide a link to your Privacy Policy. How is your Privacy Policy provided to your respondents?
    1. Privacy Policy: https://ondeviceresearch.com/privacyPolicy

      This can be found in the 'Curious Cat App' or via this page on our website: https://ondeviceresearch.com/gdpr

  25. Please describe the measures you take to ensure data protection and data security.
    1. On Device Research implements accepted information security best practices. We maintain a written Information Security Policy, which we can share with anyone who requires it. We have written incident response policies, processes, and procedures. We have an Information Security Officer who leads this function as part of their role.
    2. We ensure data protection by design and by default, through pseudonymisation and data minimisation. By default, any personally identifiable data is purged after 3 months, according to the aforementioned policy. Secure survey links are deployed.
    3. We use Amazon Web Services (AWS) for data storage. This has network security controls associated with it, including AWS VPC (Virtual Private Cloud) for network segregation, and AWS WAF (Web Application Firewalls).
    4. The architecture used to process data and information is made up of the following: load balancer, app server, MessageBroker, Queue, Dynamo, S3, Spark. This also defines the scope of our data loss prevention: Dynamo and S3 are covered, application servers run in triplicate, and RDS runs in a failover pair backed by redundant disks. Data is backed up as disk snapshots and exports to S3.
    5. We maintain role-based access, with minimisation of admin rights.
    6. We perform security reviews as part of code reviews. We use the GitHub platform, and our codebase is secured accordingly.
    7. On an annual basis, we:
      • review our Information Security Policy
      • perform a security risk assessment that addresses the confidentiality, integrity, and availability of data and information
      • perform an internal information security audit that covers the totality of systems, facilities, and personnel that process or store data and information
  26. What practices do you follow to decide whether online research should be used to present commercially sensitive client data or materials to survey respondents?
    1. We perform a quality check on every survey before it is made available to panelists.
    2. Surveys are checked for whether or not there are any personal, sensitive or prohibited questions being asked. Surveys are also checked for appropriate language, survey logic and expected average duration to complete.
    3. If a survey contains any sensitive material or content, Non-Disclosure Agreements (NDAs) can be exercised with clients and panelists.
  27. Are you certified to any specific quality system? If so, which one(s)?
    1. All our research-related technologies are proprietary: our survey authoring and data collection platform, our panel and sampling technologies, our digital ad exposure collection system, and the infrastructure that binds them together. Hence our quality management systems are specific to these technologies, and are built on the principles of effective data storage, security and management.
  28. Do you conduct online surveys with children and young people? If so, do you adhere to the standards that ESOMAR provides? What other rules or standards, for example COPPA in the United States, do you comply with?
    1. On Device Research does not survey anyone under the age of 16. Upon registering to join our research panel, members are asked for their age details as part of their profile data; in addition, every survey asks each respondent, by default, for their age and year of birth at different points in the survey. Any contradictions between the profile and survey data can be flagged, and any contradiction between age and birth year within a survey is flagged and the entire record removed as part of the data cleaning process for every project.