Watch back – #EvaluateDigiHealth: Doing evidence generation within a digital health company

What are the challenges and solutions?

Digital health companies face many challenges when it comes to generating relevant evidence. This webinar discusses these challenges as well as the potential solutions, including whether companies should be introducing a dedicated evidence generation role.

Find out more about the series and sign up for upcoming webinars here.

Chairs:

Dr Paulina Bondaronek, Principal Behavioural Insights Advisor, Department of Health and Social Care
Professor Henry Potts, Professor of Health Informatics, University College London

Contributors:

Dr Claire McCallum, Research and Development at Zinc
Dr Rosie Webster, Science Lead – Venture Builder Programme at Zinc

We’ve summarised the key themes from the discussion:

Evaluation: early-stage vs later-stage

Rosie kicked off the discussion by highlighting the importance of thinking about evaluation as soon as a company or innovator starts to design a product, and then embedding evaluation throughout development. User testing should be a continual process, and at an early stage it should be both qualitative and quantitative. Generally, Rosie has found that early-stage testing is about identifying the risk of harm and building the processes for evaluation later down the line. She encourages innovators to run pilots as soon as possible and to test assumptions in real-world settings.

When it comes to later-stage companies, Claire explained that this is when they should start to look at health-related outcomes. This will still be about optimising and experimenting, but the product is more fully formed at this stage, so it becomes easier to evaluate, and the company will have built up a bigger user base to test with.

Challenging the four myths of evaluation in digital health

The panel discussed the following four myths:

  1. Evaluation is always resource-intensive
  2. Evaluation is impossible at an early stage of the company’s development
  3. The outcomes of evaluation can be risky for the company
  4. The purpose of evaluation is just to meet regulatory standards to get your product into the NHS

Claire’s advice was to take a risk-based approach: map your key assumptions and identify which pose the biggest risks to the business. Agile forms of experimentation are important; this is less about running a randomised controlled trial (RCT) and more about doing small experiments instead. In a start-up, an ever-changing product can make evaluation, and the resources it requires, trickier, which is why constant small tests are great for pivoting.
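
To make the risk-based approach concrete, here is a minimal sketch, ours rather than the panel's, of how a team might score its assumptions by business impact and uncertainty, then test the riskiest ones first with small experiments. The assumptions and scores below are hypothetical examples.

```python
# Illustrative sketch (not from the webinar): ranking product assumptions
# by risk so the riskiest are tested first with small, cheap experiments.
# All assumptions and scores here are hypothetical.

from dataclasses import dataclass


@dataclass
class Assumption:
    statement: str
    impact: int       # 1-5: how badly the business is hurt if this is wrong
    uncertainty: int  # 1-5: how little evidence we currently have

    @property
    def risk(self) -> int:
        return self.impact * self.uncertainty


assumptions = [
    Assumption("Users will open the app at least 3x per week", impact=5, uncertainty=4),
    Assumption("Clinicians will recommend the app to patients", impact=4, uncertainty=5),
    Assumption("Push notifications improve adherence", impact=3, uncertainty=2),
]

# Test the highest-risk assumptions first, using the cheapest experiment
# that could falsify them (interview, landing page, small pilot).
for a in sorted(assumptions, key=lambda a: a.risk, reverse=True):
    print(f"risk={a.risk:2d}  {a.statement}")
```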

Rosie felt these are all surmountable challenges, but that an important decision companies must make from the start is whether they want to be an impact company or whether they want to drive engagement to make money. If the former, the company needs to be evaluation-led, measuring outcomes rather than outputs (e.g. number of users).

Impact of evaluation on users

Claire also raised an additional challenge: the burden that continual evaluation places on users. It is crucial to reduce this, which may mean that instead of running a 40-question survey, a company cuts the number of questions, prioritises the most important measures and sets clear expectations with users.

An ecological momentary assessment (EMA) is one way to achieve this: it consists of short questions asked in context, so the user doesn’t have to fill out a long questionnaire and the questions can be made engaging. Passive data collection is also a good option (e.g. collecting behavioural or sensor-based outcomes), as it further reduces the burden on users. A rough sketch of this pattern follows below.
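
As an illustration only (this sketch and its names are ours, not the panel's), an EMA-style flow might ask a single short question after a relevant in-app event, sampling so that most events trigger no prompt at all, while passive events are logged with no user effort.

```python
# Illustrative sketch (not from the webinar): a single-item EMA prompt
# delivered in context, plus passive logging, instead of one long
# end-of-study questionnaire. The trigger event and items are hypothetical.

import datetime
import random
from typing import Optional

EMA_ITEMS = [
    "Right now, how stressed do you feel? (0 = not at all, 10 = extremely)",
    "Did you complete today's exercise? (y/n)",
]


def maybe_prompt(event: str, sample_rate: float = 0.3) -> Optional[dict]:
    """After a relevant in-app event, occasionally ask ONE short question.

    Sampling keeps the burden low: most events trigger no prompt at all.
    """
    if random.random() > sample_rate:
        return None
    question = random.choice(EMA_ITEMS)
    answer = input(question + " ")  # in a real app: an in-app dialog
    return {
        "timestamp": datetime.datetime.now().isoformat(),
        "trigger_event": event,  # the context the answer was given in
        "question": question,
        "answer": answer,
    }


def log_passive(event: str) -> dict:
    """Passive data needs no user effort at all (e.g. a session opening)."""
    return {"timestamp": datetime.datetime.now().isoformat(), "event": event}


records = [log_passive("session_opened")]
response = maybe_prompt("session_opened")
if response:
    records.append(response)
```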

Embedding evaluation

A theory of change is vital for embedding evaluation. Claire encouraged companies to be practical and to make a diagram or map for evaluation: embed evaluation into everything you do, measure, and continuously modify. Think about measures for both short-term and longer-term outcomes.

Rosie built on this by suggesting companies think about the minimum confidence they need in their innovation, and then decide on an evaluation approach accordingly. She also feels it is important to make time for UX research, as it saves time overall. Think about processes for experimentation, such as sprints and A/B testing, and use the Strategyzer Test Card, a tool which can help structure thinking.
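
For the A/B testing piece, here is a minimal sketch of the underlying comparison, a two-proportion z-test on made-up completion counts, using only Python's standard library. The variants, counts and metric are hypothetical.

```python
# Illustrative sketch (not from the webinar): a minimal A/B test comparing
# a programme-completion rate between two app variants via a two-proportion
# z-test. All counts below are made up for illustration.

from math import sqrt
from statistics import NormalDist

# Variant A: 120 of 400 users completed; Variant B: 150 of 400.
completions_a, n_a = 120, 400
completions_b, n_b = 150, 400

p_a, p_b = completions_a / n_a, completions_b / n_b
p_pool = (completions_a + completions_b) / (n_a + n_b)

# Standard error under the null hypothesis that both rates are equal.
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p_value:.3f}")
```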

It was also agreed that hiring people who understand robust testing is an excellent way to ensure effective evaluation. Additionally, it may be useful for companies to find an academic partner for research.

Rosie felt that companies often see formative and summative assessments as separate things, as if they are either doing development or evaluation, but the two go hand in hand. Both panellists agreed that it is less about completing one phase and then moving on to the other, and more about asking what should be tested right now. Evaluation can sometimes feel messy.

The ethical side of testing

In this form of evaluation, as opposed to academic research, it is less about having to go through red tape and more about behaving ethically. Claire flagged that work with NHS patients will need to go through NHS ethics approval, but otherwise companies should simply build an ethical foundation into their evaluations, thinking carefully about ethical principles. At Zinc they encourage innovators to run an informed consent process, making the risks clear to patients.

There is an HRA decision tool which companies can use to check whether their evaluation is classed as research and therefore needs ethics approval.

Other recommended tools for innovators included: