Table 1 Data sources, variables and indicator details

From: Evaluating change in a pressured healthcare system: a cross-sectional study of implementation outcomes using routine data indicators and proxies

| Data source | Variable | Description |
| --- | --- | --- |
| Device-collected data: details for every recorded episode of care using the device | Project ID | Each project was named in the dataset. |
|  | Site ID | Each site (NHS Trust) was named in the dataset. Some sites hosted multiple projects, hence the need for a project identifier as well as a site identifier. |
|  | Device type | Home device (kept in the patient's home, controlled by the patient) or Pro device (held and controlled by the healthcare professional). |
|  | Type of care episode/device usage | Direct online consultation; offline examination by the patient (Home device) sent for offline review; offline examination by the healthcare professional (Pro device) sent for offline review. Used for the costing analysis only. |
|  | Demonstration | Whether the entry was a demonstration during testing/set-up or a genuine care episode. Demonstrations were removed during data cleaning. |
|  | Duration of contact | Length of the care episode, in minutes. Used for the costing analysis only. |
|  | Type of clinician | Healthcare assistant; GP practice nurse; band 7 hospital nurse; GP; specialty registrar; consultant (medic). Used for the costing analysis only. |
|  | Pseudonymised clinician identifier | Clinicians could appear more than once in the dataset; this variable enabled us to count the number of different clinicians using the devices without identifying who they were. |
|  | Examination (heart or heart rate, lung, skin, throat, ear, temperature) | Type of examination performed (not necessarily using the device, as clinicians could perform an examination with their own equipment and then enter it into the record). |
|  | Examination ordering | Patients could have multiple examinations per care episode, so separate variables were created for the 1st, 2nd, 3rd, … nth examinations, allowing each examination to be captured separately, which was easier for analysis. |
|  | Whether the examination was performed using the device | For each examination above, we created a separate variable identifying whether it was done using the device, indicated in the dataset by a variable titled 'counter', dummy coded as '1' for yes (using the device) and '2' for no (not using the device); for example, clinicians could use their own equipment and enter the data into the patient's record. |
|  | Real-time pop-up questions built into the device | These covered: (1) ratings of audio and visual quality; (2) an assessment of whether the contact avoided a face-to-face appointment; (3) whether the examination had to be repeated due to poor quality. #1 was answered on a 5-point Likert scale, #2 and #3 via a yes/no response. |
| Qualitative interviews (not covered in this paper) | Perceptions of acceptability and appropriateness |  |
|  | Perceptions of sustainability |  |
|  | Perceptions of equality/inequalities impact |  |
| Administrative data | Project details | Care setting; specified device usage (e.g. 'to remotely monitor paediatric patients following discharge'). |
|  | Number of healthcare professionals trained | Collected at project level. |
|  | Number of patients trained to use the Home device (where applicable) | For patients, training was only necessary for those needing to use the Home device. |
|  | Sites' and projects' progress: signed up but not live (failed to launch); live; signed up to pilot extension | In the analysis, sites that were signed up but not live were treated as having expressed an interest but failed to launch, given that the data download took place after the pilot ended. |
|  | Device licensing (numbers and costs via procurement prices) and device uses |  |
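
The episode-level structure above, with ordered examination variables (1st, 2nd, 3rd, …) each paired with a 'counter' flag dummy coded '1' (device used) / '2' (device not used), implies a wide-to-long reshape plus a recoding step before analysis. The sketch below shows one way such cleaning could be done in pandas; the column names (`demonstration`, `exam_1`/`counter_1`, etc.) are hypothetical stand-ins, as the paper does not report the actual field names or the software used.

```python
import pandas as pd

# Minimal sketch of the cleaning steps described in Table 1; all column
# names are hypothetical stand-ins, not the study's actual field names.
episodes = pd.DataFrame({
    "project_id": ["P1", "P1", "P2"],
    "site_id": ["SiteA", "SiteA", "SiteB"],
    "demonstration": [0, 1, 0],      # 1 = testing/set-up entry, not real care
    "exam_1": ["heart", "lung", "ear"],
    "counter_1": [1, 1, 2],          # 1 = device used, 2 = device not used
    "exam_2": ["skin", None, None],
    "counter_2": [2, None, None],
})

# Demonstrations were removed during data cleaning.
episodes = episodes[episodes["demonstration"] == 0].copy()

# Reshape the ordered examination variables (wide, one row per episode)
# into long format (one row per examination), keeping each examination
# paired with its device-use flag.
long = (
    pd.wide_to_long(
        episodes.rename_axis("episode").reset_index(),
        stubnames=["exam", "counter"],
        i="episode",
        j="exam_order",
        sep="_",
    )
    .dropna(subset=["exam"])
    .reset_index()
)

# Recode the 1/2 dummy into an explicit boolean for analysis.
long["device_used"] = long["counter"].eq(1)

print(long[["project_id", "exam_order", "exam", "device_used"]])
```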