Table 2 Main categories based on the constructs in the Innovation domain in the Consolidated Framework for Implementation Research, with subcategories and illustrative quotes

From: Innovation in healthcare: leadership perceptions about the innovation characteristics of artificial intelligence—a qualitative interview study with healthcare leaders in Sweden

Main categories and subcategories

Illustrative quotes

Innovation relative advantage

• Decision support for managers/leaders

• Decision support for healthcare professionals

• Better health outcomes for patients

• Early detection of disease

• Social impetus

“…understand our activities in a better way, so that you can make wise decisions… In part you can understand the medical development… and you can get a better sense of financial connections and relationships… when you can build together the activities we conduct… a bit tighter than what we have been able to do before” (2)

“I want to know if it’s a clot somewhere or if it’s tied to some form of cancer, those are the answers I want as a treating physician… I think that an AI solution might become a form of support for decision-making” (1)

“From a patient perspective… I think mostly… perhaps things like quality, well for the patient, both that it improves but also that we get faster, quicker assessments” (9)

“That’s what I envision with AI. In the clinical work, AI has an amazing ability to assemble a large amount of information and see patterns in it” (10)

“We are currently together with the university and region, tying up the big sharks to get them to join us and finance and develop things, but also to get more companies from (our region, author’s comment). This is an industry for the future, … this is it” (8)

Innovation source

• Development of AI internally through local strategic collaborations

• Limited quality and safety awareness in smaller tech companies

• Difficulties with external networking around AI

“We have all of the data in place, we have this system development department that can build things, we have a lot of knowledge in this house and a brave region and we are looking ahead and … we have the opportunity to prioritise things, …so I think it’s fully possible to build at the present time” (5)

“Certain parts of the Medical Technology industry aren't used to critical thinking and scientific models, which is a requirement in healthcare… It’s the work that’s the most important part and the tricky part. I guess that’s the most time-consuming part too. A lot of Medical Technology companies also feel extremely frustrated about this. Because a lot of them feel that they have finished solutions. “You just have to get started, look you can save money or save lives” or whatever. Yeah great, and then I go over it, because I've made my own little check-list of things to investigate. Have you thought about this and that? Okay, what did you do when you were validating?” (2)

“Maybe we shouldn’t talk about dangers or difficulties, but naturally we’re facing a challenge with current legislation being what it is. We’re noticing that even at this early stage. We’re being extremely cautious when it comes to selecting data. We have a great deal of respect for anything that’s individual. We can’t pick out just anything, in short, that’s how it is. That’s how it is and nor can we just pick out anything when it comes to private healthcare providers and compare and so on. We have a great deal of legislation to adapt to as well and I think legislators need to review that as well and adapt” (6)

“All regions are currently doing a great job just to achieve structure, that you can view things in a sensible way. It’s going to be a lot simpler in the future too, because you will maintain standards in an entirely different way. The result of this is that what I do, analysis and other things, becomes much easier. And then we have the standards that one will hopefully stick to” (10)

Innovation evidence-base

• Uncertainty around opaque evidence-base

• New understanding of evidence

• Risks of biases feeding into the technology

“You can currently go down to the library and start digging through research reports. It takes a few hours, weeks, but you still work your own way towards that understanding, so to speak. And then with that amount… we’re still not going to find all of the research reports in this area of course, but I still feel that I can stand behind this. I’ve studied this, I trust this, it’s my assessment. I will also be taking responsibility if it doesn’t turn out that great” (7)

“Then it’s also a bit unclear to me… where you can find science and proven experience… how is AI going to affect knowledge management? That parallel, I don’t quite understand it yet, because we are after all working on an evidence basis so it’s not like just ‘well I think we should do this because it seems…’ I mean, we don’t normally work like that in Swedish healthcare, when we know that we can produce evidence, and I’m still having a bit of a hard time seeing how those things will affect each other… we’re very used to having a lot of things to back up our decision and I think we can stay there.” (9)

“There are so many things that can go wrong, if you look at AI specifically. Most things that are digital are thereby copyable. So along with the implementation of it, both advantages and disadvantages or risks are amplified. So if you have a built-in error, the consequence becomes massive, since it is used at such a high frequency. Those things are incredibly important to build a vaccination against, in the approval process and things like that.” (2)

Innovation adaptability

• AI will fit more naturally in some clinical contexts than in others

“I think that the conditions are very different… a general answer becomes way too hard… I think there’s a very good chance in diagnostics to achieve relatively fast implementation of AI, for example to help examine X-ray images, CTs or MRIs, where you do a fluoroscopy of soft parts in the body, it’s not as sensitive in terms of an individual’s privacy since you don’t know who is who. A lung looks like a lung and you can’t know who the person is by looking at the lung” (1)

Innovation trialability

• Uncertainty about where to test AI in the organisation

“I think it is going to need two organisations, just like we have now, because they vary so much in nature. This data storage progress… needs to be very quality assured, it has to be very secure, it’ll be running every night, we don’t want disturbances in the system. This green section is a bit more exploratory… they’re a little bit different because… what’s important is to quickly be able to switch sources, switch ideas, switch development method… it’s a lot more exploratory” (5)

Innovation design

• Need for external expertise to design the AI applications

• Healthcare professionals currently have little knowledge about AI

“There needs to be some form of product from it in order for us to be able to use it. I think we have a challenge there, I would say, to create a product from this it probably has to be a company… it won’t work at all to put a click view in the hands of a clinician” (7)

“One doctor out of all of the doctors I have met… wanted the routines printed out and placed on his desk. One single doctor out of all the ones I met with. Everyone else wanted it digitally and to be able to access it quickly and easily… it can’t be too complicated and it can’t be time consuming, because then it will lead to nothing” (4)

“You kind of get the sense of the “beautiful new world”, something along the lines of that, but in reality it’s actually just decent, advanced statistics and probability theory” (3)

Innovation complexity

• Uncertainty about what AI is and is not

• Lack of guidance for decisions about AI deployment in the organisation

• Expectations of change resistance

• Expectations of AI-scepticism and lack of trust

“In principle you have to press a button and you generate an answer and they can’t malfunction, but somewhere you still need to have an explanatory background involving the complexity. You need to be very honest about that: these are the parameters that primarily form the basis of these AI decisions. If you haven’t included all of the parts you can’t mention them. Certain parts may not even be possible to add” (1)

“You also understand what it entails to have an organised insertion, the procurement department, system management, knowledge department, digitalisation department… and then of course economy and communication, you know they are really complex systems… sometimes I’m a bit concerned about people who really don’t get it. These are really important discussions that need to be held with administration management so that you have a consensus. If you’re going to invest in areas where you know that you’re making very subjective assessments or where you have really high flows?… I mean we normally have really high flows in some cases, or do you invest where you have very small flows? There is an infinite number of perspectives, I just think that when it comes to issues like this it’s very important to think carefully so that you can motivate your reasoning” (9)

“It’s unavoidable… that the everyday routine for our employees will change… One of the even bigger challenges in addition to us needing to readjust is that we’re going to stop doing things. You’re going to stop doing things because they’re not creating value, and instead you’re going to do this. Here it’s not about the resistance to these services or these technical solutions, it’s that we want to continue with the old” (14)

“If the business hasn’t said that there is a need and you say that this will improve things, there’s not a lot of motivation and benefit there I think” (1)

“I think that it’s that you trust so much in yourself in your profession, occupational role, that I think you have a hard time allowing some other type of machine or data or something else to make that assessment somehow, you want to… you don’t quite trust it” (12)

“If you’re going to build trust you need to know that what you’re working with actually provides you with that” (4)

“The trust issue is important and most especially that you would be giving, make errors. What we talked about, that you get locked too soon. You miss something, miss something serious. At least when you’re working with health and healthcare that’s the most serious bit, I would say. If it happens we lose trust right away. That’s enough, and the issue of responsibility is a really difficult one. So I go on what Doctor AI recommended, and where is the burden of proof? Maybe I took a quick peek at these suggestions and felt that it sounded pretty good and so on. I chose to use them, but can I blame Doctor AI?” (1)

Innovation cost

• No state-allocated resources for implementation and roll-out of AI

• Uncertainty about the level of costs involved in the future larger-scale implementation of AI

“You need to be able to allocate resources, time. You also need to finance it. We often forget that and think that we can manage it with the existing budget, but no you can’t. You need to allocate resources and money in order to succeed” (4)

“Here it’s about prioritising the things that will benefit us the most, in some way. And then of course at the same time we need to have a high degree of development. We need to have some wise decision-makers here that can take part and still make some oriented decisions on what you need. Because it’s not cheap either, I can’t. It’s going to require a fair amount of time and quite a lot of development and things like that” (6)

“AI will come and will most likely be a degree of priority in politics and for higher officials. The consequence will probably be that we won’t do it all. It’s quite likely that there will actually be… consequences for other things that also need to be done and followed up on… somewhere you’re going to need to make the cut too. They will become tough priority decisions to make, but somewhere we still have a line structure, we have politics, higher officials, where [the decisions] need to be made, and they become a matter of priority, where the resources will be spent. I’m doubtful that we’re going to get more resources” (7)