
How Athenahealth moved from traditional AI to genAI and ChatGPT

Athenahealth provides software and services for medical groups and health systems around the country, and finding efficiencies through the use of artificial intelligence (AI) has long been a part of its DNA, so to speak.

For example, the healthcare technology provider has already been using machine learning to sift through tens of millions of faxes it receives electronically each year so they can be attached to the proper patient record.

But the company’s use of AI changed dramatically when OpenAI announced ChatGPT a little more than a year ago. Athenahealth recognized the generative AI (genAI) platform’s promise of creating new efficiencies, both for clients and for its internal processes.

Earlier this month, Athenahealth unveiled a range of new genAI-driven capabilities across its product line, including its cloud-based suite of electronic health records (EHR), revenue cycle management, and patient engagement tools.

One newly deployed genAI capability can summarize the labels on patient healthcare documents intelligently so providers can more easily find the information most relevant at the point of care. Another feature will identify missing or incorrect information before a prior authorization for care is submitted to maximize the chance the authorization will be approved.

Heather Lane, Athenahealth’s senior architect of data science and platform engineering, has technical oversight of the company’s AI strategy and oversaw not only genAI product deployments but the creation of a team that continues to explore new ways of using the tech.

Heather Lane, Athenahealth

Lane spoke with Computerworld about how genAI tools such as ChatGPT, accessed through Microsoft’s Azure platform, have been deployed and what the organization hopes to gain in the coming years. The following are excerpts from that interview:

Is generative AI as promising as many claim? “I think the discussion in the industry is between the people who believe it’s an ‘iPhone moment’ and the people who believe it’s hype. I think it remains to be seen who’s right. I’m betting on an iPhone moment.”

How did you create an AI team to address the rollout of the technology and who did that consist of? “We have a data science team and we’re gradually, broadly calling it the AI team. We’re not the only ones at Athena who do AI, but we are the majority team that does machine learning and artificial intelligence. The team has been around about a year and a half now.”

What sort of things did your team do to learn AI skills? Did you educate employees on AI internally or hire talent to address an AI skills shortage? “We have mostly taught. The effort to level up in generative AI goes well beyond just the AI team. We took on a significant internal education activity this year. We called it a codefest, the next step up from a hackathon. And we framed it around … three use cases … that are just going to alpha now. The objective was to get those three cases to early deployment and educate a bunch of engineers on how this technology works, and not just engineers but our product people, our [user experience] people, and so on. They all have to have some understanding of this technology.

“Finally, we needed to build some institutional understanding of this technology: where the costs and benefits are, where the strengths and weaknesses are, and also the legal, regulatory, safety, and security issues that need to be considered.

“We had these multiple goals and went into it pretty heavily organizationally. Along with the three use cases we were targeting for alpha deployment, we also had 10 other projects that were exploratory level and about 40 that we reviewed at an internal board level, but didn’t launch into exploring.


“We had about 300 developers going through a generative AI boot camp. We logged over 2,200 hours of generative AI training time internally. We had externally run knowledge sessions where we invited speakers from organizations like Microsoft and OpenAI. We logged over 700 attendees. We logged 167 employees getting hands-on with internal, secure, data-compliant versions of ChatGPT. We produced over 100 pages of documentation and on the order of 10,000 lines of code. So there was quite a bit of work and a serious organizational commitment going into learning about this.”

You created 10,000 lines of code. For what purpose? “There was a certain amount of infrastructure investment we had to do under the hood to support all of them. All of those needed code to enable a generative AI capability. OpenAI will rent you an API — or in this case, we rented through Microsoft — to get data privacy. But it’s OpenAI’s machinery rented through Microsoft… That’s just an API; that’s not a feature. You need layers of software to go from the API to the feature, so that’s where those thousands of lines of code came in.”
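As a rough illustration of the kind of “layers of software” Lane describes between a rented model API and an actual product feature, the sketch below wraps an Azure-hosted OpenAI chat-completion call with input validation, a task-specific prompt, and output cleanup. The deployment name, endpoint, and prompt are assumptions for illustration, not Athenahealth’s actual code.

```python
# Hypothetical sketch of a feature layer over a rented model API:
# validation + prompt construction + post-processing around one API call.
# Deployment name, endpoint variable, and prompt wording are assumptions.
import os
from openai import AzureOpenAI  # pip install openai

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
)

def summarize_document(text: str, max_words: int = 120) -> str:
    """Feature-level wrapper: validate input, prompt the model, clean the output."""
    if not text.strip():
        raise ValueError("empty document")
    response = client.chat.completions.create(
        model="gpt-4o-deployment",  # hypothetical Azure deployment name
        messages=[
            {"role": "system",
             "content": f"Summarize the document in at most {max_words} words."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content.strip()
```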

How has AI assisted you internally and how has it assisted your clients? “The two end up coupling together, because we have several workflows we do on behalf of our providers that are part of our value prop to them. But in turn, if we can automate parts of them, that turns into internal savings and efficiencies.

“For example, the fax processing. The healthcare system still runs dramatically on faxes. It’s kind of frightening but true. Athenahealth receives in the vicinity of 160 million to 170 million faxes on behalf of our clinical providers. The volume keeps increasing, and those are just the inbound ones. Somebody has to deal with those. They [the electronic faxes] have to get attached to patient charts. We have to know who the right patient is to go to. What is the paperwork about? All that has to be done before that information becomes even moderately useful to physicians.

“Now, if Athena was not doing that work, physicians would be doing it. So, Athena is doing that work on behalf of physicians. Historically, we did that through outsourcing and human effort, but at the scale of documentation we’re talking about, even if you outsource it, that becomes a sizeable expense.

“So, beginning seven-and-a-half years ago, our data science team began building out a natural language processing system that could do a lot of the information extraction from those fax documents and do a lot of that automated filing without human intervention. We use machine learning to build natural language processing capabilities that can read the faxes and extract the information we need from them.”
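To make the pre-genAI approach concrete, a minimal sketch of this kind of document routing is a supervised text classifier that predicts a document type from OCR’d fax text. The labels and training examples below are invented for illustration; Athenahealth’s production system is far more involved.

```python
# Minimal sketch of pre-genAI document routing: a supervised classifier that
# predicts a document type from fax text. All data here is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "Lab results: hemoglobin A1c 7.2% for patient ...",
    "Referral request to cardiology for evaluation of ...",
    "Prior authorization approval for MRI lumbar spine ...",
]
train_labels = ["lab_result", "referral", "prior_auth"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)

print(model.predict(["Attached are the patient's lab results for CBC ..."]))
# -> ['lab_result'] (given enough real training data)
```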

So, you were doing this well before ChatGPT — using natural language processing? “AI has been around a while. You can trace its origins to at least 1950 with the paper by Turing. It’s been a pretty rich field for at least the last quarter of the 20th century, and it’s only grown in prominence since then with the advent of big data and companies like Google, Netflix, and Amazon realizing the considerable value of large data coupled to machine-learning capabilities.

“So, ChatGPT dropped a little over a year ago now. And it made big reverberations in the media. It represented a step forward in the AI capability space and can do things we couldn’t do before. That said, there’s plenty of AI technology that’s been around for decades, has been continually getting better over that time, and is still incredibly valuable even though it isn’t ChatGPT.”

How do we ensure that [the AI] extracted the information it should have, and that it didn’t extract some irrelevant information or hallucinate something altogether? These are well-known dangers of large language models, and so you have to test for them.

What changed when ChatGPT and generative AI came along? “The big change has been what are we going to do with those capabilities, and how can we use them to improve our customers’ lives, how can we use them to improve our capabilities and workflows? It’s been a lot of focused work trying to figure out those things and trying to bring up demonstrations, capabilities, and alpha-level product features we can then put in the marketplace… We can then see whether they’re useful to our users and their staff.”

What is genAI’s greatest potential? One capability I’ve heard about from others is its ability to augment software development. Are you seeing that? “I think we’re still discovering that. We’ve looked at it as a software development assistance tool, and we’re quite excited by its capabilities in that space — especially when offered through some well-created user interfaces, such as Copilot, Codium, and a few others playing in that space. They’re essentially marketers of the underlying AI capability, but their value is in integration with powerful software development toolsets, like VS Code and so on.

“So, yes, there seems to be value there. That’s just one example of the space of using generative AI to assist in creating content — draft content for human review and revision, and give people a starting place for content.

“The second category where I think it’s very useful in our world is summarization. One challenge physicians, nurses, and their staff face is an overwhelming tidal wave of information. I mentioned the hundreds of millions of fax documents a year, but that’s just a fraction of what comes through electronically. Then there’s the patient data itself: the individual patient charts. Every time we go to see a primary care physician, it produces additional records about us.

“Our primary care physicians may know that information well, but as soon as you go see a specialist they have to review 20 years of case material. They can’t spend two hours reviewing my case history. They need to get it in 10 minutes or something like that. So being able to digest 20 years of material down to 10 minutes, that’s a capability that large language models do seem to offer and we’re very excited about it.”
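One common way to condense years of records with an LLM is a map-reduce style pass: split the record into chunks, summarize each chunk, then summarize the summaries. The sketch below is conceptual only; the `llm` callable stands in for any chat-completion wrapper (such as the Azure client sketched earlier), and the chunking and prompts are assumptions, not Athenahealth’s pipeline.

```python
# Conceptual sketch of long-record summarization: chunk, summarize each chunk,
# then summarize the partial summaries. Prompts and chunk size are assumptions.
from typing import Callable, List

def chunk(text: str, max_chars: int = 8000) -> List[str]:
    """Naive fixed-size chunking; a real system would split on note boundaries."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def summarize_history(record_text: str, llm: Callable[[str], str]) -> str:
    partial = [
        llm(f"Summarize the clinically relevant facts in this chart excerpt:\n\n{part}")
        for part in chunk(record_text)
    ]
    return llm(
        "Combine these partial summaries into one concise overview a specialist "
        "could read in a few minutes:\n\n" + "\n\n".join(partial)
    )
```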

Is there anything live that you’ve deployed that’s aimed at addressing the deluge of electronic data payors and providers are dealing with? “We have some things in the pilot. I can thumbnail at least three things that are going out in pilot to a limited number of customers now. One is the summarization tool. Specifically, summarizing patient records that we exchange from other EHRs [electronic health records]. We import your patient record from some other EHR to make it available to one of the physicians in our network; how do we digest it so that the provider can read it easily?

“Another capability is generating novel content…specifically when a patient sends a question or request to a provider’s office — usually through a patient portal. For example, a patient may ask if they can have an appointment next week. It turns out that the responses that providers create take up an enormous amount of time. If we can draft those ahead of time and say, here’s some text that’s a good starting point for you, that can shave some time off in the same way drafting computer code can shave time off for developers.

“There’s also a question involving prior authorizations. We help identify when there’s missing information in prior authorizations so that providers can fix it at the point when they’re creating the request, rather than having it recycled through the system just to get rejected because of missing information. That introduces more work for the provider and a delay for the patient. We can use the genAI systems to catch when there’s missing information right at the point of creation and get it corrected then, rather than cycling it through the system.”

How does the genAI identify missing information? “It uses the few-shot learning capability of large language models. You can show LLMs examples and they can emulate them. In this case, what we do…is they know what the prior authorization being requested is. Someone may ask for authorization to do cataract surgery, for example. So we have many records of those surgeries. We pull those records, put them in front of an [LLM], and say this is what it is supposed to look like and this is what the prior authorization that the physician produced looks like. Tell us where it’s off or where there are gaps. It will come back with, ‘Normally, people provide this information that you don’t have in this case.’”
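A few-shot prompt of the kind Lane describes might be assembled roughly as below. The example records, field names, and wording are invented for illustration; the real system draws its examples from historical authorizations for the same procedure.

```python
# Hypothetical few-shot prompt for spotting missing prior-authorization fields.
# Example records and field names are invented; only the technique is shown.
def build_gap_check_prompt(examples: list[str], draft_auth: str) -> str:
    shots = "\n\n".join(
        f"Example of a complete prior authorization:\n{ex}" for ex in examples
    )
    return (
        "You review prior authorization requests for cataract surgery.\n\n"
        f"{shots}\n\n"
        "Here is a draft request:\n"
        f"{draft_auth}\n\n"
        "List any information that complete requests normally include "
        "but this draft is missing."
    )

prompt = build_gap_check_prompt(
    examples=["Diagnosis: H25.11 ... Visual acuity: 20/70 ... Prior treatment: ..."],
    draft_auth="Diagnosis: H25.11 ... (no visual acuity documented)",
)
# The prompt would then be sent through a chat-completion call like the one above.
```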

Were you concerned that Athenahealth’s sensitive healthcare data would be used to train other LLMs outside of your organization, thereby making it public? “That is a concern when you’re dealing with healthcare data — that is the highest tier of sensitive data. So, we have to be very careful with how we protect our data and how we use our data. We had infosec involved and legal and procurement involved — all these people were involved in evaluating the contracts we had with Microsoft and OpenAI. What are the data pathways? What [are] the data security guardrails in place? We invested quite a bit of work to ensure the data was going to be secure and not used to train someone else’s model that would then be released in the wild.

“Along with infosec work, there was a lot of contractual work that was done to ensure that we’re consuming OpenAI’s systems and not feeding OpenAI’s systems.”
