What is Amalgamated Learning?
Healthcare privacy is stimulating new ways of thinking about medical data in research.
Hey Guys,
I have been following the advent of A.I. in healthcare to some extent. With BigTech companies like Apple, Google, Amazon, Microsoft and others getting deeper into healthcare, the intersection of privacy and machine learning in healthcare is also coming into sharper focus.
This short article will touch on amalgamated learning in a healthcare research setting.
Amalgamated learning is thought to be a potential solution to scale Medical A.I. If data is the new oil, our medical data will quickly become part of how BigTech companies personalize our healthcare of the future.
While AI shows tremendous promise in discovering new patterns buried in mountains of data, things get more complicated in the health field because of the many variables involved.
In reality, some data remains isolated across various silos for technical, ethical and commercial reasons. A promising new AI and machine learning technique called amalgamated learning might help to:
Overcome silos in data
Prevent fraud
Improve industrial equipment (Industrial Metaverse in healthcare)
Enable us to construct digital twins from inconsistent data.
In a world of intense external pressures, healthcare is not spared. Demand for AI continues to increase as patients expect a digital-first experience after the COVID-19 pandemic, and the "great resignation" has left every industry, including healthcare, short-staffed.
Augmenting Privacy in Medical Research
On one end of the spectrum, new computing techniques like homomorphic encryption allow multiple participants to collaborate on new AI models with high trust by computing directly on encrypted data.
At the other end of the spectrum, federated learning techniques allow different participants to update a machine learning model locally without sending sensitive data to others.
Some see "Amalgamated Learning" as the best mid-point between these two ends of the spectrum.
Homomorphic encryption, however, tends to add a lot of computational overhead.
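To make the idea concrete, here is a toy sketch of Paillier, a classic additively homomorphic scheme, using deliberately tiny primes for illustration. Real deployments use roughly 2048-bit keys, which is exactly where the computational overhead comes from; nothing below should be taken as production cryptography.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Tiny illustrative primes -- real systems use ~2048-bit moduli.
p, q = 47, 59
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key component
mu = pow(lam, -1, n)           # modular inverse of lam mod n

def encrypt(m):
    """Encrypt a message 0 <= m < n with fresh randomness r."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # With generator g = n + 1, g**m mod n**2 simplifies to 1 + m*n.
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (x - 1) // n * mu % n

# Key property: multiplying ciphertexts adds the plaintexts, so an
# untrusted aggregator can sum values it cannot read.
total = decrypt(encrypt(12) * encrypt(30) % n2)
print(total)  # 42
```

The expensive part in practice is the modular exponentiation with very large moduli on every encryption and decryption, which is why the article calls the overhead significant.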
Federated learning means that, in the case of medical privacy, only updates to the model are shared with others.
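The federated pattern can be sketched in a few lines. Everything here is illustrative rather than taken from a real framework: each hospital fits a simple model on its private records and shares only the fitted parameters, which a coordinator averages weighted by sample count, in the style of federated averaging.

```python
# Minimal federated-averaging sketch (illustrative names, not a
# real framework): raw patient records never leave a site.

def local_fit(xs, ys):
    """Ordinary least squares for y = w*x + b on one site's data."""
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

def federated_average(site_params, site_sizes):
    """Coordinator: weight each site's parameters by its sample count."""
    total = sum(site_sizes)
    w = sum(p[0] * s for p, s in zip(site_params, site_sizes)) / total
    b = sum(p[1] * s for p, s in zip(site_params, site_sizes)) / total
    return w, b

# Two hospitals with private datasets; only parameters are pooled.
hospital_a = ([1, 2, 3, 4], [2.1, 4.0, 6.2, 7.9])
hospital_b = ([1, 2, 3], [1.9, 4.1, 5.8])
params = [local_fit(*hospital_a), local_fit(*hospital_b)]
w, b = federated_average(params, [4, 3])
```

The coordinator only ever sees two pairs of coefficients, never the underlying measurements, which is the privacy property the paragraph above describes.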
Why Amalgamated Learning?
Amalgamated learning is now being used for large-scale cancer research. Like federated learning, it is much faster than homomorphic encryption and it does not require participants to share data.
Another benefit is that it supports multiple models, so the participants do not have to share the intellectual property baked into them with each other. This could encourage cross-industry medical research by competitors that improves the outcomes for everyone while also protecting commercial interests.
The technique seems to work even when each participant encodes data slightly differently. The key is that the technique takes advantage of differences detected within each local data set.
As a result, everyone could learn from the experience of others, even when their own hospital data collection procedures are different, as long as these procedures are internally consistent. “We think we won’t need to normalize data across parties as much to train a local model.”
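One way to read "internally consistent" in practice is that each site can standardize measurements against its own baseline before training. The sketch below is an assumption about the mechanism, not a published algorithm: each hospital z-scores its own readings locally, so sites with different calibrations or even different units end up on a comparable scale without sharing raw data.

```python
from statistics import mean, stdev

def standardize_locally(values):
    """Z-score a site's measurements against its own baseline,
    so only the site's internal consistency matters."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Two sites measuring the same vital sign on different scales;
# the raw values are not directly comparable.
site_a = [36.5, 36.7, 36.9, 38.4]    # Celsius
site_b = [97.7, 98.1, 98.4, 101.1]   # Fahrenheit
za = standardize_locally(site_a)
zb = standardize_locally(site_b)
# After local standardization, the outlier fever reading stands
# out in the same way at both sites.
```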
One concern is that amalgamated learning makes it harder to tease out bias, or to figure out how a model reached a particular conclusion, compared to traditional approaches. Consequently, researchers are focusing on more explainable AI techniques that make it possible to identify and audit the different factors that can affect results.
Digital Twins in the Future of Healthcare Research
Amalgamated learning will also help to customize digital twins of individuals, even when their local set points for things like temperature or other vital signs are slightly different.
A.I. has been making its way into healthcare more and more in recent years. AI will tackle more tractable problems like workflows before heading into more difficult ones like diagnosis and prognosis. The importance of privacy, and how to share sensitive medical data across different EMR systems, apps and kinds of research, is also a big deal.
Amalgamated Machine Learning for Healthcare
Related health data is sometimes dispersed over multiple data silos, each controlled by a different entity (GP, hospital, lab, health insurer, pharma company, …). While each of these entities can apply their own machine learning on their data, their models could potentially benefit from the data held in other silos.
However, for practical, business, IP or legal reasons, directly sharing the data (such as with federated data approaches) or the models (such as with federated learning) is often difficult.
PAML - Privacy-Preserving Amalgamated Machine Learning
Privacy-preserving amalgamated machine learning (PAML) is an AI technique that lets each participating entity build their own model using only locally available data, whilst indirectly incorporating information from other data silos in a way that doesn't compromise privacy.
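The article does not spell out PAML's mechanism, so the following is purely an assumed sketch, not the actual PAML protocol: a co-distillation-style exchange in which each silo trains its own model and silos share only predictions on a common, non-sensitive reference set, never raw data or model weights.

```python
# Hypothetical PAML-style sketch via co-distillation (an assumed
# mechanism, not the actual PAML protocol): silos exchange only
# predictions on shared reference points.

def silo_model(private_data):
    """Each silo builds its own model from local data.
    Here: a trivial threshold classifier at the local mean."""
    threshold = sum(private_data) / len(private_data)
    return lambda x: 1 if x > threshold else 0

reference_set = [1.0, 3.0, 5.0, 7.0]   # public, non-sensitive probes

silo_a = silo_model([2.0, 4.0, 6.0])   # private to A
silo_b = silo_model([3.0, 5.0, 7.0])   # private to B

# Each silo publishes only its predictions on the reference set.
preds_a = [silo_a(x) for x in reference_set]
preds_b = [silo_b(x) for x in reference_set]

# Consensus soft labels each silo can distill from locally,
# indirectly incorporating the other silo's information.
consensus = [(pa + pb) / 2 for pa, pb in zip(preds_a, preds_b)]
```

Under this reading, neither silo ever sees the other's records or model internals, which matches the property claimed for PAML above.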
PAML is increasingly being used in clinical settings, and its broader applicability across the health ecosystem is being discussed.
The use of digital twins in healthcare is revolutionizing clinical processes and hospital management by enhancing medical care with digital tracking and advancing modelling of the human body.
Amalgamated learning can thus augment our modeling of healthcare via digital twins while maintaining privacy ethics around patient data and healthcare data in medical research.
Healthcare is a fascinating domain for A.I. because it demonstrates the need for new solutions around privacy, optimal ways of sharing data, and the growing use of A.I. in medical research for improved patient-centric care.
If you are interested in premium access to more articles and want to support the channel, you can do so here.
Thanks for reading!