Healthcare Data Applications Use Cases
Dr. Evangelo Damigos; PhD | Head of Digital Futures Research Desk
- Competitive Differentiation
- Emerging Technologies
- Digital Transformation
Publication | Update: Sep 2020
Healthcare data are derived from numerous sources, including electronic health records (EHRs), medical imaging, genomic sequencing, payor records, pharmaceutical research, wearables, and medical devices. This variety makes the data challenging to process and hard for industry leaders to harness for its significant promise to transform the industry.
According to the article “Healthcare Big Data and the Promise of Value-Based Care” in the journal NEJM Catalyst Innovations in Care Delivery, despite these challenges, several new technological improvements are allowing healthcare big data to be converted into useful, actionable information. By leveraging appropriate software tools, big data is informing the movement toward value-based healthcare and is opening the door to remarkable advancements, even while reducing costs. With the wealth of information that healthcare data analytics provides, caregivers and administrators can now make better medical and financial decisions while still delivering an ever-increasing quality of patient care.
Early healthcare data applications include analytics and population health, as well as applications around patient and supply chain cost.
Population health solutions entail quality/cost monitoring and management tools and their supporting functions, including data cleanup, care coordination, patient communication/engagement, and patient education. Population health solutions can also include rudimentary predictive analytics tools, such as tracking a patient’s movement along the risk curve, but these tools remain early-stage compared with enterprise-facing big-data solutions.
Apps for mobile devices, such as Aetna’s Triage, advise patients on their medical condition using aggregated data and can recommend patients seek medical care based on input to the app.
· In yet another of its healthcare data initiatives, Apple has teamed up with researchers at Stanford to determine whether the Apple Watch’s heart sensor can be used to detect atrial fibrillation, a condition that causes the death of approximately 130,000 Americans each year. If the device proves successful at spotting the condition, Apple can notify wearers that they need to seek medical attention.
· Propeller Health uses a Bluetooth-enabled sensor that attaches to inhalers and spirometers for people with asthma or COPD. The company tracks the environmental conditions at sensor locations and sends reports to patients’ phones, so they can better understand the causes of their symptoms and take measures to prevent attacks. The company also sends reminders about when to take medications. With 34 peer-reviewed articles to date, Propeller reports patients are experiencing 79 percent fewer asthma attacks and are enjoying 50 percent more symptom-free days.
· Reducing prescription errors improves outcomes and saves lives. According to the Network for Excellence in Health Innovation, prescription errors cost some $21 billion per year, affecting more than 7 million U.S. patients and leading to 7,000 deaths. Israeli startup MedAware is partnering with healthcare organizations to deploy its decision support tool, which uses big data to spot prescription errors before they occur.
According to Stephanie Demko, Citi’s U.S. Healthcare Technology Analyst, specialized analytics applications, particularly around patient and supply chain cost, have been another key use of the new healthcare data stores. Unlike today’s predictive enterprise big-data analytics solutions, these applications tend to be more backward-looking, helping healthcare providers monitor and manage trends.
Opportunities from this process acceleration include more robust disease detection tools, predictive diagnosis capabilities, and decision support tools.
Current Use Cases
Below are a few areas where big data is destined to transform healthcare.
Precision medicine, as envisioned by the National Institutes of Health, seeks to enroll one million people to volunteer their health information in the All of Us research program. That program is part of the NIH Precision Medicine Initiative. According to the NIH, the initiative intends to “understand how a person’s genetics, environment, and lifestyle can help determine the best approach to prevent or treat disease. The long-term goals of the Precision Medicine Initiative focus on bringing precision medicine to all areas of health and healthcare on a large scale.”
Certain functions in healthcare, such as radiology, are considered ‘good’ once practitioners have seen a certain volume of cases. However, a human can never see as many cases as an electronic database, making this an ideal application for big data/AI. Given a large enough store of digitized x-rays, interpreting an x-ray becomes a search problem, with a big-data solution functioning as a search engine for the diagnosis.
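As a minimal sketch of that search-engine framing, the snippet below runs nearest-neighbour retrieval over a library of case feature vectors. The embeddings, labels, and library size are all hypothetical placeholders, not a description of any deployed system:

```python
import numpy as np

# Hypothetical illustration: with a large library of digitized, labeled
# x-ray feature vectors, diagnosis can be framed as retrieval -- find the
# most similar prior cases and surface their labels.
rng = np.random.default_rng(0)
library = rng.normal(size=(1000, 128))   # 1,000 prior cases as 128-d embeddings (made up)
labels = rng.choice(["normal", "pneumonia", "fracture"], size=1000)

def nearest_cases(query, k=5):
    """Return labels of the k prior cases most similar to the query (cosine similarity)."""
    sims = library @ query / (np.linalg.norm(library, axis=1) * np.linalg.norm(query))
    top = np.argsort(sims)[::-1][:k]     # indices of the k highest similarities
    return labels[top]

print(nearest_cases(rng.normal(size=128)))
```

In a real system the embeddings would come from a trained imaging model and the retrieved labels would support, not replace, a clinician’s read.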
Predictive Risk from Retinal Imaging
The retina offers a snapshot of a patient’s vascular system, but the data are often under-utilized as they are siloed within ophthalmology practice EHRs. Recent machine-learning applications have leveraged this information to predict the risk of heart disease, with early trials showing a higher accuracy rating than the trial’s clinician evaluations. Although this solution has yet to be used in a clinical setting, we view this as a near-term opportunity that will create a quicker, easier, and lower-friction (no blood test required) solution for evaluating a patient’s cardiovascular risk.
Wearables and IoT sensors, already noted above, have the potential to revolutionize healthcare for many patient populations—and to help people remain healthy. A wearable device or sensor may one day provide a direct, real-time feed to a patient’s electronic health records, which allows medical staff to monitor and then consult with the patient, either face-to-face or remotely.
Machine learning, a component of artificial intelligence that depends on big data, is already helping physicians improve patient care. IBM, with its Watson Health computer system, has already partnered with Mayo Clinic, CVS Health, Memorial Sloan Kettering Cancer Center, and others. Machine learning, together with healthcare big data analytics, multiplies caregivers’ ability to enhance patient care.
Early applications of machine learning and natural language processing are currently being used to reduce cost within the payments integrity and claims processing vertical. Traditional claims processing is a labor-intensive solution, entailing a clinician reviewing a complex claims document that often encompasses dozens of pages. With the use of robotic process automation (RPA) and natural language processing (NLP), payers are able to highlight the areas of significance within the claims document and increase the findings rate for errors without increasing headcount.
Care Management Support
Barriers in healthcare can extend beyond the core condition and treatment to issues such as access to care, transportation, and readmissions risk.
Transportation is one of the largest barriers to care: low-income, elderly, and disabled patients miss ~24 million appointments annually due to insufficient access to transportation. Further, no-show appointments due to transportation barriers alone represent ~ billion in avoidable downstream costs and ~ billion in lost revenue for doctors.
Today, predictive analytics models can highlight patients at a higher risk of encountering transportation barriers by utilizing a patient’s socioeconomic data. For example, socioeconomic data might show that patients in a certain zip code are unlikely to have a car, alerting the care team to make arrangements for follow-up appointment transportation following a discharge, thus lowering downstream costs.
Reducing readmissions is another focus of care management support teams, with predictive models aiding in the allocation of people, process, and technology resources. Beyond allocation, these predictive analytics can also assist a care team in time management, such as the frequency and intensity of follow-ups based on a patient’s projected degree of risk.
Researchers and funding agencies recognize the benefit of integrating and sharing clinical research data to fill such data “oceans.” For example, the Li Ka Shing Centre for Health Information and Discovery at the University of Oxford provides access to the UK Biobank and plans to add 50 million electronic patient records. In addition:
· The European Medical Information Framework (EMIF) aims to improve access to health data derived from the electronic health records of some 50 million Europeans, as well as cohort datasets from participating research communities.
· Open PHACTS is a platform for researchers and others who need access to pharmacological data. It was built in cooperation with academic and commercial organizations and allows users to extract information and make decisions on complex pharmacologic matters.
· A division of the Dutch multinational company N.V. Philips has aggregated more than 15 petabytes of data drawn from 390 million medical records, patient inputs, and imaging studies. Healthcare personnel can access this massive collection to obtain critical data for informing the clinical decision-making process.
· In the U.S., the National Institutes of Health established the Big Data to Knowledge (BD2K) program, designed to bring biomedical big data to researchers, clinicians, and others. Initiatives such as these will increasingly empower healthcare providers to improve patient care while simultaneously countering the unsustainable cost trajectory. They will also provide researchers with a rich universe of accessible data and information for disease prevention and cure.
Objectives and Study Scope
This study has assimilated knowledge and insight from business and subject-matter experts and from a broad spectrum of market initiatives. Building on this research, the objective of this market research report is to provide actionable intelligence on opportunities alongside the market size of various segments, as well as fact-based information on the key factors influencing the market: growth drivers, industry-specific challenges, and other critical issues, presented as detailed analysis with their impact.
The report in its entirety provides a comprehensive overview of the current global condition, as well as notable opportunities and challenges.
The analysis reflects market size, latest trends, growth drivers, threats, opportunities, as well as key market segments. The study addresses market dynamics in several geographic segments along with market analysis for the current market environment and future scenario over the forecast period.
The report also segments the market into various categories based on the product, end user, application, type, and region.
The report also studies various growth drivers and restraints impacting the market, plus a comprehensive market and vendor landscape, in addition to a SWOT analysis of the key players. This analysis also examines the competitive landscape within each market. Market factors are assessed by examining barriers to entry and market opportunities. Strategies adopted by key players, including recent developments, new product launches, mergers and acquisitions, and other insightful updates, are provided.
Research Process & Methodology
We leverage extensive primary research, our contact database, knowledge of companies and industry relationships, patent and academic journal searches, and institute and university links to build strong visibility into the markets and technologies we cover.
We draw on available data sources and methods to profile developments. We use computerised data mining methods and analytical techniques, including cluster and regression modelling, to identify patterns from publicly available online information on enterprise web sites.
Historical, qualitative, and quantitative information is obtained principally from confidential and proprietary sources, our professional network, annual reports, investor relations presentations, and expert interviews. This information covers key factors such as recent trends in industry performance and identifies the factors underlying those trends: the drivers, restraints, opportunities, and challenges influencing market growth on both the supply and demand sides.
In addition to our own desk research, various secondary sources, such as Hoovers, Dun & Bradstreet, Bloomberg BusinessWeek, and Statista, are consulted to identify key players in the industry and to estimate supply chain and market size, percentage shares, splits, and breakdowns into segments and subsegments with respect to individual growth trends, prospects, and contribution to the total market.
Research Portfolio Sources:
Global Business Reviews, Research Papers, Commentary & Strategy Reports
M&A and Risk Management | Regulation
The future outlook (“forecast”) is based on a set of statistical methods such as regression analysis, industry-specific drivers, and analyst evaluations, as well as analysis of the trends that influence economic outcomes and business decision making.
The Global Economic Model covers the political environment, the macroeconomic environment, market opportunities, policy toward free enterprise and competition, policy toward foreign investment, foreign trade and exchange controls, taxes, financing, the labour market, and infrastructure. We aim to update our market forecast to include the latest market developments and trends.
A review of independent forecasts for the main macroeconomic variables by the following organizations provides a holistic overview of the range of alternative opinions:
As a result, the reported forecasts derive from different forecasters and may not represent the view of any one forecaster over the whole of the forecast period. These projections provide an indication of what is, in our view, most likely to happen, not what will definitely happen.
Short- and medium-term forecasts are based on a “demand-side” forecasting framework, under the assumption that supply adjusts to meet demand either directly through changes in output or through the depletion of inventories.
Long-term projections rely on a supply-side framework, in which output is determined by the availability of labour and capital equipment and the growth in productivity.
Long-term growth prospects are impacted by factors including workforce capabilities, the openness of the economy to trade, the legal framework, fiscal policy, and the degree of government regulation.
Direct contribution to GDP
The method for calculating the direct contribution of an industry to GDP is to measure its ‘gross value added’ (GVA); that is, to calculate the difference between the industry’s total pre-tax revenue and its total bought-in costs (costs excluding wages and salaries).
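The GVA calculation above reduces to a simple subtraction; the figures below are hypothetical and purely illustrative:

```python
# Illustrative gross value added (GVA) calculation, per the definition above:
# GVA = total pre-tax revenue minus bought-in costs (costs excluding wages and salaries).
total_revenue = 500.0     # hypothetical industry pre-tax revenue, $bn
bought_in_costs = 320.0   # hypothetical bought-in costs, $bn

gva = total_revenue - bought_in_costs
print(f"Direct GDP contribution (GVA): ${gva:.0f}bn")  # Direct GDP contribution (GVA): $180bn
```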
Forecasts of GDP growth use the expenditure identity GDP = CN + IN + GS + NEX, where CN is consumption, IN is investment, GS is government spending, and NEX is net exports.
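A worked instance of the expenditure identity, using hypothetical figures (none come from the report):

```python
# Expenditure-approach GDP, matching the formula GDP = CN + IN + GS + NEX.
# All figures below are hypothetical, in $bn.
CN = 14000.0   # consumer spending (consumption)
IN = 3500.0    # investment
GS = 3800.0    # government spending
NEX = -600.0   # net exports (exports minus imports)

gdp = CN + IN + GS + NEX
print(f"GDP: ${gdp:,.0f}bn")  # GDP: $20,700bn
```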
GDP growth estimates take into account:
All relevant markets are quantified using revenue figures for the forecast period. The compound annual growth rate (CAGR) within each segment is used to measure growth and to extrapolate data when figures are not publicly available.
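The CAGR measure and its use for extrapolation can be sketched as follows; the segment revenues are hypothetical:

```python
# CAGR: the constant annual growth rate implied by start and end values,
# then reused to extrapolate years where figures are not publicly available.
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

def extrapolate(value, rate, years):
    return value * (1.0 + rate) ** years

# Hypothetical segment: revenue grew from $10bn to $16.1bn over 5 years.
growth = cagr(10.0, 16.1, 5)
print(f"CAGR: {growth:.1%}")                               # ~10.0%
print(f"Two-year extrapolation: ${extrapolate(16.1, growth, 2):.1f}bn")
```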
Our market segments reflect major categories and subcategories of the global market, followed by an analysis of statistical data covering national spending and international trade relations and patterns. Market values reflect revenues paid by the final customer / end user to vendors and service providers either directly or through distribution channels, excluding VAT. Local currencies are converted to USD using the yearly average exchange rates of local currencies to the USD for the respective year as provided by the IMF World Economic Outlook Database.
Industry Life Cycle Market Phase
Market phase is determined using factors in the Industry Life Cycle model. The adapted market phase definitions are as follows:
The Global Economic Model
The Global Economic Model brings together macroeconomic and sectoral forecasts for quantifying the key relationships.
The model is a hybrid statistical model that uses macroeconomic variables and inter-industry linkages to forecast sectoral output. It forecasts not just output but also prices, wages, employment, and investment. The principal variables driving the industry model are the components of final demand, which directly or indirectly determine the demand facing each industry. However, other macroeconomic assumptions, in particular exchange rates and world commodity prices, also enter the equation, as do industry-specific factors that have affected or are expected to affect the industry.
Forecasts of GDP growth per capita based on these factors can then be combined with demographic projections to give forecasts for overall GDP growth.
Wherever possible, publicly available data from official sources are used for the latest available year. Qualitative indicators are normalised as Normalised x = (x - Min(x)) / (Max(x) - Min(x)), where Min(x) and Max(x) are the lowest and highest values for any given indicator, respectively, and then aggregated across categories to enable an overall comparison. The normalised value is then transformed into a positive number on a scale of 0 to 100. The weighting assigned to each indicator can be changed to reflect different assumptions about their relative importance.
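The min-max normalisation and 0-to-100 scaling described above can be expressed directly; the indicator values below are hypothetical:

```python
# Min-max normalisation onto a 0-100 scale:
# normalised x = (x - min) / (max - min), then multiplied by 100.
def normalise_indicator(values):
    lo, hi = min(values), max(values)
    return [100.0 * (v - lo) / (hi - lo) for v in values]

# Hypothetical raw indicator values for four countries:
scores = normalise_indicator([2.5, 4.0, 7.0, 9.5])
print(scores)  # lowest value maps to 0.0, highest to 100.0
```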
The principal explanatory variable in each industry’s output equation is the Total Demand variable, encompassing exogenous macroeconomic assumptions, consumer spending and investment, and intermediate demand for goods and services by sectors of the economy for use as inputs in the production of their own goods and services.
Elasticity measures the response of one economic variable to a change in another, whether the good or service is demanded as an input into a final product or is itself the final product, and provides insight into the proportional impact of different economic actions and policy decisions.
Demand elasticities measure the change in the quantity demanded of a particular good or service as a result of changes to other economic variables, such as its own price, the price of competing or complementary goods and services, income levels, and taxes.
Demand elasticities can be influenced by several factors. Each of these factors, along with the specific characteristics of the product, will interact to determine the overall responsiveness of demand to changes in prices and incomes. The individual characteristics of a good or service will have an impact, but a number of general factors typically affect the sensitivity of demand:
· The availability of substitutes. Elasticity is typically higher the greater the number of available substitutes, as consumers can easily switch between different products.
· The degree of necessity. Luxury products typically have a higher elasticity, while necessities and habit-forming products tend to have a lower one.
· The proportion of the budget consumed by the item. Products that consume a large portion of the consumer’s budget tend to have greater elasticity.
· The time horizon. Elasticities tend to be greater over the long run because consumers have more time to adjust their behaviour.
· Derived demand. If the product or service is an input into a final product, its price elasticity will depend on the price elasticity of the final product, its cost share in production costs, and the availability of substitutes for that good or service.
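The price elasticity of demand defined above, the percentage change in quantity demanded divided by the percentage change in price, can be computed as follows; the price and quantity figures are hypothetical:

```python
# Price elasticity of demand: percentage change in quantity demanded
# divided by percentage change in price. All figures are hypothetical.
def price_elasticity(q0, q1, p0, p1):
    pct_quantity = (q1 - q0) / q0
    pct_price = (p1 - p0) / p0
    return pct_quantity / pct_price

# Price rises 10% (100 -> 110) and quantity falls 15% (200 -> 170):
e = price_elasticity(200, 170, 100, 110)
print(f"elasticity: {e:.2f}")  # elasticity: -1.50, i.e. demand is elastic (|e| > 1)
```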
Prices are also forecast using an input-output framework. Input costs have two components: labour costs are driven by wages, while intermediate costs are computed as an input-output weighted aggregate of input sectors’ prices. Employment is a function of output and real sectoral wages, which are forecast as a function of whole-economy wage growth. Investment is forecast as a function of output and the aggregate level of business investment.
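The input-output weighted aggregate of input sectors’ prices amounts to a cost-share weighted average; the sectors, price indices, and weights below are hypothetical:

```python
# Sketch of an input-output weighted intermediate-cost index: each industry's
# intermediate costs are a weighted average of its input sectors' price indices,
# with weights given by (hypothetical) input-output cost shares summing to 1.
input_prices = {"energy": 1.12, "chemicals": 1.05, "services": 1.02}   # price indices
io_weights   = {"energy": 0.30, "chemicals": 0.50, "services": 0.20}   # cost shares

intermediate_cost_index = sum(input_prices[s] * io_weights[s] for s in input_prices)
print(f"intermediate cost index: {intermediate_cost_index:.3f}")  # 1.065
```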