
Paragon Consulting Partners

Healthcare IT Solutions and Results


Workflow

November 13, 2019 By Laurie Lafleur

AI so fast it has time to read your images and clean your floors.

This tongue-in-cheek statement isn’t unlike the many other bold claims that are out there. The recent hype surrounding AI in healthcare is boundless, with promises that it can exponentially increase productivity, improve accuracy, and slash operational costs at a scale that was previously unfathomable. As a result, many healthcare organizations are actively developing strategies to incorporate AI into their technology roadmaps and clinical workflows, hoping to immediately reap the promised benefits.

Unfortunately, in many cases, implementation of AI in clinical practice has fallen short of expectations, proving to be more complicated, expensive, and cumbersome than originally advertised. This is largely due to an overarching perception in the industry that off-the-shelf AI applications can be procured and integrated ‘out-of-the-box’ into a variety of existing technologies, such as EHRs, PACS, and other data sources. The reality, however, is that successful adoption and deployment of AI requires careful evaluation of the following key considerations: 

1. Finding the right AI fit for your organization

There is a plethora of different AI algorithms out there – each with its own distinct use cases and value propositions, including but not limited to:

  • Predictive analytics to profile your organization’s capacity and performance potential in line with demand and growth patterns; 
  • Image analysis to automatically detect, escalate, and monitor abnormalities or disease; 
  • Proactive recommendations to treat or mitigate disease based on family history, clinical, social, and environmental factors. 

Determining which AI applications will provide the most ‘bang for your buck’ requires thoughtful evaluation and identification of your organization’s own unique challenges and objectives. As well, it’s important to consider how AI will integrate into your existing technologies and day-to-day operations. Applications that fragment workflow or introduce cumbersome steps rarely achieve successful adoption – especially among busy clinicians. Be sure that your AI roadmap prioritizes applications that will bring meaningful and measurable benefits to address your burning platforms, and ensure your AI vendor has designed integrations, interfaces, and feedback loops that deliver a seamless and efficient user experience.

2. AI is only as good as its underlying data

AI models typically have specific requirements regarding the structure, content, and format of the data they are analyzing. Unfortunately, most healthcare organizations have a unique data fingerprint, with diverse technology ecosystems, image acquisition techniques, clinical documentation practices, and population characteristics that can introduce variability and negatively impact the accuracy of AI algorithms in clinical practice.

Careful analysis of each algorithm’s data requirements alongside your own data warehouse is required to determine whether there are any content or formatting gaps that will need to be addressed, and/or whether changes will be required to HL7/FHIR, DICOM, XDS, or other interfaces.
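To make that gap analysis concrete, here is a minimal sketch of what an automated audit might look like, assuming a hypothetical vendor specification: it samples local DICOM headers (using the pydicom library) and flags studies that are missing required attributes or fall outside the algorithm's stated acquisition parameters. The required tags, allowed modalities, and thresholds below are illustrative assumptions, not any vendor's actual requirements.

```python
# A minimal sketch: audit local DICOM headers against a *hypothetical* AI
# algorithm's input requirements. Requires the pydicom package; all tag names,
# allowed values, and thresholds are illustrative assumptions.
from pathlib import Path
import pydicom

REQUIRED_TAGS = ["Modality", "BodyPartExamined", "SliceThickness", "PixelSpacing"]
ALLOWED_MODALITIES = {"CT"}
MAX_SLICE_THICKNESS_MM = 2.5

def audit_study(folder: Path) -> list[str]:
    """Return the gaps that would block this study from being sent to the algorithm."""
    gaps = []
    for path in folder.glob("*.dcm"):
        ds = pydicom.dcmread(path, stop_before_pixels=True)
        for tag in REQUIRED_TAGS:
            if getattr(ds, tag, None) in (None, ""):
                gaps.append(f"{path.name}: missing {tag}")
        if getattr(ds, "Modality", "") not in ALLOWED_MODALITIES:
            gaps.append(f"{path.name}: unsupported modality {getattr(ds, 'Modality', '?')}")
        thickness = getattr(ds, "SliceThickness", None)
        if thickness is not None and float(thickness) > MAX_SLICE_THICKNESS_MM:
            gaps.append(f"{path.name}: slice thickness {thickness} mm exceeds limit")
    return gaps

if __name__ == "__main__":
    for issue in audit_study(Path("./sample_ct_study")):
        print(issue)
```

Running an audit like this against a representative sample of historical studies quickly tells you whether the gaps are occasional data-entry issues or systematic interface changes.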

As well, it’s essential that your AI vendor has a strategy in-place to validate and, if necessary, re-train their algorithms against your own unique datasets before going live to ensure quality and accuracy of results.

Finally, it’s common – even expected – for data structure and semantics to shift over time due to factors such as process changes, the introduction of new or updated imaging modalities, or changes in patient population characteristics. It’s therefore critical that AI vendors have processes in place to proactively identify and accommodate these changes on an ongoing basis to ensure continued accuracy and efficacy within the live clinical environment.
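A lightweight way to catch this kind of drift is to periodically compare the attribute values arriving from production against a baseline snapshot captured during validation, and flag anything unseen before it silently degrades model performance. The sketch below assumes a hypothetical baseline file and uses SeriesDescription as the monitored attribute purely for illustration.

```python
# A minimal drift-surveillance sketch: flag attribute values never seen during
# validation. The baseline file name and sample values are illustrative only.
import json
from collections import Counter

BASELINE_FILE = "series_description_baseline.json"   # hypothetical baseline snapshot

def load_baseline() -> set[str]:
    with open(BASELINE_FILE) as f:
        return set(json.load(f))

def find_new_values(recent_values: list[str], baseline: set[str]) -> Counter:
    """Count values that were not present when the model was validated."""
    return Counter(v for v in recent_values if v not in baseline)

if __name__ == "__main__":
    baseline = load_baseline()
    this_week = ["CHEST W CONTRAST", "Chest w/ contrast", "CT CHEST WO"]  # e.g. pulled from PACS
    for value, count in find_new_values(this_week, baseline).items():
        print(f"New value '{value}' seen {count}x - review before it reaches the AI model")
```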

3. Striking a cost-value balance

One of the biggest barriers to adoption for AI today continues to be the financial reimbursement model – or lack thereof. AI introduces additional operational (and sometimes capital) costs, which in most cases do not realize a full return through billable outcomes.

While there are a few examples where computer-generated findings qualify for reimbursement (e.g. CAD for mammography), CMS has yet to provide direct reimbursement models for providers to bill for AI-rendered diagnostic interpretations or reviews. This doesn’t mean AI is a money pit – rather, ROI is measured by improved workflow and provider efficiency and accuracy, which in turn increases capacity (revenue) and decreases resource utilization, risk, and other ‘waste’ (costs).
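In practice this means building a simple, defensible business case from your own volumes rather than waiting for a CPT code. The back-of-envelope calculation below is purely illustrative; every number is an assumption to be replaced with your organization's actual study volumes, staffing costs, and vendor pricing.

```python
# A back-of-envelope ROI sketch - every figure below is a hypothetical assumption
# to be replaced with your own volumes, rates, and vendor pricing.
annual_ai_cost = 150_000                 # subscription + integration + support (assumed)
reads_per_year = 80_000                  # annual imaging study volume (assumed)
minutes_saved_per_read = 0.5             # vendor-claimed efficiency gain (assumed)
radiologist_cost_per_minute = 8.0        # fully loaded cost (assumed)
added_reads_from_capacity = 2_000        # extra studies absorbed with existing staff (assumed)
net_revenue_per_read = 45.0              # average net reimbursement (assumed)

efficiency_savings = reads_per_year * minutes_saved_per_read * radiologist_cost_per_minute
capacity_revenue = added_reads_from_capacity * net_revenue_per_read
roi = (efficiency_savings + capacity_revenue - annual_ai_cost) / annual_ai_cost

print(f"Efficiency savings: ${efficiency_savings:,.0f}")
print(f"Capacity revenue:   ${capacity_revenue:,.0f}")
print(f"Simple ROI:         {roi:.0%}")
```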

As well, AI can automate risk stratification, data correlation, and reporting to help providers qualify for additional reimbursements, incentives, or even grants related to specific disease profiles and patient populations – a process that would be all but impossible if done manually due to the sheer volume and variability of the underlying data. The cost-value balance is unique for each healthcare organization and depends upon the opportunities, challenges, and priorities identified above. In any case, be sure to challenge your AI vendor to provide quantifiable evidence that they will be able to deliver the ROI you’re expecting – whatever that may be. 

Love the logo? Contact us for details on how to order your limited edition tee.

Are you ready to integrate AI into your organization? We can help you separate truth from fiction and select a strategy, technology, and vendor that will best fit your organizational capabilities and needs. Contact us to set up a meeting at RSNA 2019.


Filed Under: Artificial Intelligence, Healthcare IT, Imaging, Workflow

October 15, 2019 By Laurie Lafleur

Tools you didn’t know you needed for imaging modalities that don’t even exist yet.

While it’s important (nay, essential) to keep an eye on the future and plan for new technologies, it’s equally important to ensure the technologies you have in place, or are considering introducing, will address your current challenges and bring immediate measurable return on investment in terms of care quality and efficiency, stakeholder satisfaction, and/or your financial bottom line. As such, it is necessary to perform a careful evaluation of your current-state workflow and technical ecosystem and design an Enterprise Imaging strategy that aligns with your near- and long-term objectives, resource plan, and budget. The following considerations can assist you in your evaluation:

  1. Remember the Pareto Principle (80/20 rule): this states that in most cases 80% of your results will come from 20% of your activities. Or conversely, 80% of your problems stem from 20% of the root causes. In the imaging world this means incremental improvements should not be undervalued. Comprehensive workflow analysis can uncover inefficiencies, gaps, and opportunities for optimization that may not all require a heavy lift to address. Focusing on technologies that are equipped to optimize core workflows will often get you further than looking at bells and whistles that bring incremental value to only a few narrow use cases. 
  2. Don’t be blinded by the shiny objects: Speaking of bells and whistles, some vendor technologies appear to offer lots and lots of these and boast their ability to go broad and deep across the entire spectrum of imaging specialties – well beyond the traditional ‘ologies. What you have to determine is this: while they may have lots of tools that tick many of your RFP boxes, how well do those tools work in reality? Do they adequately cover the breadth of functionality you require to truly integrate into or replace incumbent technologies? How reliable are they, and have they been proven in clinical practice? If not, are you willing to invest the time and resources to help your vendor overcome these hurdles and develop potentially disruptive technologies (because there are definitely pros and cons on each side of that fence)? Be sure to carefully evaluate the needs of your service lines and care providers, and consider your available resources when determining how many of these tools and features can be feasibly integrated into your workflow, which will make a real, measurable impact in your organization, and which ones are the ‘shiny objects’ to be avoided (at least for now). 
  3. Don’t get hit by the swinging pendulum: There’s been a lot of debate and shifting of opinions in the industry regarding which deployment model is best: best-of-breed, or single-vendor. While both have their merits, the real answer often lies somewhere in between. No one vendor yet provides all of the tools and features that will satisfy the bespoke needs of primary care providers, specialists, clinicians, patients, and other stakeholders across the care continuum. This means that in pretty much all cases you will be looking at some flavour of a multi-vendor solution. How much you can squeeze out of your primary vendor depends again on their capabilities, product maturity, and how these align with the unique needs of your particular organization. Try not to fall victim to the swinging ‘hype’ pendulum and force yourself into one model or the other – rather, take the time to properly assess your current and desired future states alongside the current and future capabilities of technology vendors, and look for a fit that will bring the most value today, while supporting your vision for tomorrow.

Are you in the market for an Enterprise Imaging or PACS replacement solution? We can help you separate truth from fiction and select a strategy, technology, and vendor that will best fit your organizational capabilities and needs. Contact us to set up a meeting at RSNA 2019.

Love the logo? Contact us for details on how to order your limited edition tee.

If you enjoyed this post, subscribe to our blog to be notified when new articles are published.


Filed Under: Healthcare IT, Imaging, Workflow

January 28, 2019 By Laurie Lafleur

Normalization has become an essential activity for any data-driven organization. While it does require an investment in time and resources to define and apply the policies, structures, and values, you will find that it is well worth the effort. Not only will you see measurable improvements in the quality and efficiency of clinical workflow, stakeholder satisfaction, and your bottom line – you will also unlock the untapped potential of what could prove to be one of your organization’s biggest assets – your data! That being said, let’s take a look at the various methods that can be used to make sure your data falls in line and conforms to your new norms.

What does normal look like anyway?

The first step is identifying and defining the attributes to be normalized. This begins with a close look at your organization’s key challenges and goals. Having trouble with data integrity, fragmentation, and collisions? Take a look at how unique IDs are assigned, reconciled, and managed at the facility, system, device, and patient levels. Hearing lots of complaints from your clinical team about broken and inefficient workflow? Consider looking at the variations in procedure, exam, and diagnostic naming conventions across your enterprise. Once the troublesome attributes have been defined, key stakeholders should be engaged to define what the ‘gold standard’ values should be, and map these to the ‘dirty’ values that are plaguing your system.

Keeping it clean on-the-fly

Once the normalized values have been defined, transient systems, such as DICOM and HL7 routers, or destination systems, such as EHR/EMRs, Picture Archiving and Communication Systems (PACS), or Vendor Neutral Archives (VNA), can often be configured to inspect data as it arrives, dynamically identify and adjust any inconsistencies, and ensure what is stored and shared adheres to normalization policies. This is accomplished through a normalization or ‘tag morphing’ rules engine that is able to inspect incoming data on-the-fly, identify deviations from normalization policies using pattern matching algorithms, and apply the appropriate transformations based on the pre-defined set of consistent values. This ensures incoming data is clean, consistent, and reliable – regardless of its original source or format.
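As a concrete (and deliberately simplified) illustration, the sketch below pairs a small gold-standard mapping with regex pattern matching to normalize one attribute, SeriesDescription, as it passes through a router or interface engine. The canonical values and patterns are illustrative assumptions, not a recommended lexicon.

```python
# A minimal 'tag morphing' rule sketch: regex patterns map the many 'dirty'
# variants seen in incoming data to a single gold-standard value.
# Canonical values, patterns, and the chosen attribute are illustrative only.
import re

# Gold-standard value -> patterns that should be normalized to it
SERIES_DESCRIPTION_RULES = {
    "CT CHEST W CONTRAST": [r"(?i)^ct\s*chest.*(w|with)\s*/?\s*contrast"],
    "CT CHEST WO CONTRAST": [r"(?i)^ct\s*chest.*(wo|without)\s*contrast"],
}

def normalize_series_description(value: str) -> str:
    """Return the gold-standard value if a rule matches, otherwise pass the original through."""
    for canonical, patterns in SERIES_DESCRIPTION_RULES.items():
        if any(re.search(p, value) for p in patterns):
            return canonical
    return value  # unmatched values can be logged for stakeholder review

# e.g. applied as data flows through a router, before storage in PACS/VNA
print(normalize_series_description("Ct Chest w/contrast 2.5mm"))   # -> CT CHEST W CONTRAST
```

In a production rules engine the same idea is usually expressed through vendor configuration rather than code, but the logic is the same: match the incoming value, substitute the agreed-upon gold standard, and log anything that falls through for review.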

As well, it enables rapid integration of outside facilities and systems resulting from mergers or acquisitions as new systems and modalities can be integrated quickly without requiring updates to hanging protocols, routing and retention policies, reporting systems, etc.

Finally, it mitigates the impact of any unforeseen changes that may occur due to vendor updates at the modality, which can sometimes alter attribute values such as series descriptions. This is most common among complex multi-slice acquisition devices, and ultimately results in broken hanging protocols and frustrated radiologists.

Garbage in, garbage out

In some cases, it may be necessary to enforce data normalization policies at the modalities themselves. This is especially important if receiving systems do not provide robust tag morphing capabilities, leaving you without the ability to enforce normalization policies on-the-fly. As well, if your technologists are frequently required to manually enter or adjust data values at the modality, added variability and potential for error in the resulting data sets are more likely to occur – and may not always be caught by even a discerning rules engine. In either case, why not take the opportunity to ensure your modalities are sending clean data from the get-go? As the old adage says: garbage in, garbage out.
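Even a very simple validation step at the modality console or on a QC worklist can catch most manual-entry variability before it propagates downstream. The sketch below checks two hypothetical technologist-entered fields against short approved lists; the field names and allowed values are assumptions for illustration only.

```python
# A minimal modality-side/QC validation sketch. Field names and allowed values
# are hypothetical; a real deployment would pull these from your data dictionary.
ALLOWED_BODY_PARTS = {"CHEST", "ABDOMEN", "PELVIS", "HEAD"}
ALLOWED_LATERALITY = {"L", "R", ""}  # empty string for non-lateral exams

def validate_entry(body_part: str, laterality: str) -> list[str]:
    """Return human-readable problems so they can be corrected before the study is sent."""
    problems = []
    if body_part.strip().upper() not in ALLOWED_BODY_PARTS:
        problems.append(f"Body part '{body_part}' is not on the approved list")
    if laterality.strip().upper() not in ALLOWED_LATERALITY:
        problems.append(f"Laterality '{laterality}' must be L, R, or blank")
    return problems

print(validate_entry("chest ", ""))      # [] - clean
print(validate_entry("Chst", "Left"))    # two problems flagged before the study leaves QC
```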

When you’re going where the grass is greener

If you’re considering retiring and replacing ageing systems, data migrations present an excellent opportunity to clean and normalize existing data as it moves between systems, providing immediate benefits like better access and filtering of relevant priors, improved reading efficiency through consistent and reliable hanging protocols, and the ability to incorporate historic data into analytics, AI, and deep learning applications.

As well, it positions you to minimize the effort involved in any future system replacement or migration activity by simplifying the configuration of any system features that rely on specific attribute values or structures to function effectively. For instance, hanging protocols are not typically transferrable between vendors’ systems and therefore need to be re-created whenever a PACS replacement or viewer change occurs. Normalized data facilitates rapid configuration of protocols within a new system, as it eliminates the complexity of configuring multiple protocols for each distinct layout, or of building viewer-provided lexicons to ‘map’ the various combinations and permutations of inconsistent protocol, study, or series information. The same holds true for other attribute-driven features including but not limited to reading worklists, routing or forwarding rules, information lifecycle management policies, and analytics and deep learning.

That sounds like a lot of work…

More often than not, the perceived effort of defining such practices and data structures seems overwhelming to already busy IT departments and project teams. This often results in a choice to forgo data normalization activities in order to save time and effort, or simply due to a lack of financial and human resources.

While it may be true that data normalization is no small task, the benefits far outweigh the cost of the initial investment, and many organizations are now realizing the strategic value of data normalization initiatives. The heaviest lift, by far, is the process of gathering and analyzing existing data and distilling it into normalized data structures and values, which is a one-time effort that will yield immediate and recurring dividends by creating an actionable data repository that supports clinical and business continuous improvement initiatives.
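For the analysis itself, even a simple profiling pass goes a long way: counting the distinct values of a troublesome attribute shows exactly how much variation exists and which variants account for most of your volume. The sketch below assumes a hypothetical CSV export of series metadata; the file name and column header are placeholders.

```python
# A minimal profiling sketch, assuming study/series metadata can be exported to
# CSV (file name and column header are illustrative). The most frequent values
# tell you which variants to normalize first.
import csv
from collections import Counter

EXPORT_FILE = "series_metadata_export.csv"    # hypothetical export from PACS/VNA
ATTRIBUTE_COLUMN = "SeriesDescription"        # attribute being profiled

def profile_attribute(path: str, column: str, top_n: int = 20) -> list[tuple[str, int]]:
    """Return the most frequent values for one attribute, highest volume first."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row.get(column, "").strip().upper()] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    for value, count in profile_attribute(EXPORT_FILE, ATTRIBUTE_COLUMN):
        print(f"{count:>8}  {value or '<blank>'}")
```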

By now you might be thinking, “this sounds good in theory, but it’s pretty anecdotal. Where’s the real-life evidence that data normalization is worth the effort?”

Our next posts will include real-world examples of how some luminary healthcare organizations have leveraged data normalization to achieve a variety of measurable benefits. Subscribe to our blog to be notified when the next post becomes available!


Filed Under: Analytics, Data Management, Healthcare IT, Workflow Tagged With: data migration, data normalization, Enterprise Imaging, health data, Health IT, healthcare data, healthcare IT

January 14, 2019 By Laurie Lafleur

Why Be Normal?

There are many times when it’s important to be unique and stand out. For example, when attending a job interview, showing off your costume for a Halloween competition, or when you’re auditioning for American Idol. The creation of healthcare data, however, is not one of them.

Unfortunately, unique is often the default state of imaging and other health data as it is generated across modalities, systems, departments, and facilities – where the presence of diverse vendors and local policies results in bespoke data management practices and attribute values such as procedure or study names, series descriptors, and disease characteristics, as well as differences in the formats, tags, or sequences in which data is stored. Such inconsistency in data structure and content leads to a number of workflow and operational challenges, and significantly reduces the value of the underlying health data. For instance:

  1. It complicates the creation and maintenance of the reliable and consistent hanging protocols required for efficient reading workflow, forcing a never-ending, hard-to-manage set of rules to keep them working
  2. It limits the ability to effectively curate and analyze data for clinical and business improvement purposes
  3. It inhibits effective artificial intelligence (AI) and machine learning algorithm training
  4. It results in difficult and costly migration implications when considering system retirement or replacement

Data normalization – the process of defining standards and applying them to the structure and content of health data – overcomes these challenges by ensuring incoming data arrives in a consistent and predictable manner. The resulting clean, standardized data can be leveraged to:

  1. Inform continuous improvement initiatives to improve workflow efficiency, quality, and cost
  2. Better support interoperability between existing applications, and simplify implementation and integration of new enterprise imaging systems
  3. Reduce the cost and complexity of future data migration projects
  4. Allow data to be more easily inspected and mined to unlock valuable insights at departmental, organizational, and population levels

The value of being normal(ized)

Whether undertaken as part of a larger Enterprise Imaging initiative, or a standalone project, data normalization has the potential to yield a huge return on investment. Not only can it realize measurable improvements in the quality and efficiency of clinical workflow, stakeholder satisfaction, and your bottom line, it can also unlock the untapped value of what could prove to be one of your organization’s biggest assets – your data.

You may be wondering – this all sounds good in theory, but how exactly can data be normalized and what are the practical business and clinical applications? To address these questions, we will be posting a series of articles that will explore the methods of data normalization and dive deeper into the clinical, operational, and fiscal use cases and benefits for data normalization at the enterprise, departmental, and modality level.

Subscribe to our blog to be notified when a new post is available and learn more about how data normalization can better support workflow optimization, interoperability, data curation and migration, artificial intelligence and machine learning, and data analysis and intelligence.

 

Filed Under: Analytics, Data Management, Healthcare IT, Imaging, Workflow

Copyright © 2023 · Paragon Consulting Partners, LLC · 500 Capitol Mall, Suite 2350, Sacramento, CA 95814 | 916-382-8934