
Paragon Consulting Partners

Healthcare IT Solutions and Results


Healthcare IT

October 1, 2019 By Laurie Lafleur

Accurately predict future events to mitigate disease, catastrophes, and natural disasters using tarot-based analytics powered by magic 8-ball technology.

It’s true, today’s clinical and business intelligence can deliver deep insights that can be used for disease mitigation and operational optimization. However, the ability of your analytics software and programs to deliver on lofty marketing promises is only as good as your underlying data strategy. Here are a few considerations to keep in mind before embarking on an analytics journey:

  1. You can’t measure what you don’t know: The first and most important step in building a successful analytics program is understanding your current-state environment, gaps, and challenges. Defining meaningful key performance indicators (KPIs) is not always simple, but it will enable you to measure improvement over time and ensure you’re tracking towards your organizational goals. 
  2. Your results are only as good as the quality of your data: Getting meaningful and actionable insights requires aggregating data across many disparate and heterogeneous locations, systems, and formats. Be sure to have a flexible and scalable data model and a data normalization strategy in place, and carefully and regularly evaluate the quality, consistency, and integrity of your data to ensure accurate and consistent results over time (a rough illustration follows this list). 
  3. The single biggest problem in communication is the illusion that it has taken place: Having scads of clinical and operational metrics is awesome, but those metrics are essentially useless unless you can deliver the insights they contain to the right people, at the right time, and in a format that can be easily consumed and acted upon. This starts with having clear organizational objectives and fostering a data-driven culture where information is distilled and shared according to the communication methods that work best for each of your key stakeholder groups. 
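
To make the second point a little more concrete, below is a rough sketch (in Python, using pandas) of the kind of recurring data-quality check worth automating before trusting any downstream KPI. The column names and the threshold are purely illustrative assumptions, not a prescription for your environment.

```python
# Illustrative only: column names and thresholds are hypothetical.
import pandas as pd

def data_quality_report(exams: pd.DataFrame) -> dict:
    """Compute a few basic quality metrics for an exam-level extract."""
    return {
        "rows": len(exams),
        "mrn_completeness": exams["mrn"].notna().mean(),
        "description_completeness": exams["study_description"].notna().mean(),
        "duplicate_accessions": int(exams["accession_number"].duplicated().sum()),
        "distinct_descriptions": exams["study_description"].nunique(),  # rough proxy for naming drift
    }

# Example: fail fast when a feed slips below an agreed data-quality KPI.
# report = data_quality_report(pd.read_csv("exam_extract.csv"))
# assert report["mrn_completeness"] >= 0.98, "MRN completeness below target"
```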

Are you in the market for an enterprise analytics solution? We can help you separate truth from fiction and select a strategy, technology, and vendor that will best fit your organizational capabilities and needs. Contact us to set up a meeting at RSNA 2019.

Love the logo? Contact us for details on how to order your limited edition tee.

If you enjoyed this post, subscribe to our blog to be notified when new articles are published.


Filed Under: Analytics, Data Management, Healthcare IT, Imaging

September 24, 2019 By Laurie Lafleur

A platform so lightweight it reduces the footprint of all the other applications on your device!

If you think this next-generation spin on zero-footprint (ZFP) technology sounds too far-fetched to be true, you’re right. What you may not realize, however, is that some vendor claims can also stretch the truth at times. While ZFP viewers are beneficial for providing anytime, anywhere access to patient images and reports, there are important considerations and trade-offs to weigh when evaluating these technologies:

  1. Is it truly ZFP? Some viewers that are marketed as ZFP may have third-party dependencies and/or require a small download in order to perform certain workflow functions, such as integration with dictation systems or multi-monitor support. If you are hoping to avoid installations at the desktop, be sure to closely evaluate the application’s technical requirements and any associated trade-offs or limitations. 
  2. Are the images always presented in full fidelity? In order to overcome performance challenges, some applications stream lossy images first and fill in lossless data over time. This may be acceptable for some reference-based use cases but is not ideal for diagnostic interpretation. Be sure to evaluate this in line with your current and future workflow and clinical needs (a simple archive-side check is sketched after this list).
  3. How are the images rendered and delivered to the end user? Image processing is typically performed in one of three ways: server-side (leveraging server GPU or CPU resources), client-side (typically leveraging local CPU resources), or a hybrid combination of both. Each delivery method has pros and cons related to network requirements, hardware cost, and overall performance. Be sure to consider your infrastructure capabilities and budget, and closely evaluate image delivery and manipulation performance when determining which ZFP application is the best fit for your organization. 
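
On the fidelity question, vendor streaming behaviour can only be confirmed through hands-on evaluation, but a related sanity check on the archive side is to verify whether stored objects have already been lossy-compressed somewhere upstream. Here is a minimal sketch using pydicom; the transfer syntax list is partial and the check is illustrative, not exhaustive.

```python
# Sketch: flag DICOM objects that carry evidence of lossy compression.
from pydicom import dcmread

LOSSY_TRANSFER_SYNTAXES = {
    "1.2.840.10008.1.2.4.50",  # JPEG Baseline (lossy 8-bit)
    "1.2.840.10008.1.2.4.51",  # JPEG Extended (lossy 12-bit)
    "1.2.840.10008.1.2.4.81",  # JPEG-LS Near-Lossless
    "1.2.840.10008.1.2.4.91",  # JPEG 2000 (lossy)
}

def is_lossy(path: str) -> bool:
    ds = dcmread(path, stop_before_pixels=True)
    transfer_syntax = str(ds.file_meta.TransferSyntaxUID)
    # (0028,2110) Lossy Image Compression: "01" means lossy compression was applied.
    flagged = str(ds.get("LossyImageCompression", "")) == "01"
    return flagged or transfer_syntax in LOSSY_TRANSFER_SYNTAXES
```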

Are you in the market for a ZFP enterprise solution? We can help you separate truth from fiction and select a vendor and technology that will best fit your organization’s unique capabilities and needs. Contact us to start the discussion.

Love the logo? Contact us for details on how to order your limited edition tee.

If you enjoyed this post, subscribe to our blog to be notified when new articles are published.


Filed Under: Healthcare IT, Imaging

April 13, 2019 By Laurie Lafleur

Much excitement in healthcare today revolves around the untapped potential of population-level ‘big data’, which can be leveraged to inform diagnosis and treatment best practices, enable earlier intervention or proactive mitigation of disease, and support the development of new and innovative medical procedures and interventions.

A luminary teaching and research organization, the National Institutes of Health (NIH) is an excellent example, conducting numerous research initiatives in areas such as disease prevention and treatment, oncology, precision medicine, and neurotechnology, to name a few. Data normalization plays a significant role in supporting these initiatives by creating structured and consistent datasets that can be leveraged by data mining technologies, enabling the processing and analysis of the significant volumes of information required to perform this type of population-level research – a task that would be insurmountable if not automated.

Cleaning and Screening

At the NIH, data analysis begins with the screening and selection of research patients who meet the specific protocol requirements for various ongoing clinical studies. This involves collecting and evaluating patients’ complete medical records, including family and medical history, EMR data, imaging records, and much more, to identify relevant genetic characteristics, demographic factors, disease profiles, and health statuses.

The NIH screens thousands of patient candidates, and as such has sophisticated methods to collate, standardize, and analyze the aforementioned data. First, patient identifiers are normalized upon ingestion and a temporary ‘pre-admit’ MRN is assigned to ensure consistency across diverse data sets and facilitate longitudinal analysis of the complete patient record by NIH systems and researchers throughout the screening process. This also ensures candidate data is kept separate from approved, active research datasets until the patient has been officially selected – at which time the MRN is normalized once again to an active ‘admitted’ profile.
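
As a simplified illustration of that identifier workflow (hypothetical, and not the NIH’s actual implementation), a pre-admit/admitted MRN crosswalk might look something like this:

```python
# Hypothetical sketch of a pre-admit/admitted MRN crosswalk; ID formats are invented.
import uuid

class MrnCrosswalk:
    def __init__(self):
        self._by_source_id = {}  # (source_facility, source_mrn) -> internal MRN
        self._status = {}        # internal MRN -> "pre-admit" or "admitted"

    def ingest(self, source_facility: str, source_mrn: str) -> str:
        """Assign (or reuse) a temporary pre-admit MRN for an incoming candidate record."""
        key = (source_facility, source_mrn)
        if key not in self._by_source_id:
            mrn = "PRE-" + uuid.uuid4().hex[:8].upper()
            self._by_source_id[key] = mrn
            self._status[mrn] = "pre-admit"
        return self._by_source_id[key]

    def admit(self, pre_admit_mrn: str, admitted_mrn: str) -> None:
        """Promote a selected candidate: re-point every source identifier to the active MRN."""
        for key, mrn in self._by_source_id.items():
            if mrn == pre_admit_mrn:
                self._by_source_id[key] = admitted_mrn
        self._status.pop(pre_admit_mrn, None)
        self._status[admitted_mrn] = "admitted"
```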

The Role of Diagnostic Imaging

Imaging data is a key component of the research performed at the NIH. As such, researchers collect patients’ previous imaging from various outside facilities. Key imaging attributes on imported studies, such as study and series descriptors, are normalized according to NIH policies to enable automated data analysis and simplify the screening process for researchers by ensuring exams hang reliably within their diagnostic viewers for quick and efficient review.
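
To make descriptor normalization on import concrete, here is a minimal pydicom sketch; the mapping values are invented for illustration and are not NIH policy.

```python
# Illustrative only: the 'dirty' -> normalized description mapping is invented.
from pydicom import dcmread

SERIES_DESCRIPTION_MAP = {
    "AX T1 POST GAD": "T1 AXIAL POST-CONTRAST",
    "T1_AX+C": "T1 AXIAL POST-CONTRAST",
    "SAG T2 FS": "T2 SAGITTAL FAT-SAT",
}

def normalize_series_description(path: str) -> None:
    """Rewrite SeriesDescription on an imported object to the site-standard value."""
    ds = dcmread(path)
    original = str(ds.get("SeriesDescription", ""))
    ds.SeriesDescription = SERIES_DESCRIPTION_MAP.get(original.strip().upper(), original)
    ds.save_as(path)
```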

As well, the NIH often performs additional advanced imaging exams throughout clinical research projects. To ensure this newly-generated data accurately correlates with the patient’s prior imaging history, can be easily inspected by advanced data analysis technologies, and enables efficient review workflows for researchers, the NIH also enforces normalization policies at the modality level.

Because the NIH works with a large number of trainees and fellows, the aforementioned normalization of diagnostic imaging exams provides the added benefit of creating a consistent educational environment for teachers and students, supporting structured and replicable teaching workflows.

Leveraging and Developing Artificial Intelligence and Machine Learning

Artificial intelligence (AI) and machine learning (ML) may be new entrants into mainstream clinical settings, but at the NIH they have been in use for many years and represent foundational technologies that play a significant role in enabling the automatic identification and comparison of key data elements across hundreds or thousands of relevant research studies, which would not be possible if done manually.

However, there are some data elements that cannot yet be fully analyzed by these technologies. Where bias or variability between reported values or criteria can exist, manual intervention is still required to make appropriate adjustments based on interpretation of the context within which the values were taken. This is the tricky thing about data normalization – it’s often context-sensitive, and the level of reasoning required to consider the ultimate research question or goal when evaluating data relevance and normalization needs cannot yet be reliably accomplished by machines.

For this reason, the NIH continues to support the development and refinement of AI and ML algorithms by leveraging their enormous collection of clean and consistent data to build diverse training environments and collaborating with luminary AI and ML technology organizations to support the further development of advanced context-aware data analysis and clinical use cases. 

Anonymization/De-identification

Another critical aspect of the research performed at the NIH is the anonymization and/or de-identification of personally identifiable information (PII). In order to adhere to patient privacy and human subjects research protection regulations, the NIH at times wishes to, or is required to, de-identify research data by removing or obfuscating information that could otherwise reveal a patient’s identity.

This might be done to allow the NIH researchers to conduct secondary analyses without additional human subjects’ protections approvals (per 45 C.F.R. 46) or in order to share data with other research collaborators. The NIH accomplishes this through standard de-identification or coding techniques and normalization policies with the goal of scrubbing data to remove identifiers.  

However, the NIH employs a specialized technique, known as an ‘Honest Broker’ process, that ensures a link persists between a patient’s anonymized/de-identified data and their identifiable data. This ensures researchers have the ability to follow up with patients for care-related issues and/or can continue to follow outside patients for additional or future research purposes with appropriate safeguards and approvals.
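
As a toy illustration of that coded-link pattern (not the NIH’s actual system), de-identification with a separately held re-identification key might be sketched as follows; the field names are hypothetical, and any real implementation must satisfy applicable privacy regulations and IRB requirements.

```python
# Toy sketch: coded de-identification with a separately held link store.
import secrets

def deidentify(record: dict, link_store: dict) -> dict:
    """Strip direct identifiers and return a coded copy; the code-to-identity
    map lives in link_store, held only by the honest broker."""
    code = "SUBJ-" + secrets.token_hex(4).upper()
    link_store[code] = {"mrn": record["mrn"], "name": record["name"]}
    coded = {k: v for k, v in record.items() if k not in ("mrn", "name", "dob", "address")}
    coded["subject_code"] = code
    return coded

# broker_link = {}                                   # stored separately, under access control
# shared = deidentify(patient_record, broker_link)   # coded copy for collaborators
```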

The real value of data normalization

From tracking tumor response to new treatment programs to developing statistical models for a population’s health characteristics and risk profiles, data curation on the scale achieved by the NIH not only requires the collection of vast amounts of scattered information for analysis, but also mechanisms to transform and normalize incoming data to create well-defined and comparable datasets. For the NIH, data normalization enables the extraction of valuable and predictive clinical insights that can be used to improve both clinical outcomes and population-wide health. 

If you enjoyed this post, subscribe to our blog to be notified when new articles are published.


Filed Under: Analytics, Data Management, Healthcare IT, Imaging, Uncategorized Tagged With: Clinical Research, data normalization, Enterprise Imaging, health data, Health IT, healthcare data, healthcare IT, HealthIT, medical imaging data, radiology data

January 28, 2019 By Laurie Lafleur

Normalization has become an essential activity for any data-driven organization. While it does require an investment in time and resources to define and apply the policies, structures, and values, you will find that it is well worth the effort. Not only will you see measurable improvements in the quality and efficiency of clinical workflow, stakeholder satisfaction, and your bottom line – you will also unlock the untapped potential of what could prove to be one of your organization’s biggest assets – your data! That being said, let’s take a look at the various methods that can be used to make sure your data falls in line and conforms to your new norms.

What does normal look like anyway?

The first step is identifying and defining the attributes to be normalized. This begins with a close look at your organization’s key challenges and goals. Having trouble with data integrity, fragmentation, and collisions? Take a look at how unique IDs are assigned, reconciled, and managed at the facility, system, device, and patient levels. Hearing lots of complaints from your clinical team about broken and inefficient workflow? Consider looking at the variations in procedure, exam, and diagnostic naming conventions across your enterprise. Once the troublesome attributes have been defined, key stakeholders should be engaged to define what the ‘gold standard’ values should be, and to map these to the ‘dirty’ values that are plaguing your system.

Keeping it clean on-the-fly

Once the normalized values have been defined, transient systems, such as DICOM and HL7 routers, or destination systems, such as EHR/EMRs, Picture Archiving and Communication Systems (PACS), or Vendor Neutral Archives (VNAs), can often be configured to inspect data as it arrives, dynamically identify and adjust any inconsistencies, and ensure what is stored and shared adheres to normalization policies. This is accomplished through a normalization or ‘tag morphing’ rules engine that is able to inspect incoming data on-the-fly, identify deviations from normalization policies using pattern matching algorithms, and apply the appropriate transformations based on a pre-defined set of consistent values. This ensures incoming data is clean, consistent, and reliable – regardless of its original source or format.
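
To make the idea concrete, here is a minimal sketch of such a pattern-matching rules engine in Python (using pydicom); the rules and target values are invented for illustration.

```python
# Minimal sketch of an on-the-fly normalization ("tag morphing") rule set.
# The patterns and normalized values below are invented examples.
import re
from pydicom.dataset import Dataset

RULES = [
    # (DICOM keyword, pattern to match, normalized value)
    ("StudyDescription", re.compile(r"^CT\s*CHEST", re.IGNORECASE), "CT CHEST W/O CONTRAST"),
    ("SeriesDescription", re.compile(r"scout|localizer", re.IGNORECASE), "LOCALIZER"),
    ("InstitutionName", re.compile(r"st\.?\s*mary", re.IGNORECASE), "ST MARYS REGIONAL"),
]

def morph(ds: Dataset) -> Dataset:
    """Apply each matching rule to an incoming dataset before it is stored or forwarded."""
    for keyword, pattern, normalized in RULES:
        value = str(ds.get(keyword, ""))
        if value and pattern.search(value):
            setattr(ds, keyword, normalized)
    return ds

# A router or archive would typically call this on every received object:
# store(morph(incoming_dataset))
```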

As well, it enables rapid onboarding of outside facilities and systems resulting from mergers or acquisitions, as new systems and modalities can be integrated quickly without requiring updates to hanging protocols, routing and retention policies, reporting systems, etc.

Finally, it mitigates the impact of any unforeseen changes that may occur due to vendor updates at the modality, which can sometimes alter attribute values such as series descriptions. This is most common among complex multi-slice acquisition devices, and ultimately results in broken hanging protocols and frustrated radiologists.

Garbage in, garbage out

In some cases, it may be necessary to enforce data normalization policies at the modalities themselves. This is especially important if receiving systems do not provide robust tag morphing capabilities, leaving you without the ability to enforce normalization policies on-the-fly. As well, if your technologists are frequently required to manually enter or adjust data values at the modality, the resulting data sets are more likely to contain added variability and errors, which even a discerning rules engine may not always catch. In either case, why not take the opportunity to ensure your modalities are sending clean data from the get-go? As the old adage says: garbage in, garbage out.

When you’re going where the grass is greener

If you’re considering retiring and replacing ageing systems, data migrations present an excellent opportunity to clean and normalize existing data as it moves between systems, providing immediate benefits like better access and filtering of relevant priors, improved reading efficiency through consistent and reliable hanging protocols, and the ability to incorporate historic data into analytics, AI, and deep learning applications.

As well, it positions you to minimize the effort involved in any future system replacement or migration by simplifying the configuration of any system features that rely on specific attribute values or structures to function effectively. For instance, hanging protocols are not typically transferrable between vendors’ systems and therefore need to be re-created whenever a PACS replacement or viewer change occurs. The consistency of normalized data facilitates rapid configuration of protocols within a new system, because it eliminates the complexity of configuring multiple protocols for each distinct layout, or of building the viewer-provided lexicons required to ‘map’ the various combinations and permutations of inconsistent protocol, study, or series information. The same holds true for other attribute-driven features, including but not limited to reading worklists, routing or forwarding rules, information lifecycle management policies, and analytics and deep learning.

That sounds like a lot of work…

More often than not, the perceived effort of defining such practices and data structures seems overwhelming to already busy IT departments and project teams. This often results in a choice to forgo data normalization activities in favour of saving some time and effort, or simply due to a lack of financial and human resources.

While it may be true that data normalization is no small task, the benefits far outweigh the cost of the initial investment, and many organizations are now realizing the strategic value of data normalization initiatives. The heaviest lift, by far, is the process of gathering and analyzing existing data and distilling it into normalized data structures and values, which is a one-time effort that will yield immediate and recurring dividends by creating an actionable data repository that supports clinical and business continuous improvement initiatives.

By now you might be thinking, “this sounds good in theory, but it’s pretty anecdotal. Where’s the real-life evidence that data normalization is worth the effort?”

Our next posts will include real-world examples of how some luminary healthcare organizations have leveraged data normalization to achieve a variety of measurable benefits. Subscribe to our blog to be notified when the next post becomes available!


Filed Under: Analytics, Data Management, Healthcare IT, Workflow Tagged With: data migration, data normalization, Enterprise Imaging, health data, Health IT, healthcare data, healthcare IT

January 14, 2019 By Laurie Lafleur

Why Be Normal?

There are many times when it’s important to be unique and stand out. For example, when attending a job interview, showing off your costume for a Halloween competition, or when you’re auditioning for American Idol. The creation of healthcare data, however, is not one of them.

Unfortunately, unique is often the default state of imaging and other health data as it is generated across modalities, systems, departments, and facilities – where diverse vendors and local policies result in bespoke data management practices and attribute values, such as procedure or study names, series descriptors, disease characteristics, and the formats, tags, or sequences in which data is stored. Such inconsistency in data structure and content leads to a number of workflow and operational challenges, and significantly reduces the value of the underlying health data. For instance:

  1. It complicates the creation and maintenance of the reliable and consistent hanging protocols required for efficient reading workflow, forcing teams to maintain a never-ending and complex set of rules just to keep studies hanging correctly
  2. It limits the ability to effectively curate and analyze data for clinical and business improvement purposes
  3. It inhibits effective artificial intelligence (AI) and machine learning algorithm training
  4. It results in difficult and costly migration implications when considering system retirement or replacement

Data normalization – the process of defining standards and applying them to the structure and content of health data – overcomes these challenges by ensuring incoming data arrives in a consistent and predictable manner. The resulting clean, standardized data can be leveraged to:

  1. Inform continuous improvement initiatives to improve workflow efficiency, quality, and cost
  2. Better support interoperability between existing applications, and simplify the implementation and integration of new enterprise imaging systems
  3. Reduce the cost and complexity of future data migration projects
  4. Allow data to be more easily inspected and mined to unlock valuable insights at departmental, organizational, and population levels

The value of being normal(ized)

Whether undertaken as part of a larger Enterprise Imaging initiative, or a standalone project, data normalization has the potential to yield a huge return on investment. Not only can it realize measurable improvements in the quality and efficiency of clinical workflow, stakeholder satisfaction, and your bottom line, it can also unlock the untapped value of what could prove to be one of your organization’s biggest assets – your data.

You may be wondering – this all sounds good in theory, but how exactly can data be normalized and what are the practical business and clinical applications? To address these questions, we will be posting a series of articles that will explore the methods of data normalization and dive deeper into the clinical, operational, and fiscal use cases and benefits for data normalization at the enterprise, departmental, and modality level.

Subscribe to our blog to be notified when a new post is available and learn more about how data normalization can better support workflow optimization, interoperability, data curation and migration, artificial intelligence and machine learning, and data analysis and intelligence.

 

Filed Under: Analytics, Data Management, Healthcare IT, Imaging, Workflow

