
Paragon Consulting Partners

Healthcare IT Solutions and Results


data migration

February 4, 2019 By Laurie Lafleur

Data normalization is an essential activity whenever two or more previously independent facilities are migrated into a single, consolidated data repository, as it preserves the integrity of incoming data by preventing broken links between patient records and collisions between device and study identifiers.

Jude Mosley, Manager of Information Technology, Imaging at Sentara Healthcare, knows this better than anyone. She and her team face this very challenge regularly whenever a new hospital or imaging center is integrated into the health system’s 5-hospital network in Norfolk, Virginia.

Hey, that’s my jacket!

The first challenge is ensuring patient jackets from differing systems are reconciled correctly so that no information is lost or erroneously attached to the wrong patient. Using a master patient index (MPI), Sentara is able to match patients between the source and destination systems and update all incoming studies with the correct patient identifier upon ingestion. This results in a single, consistent set of MRNs across the Sentara network and eliminates the future risk that an old MRN could be mistakenly used.
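Sentara’s specific tooling isn’t described here, but the mechanic is easy to picture. The sketch below (in Python, with hypothetical facility codes, MRN formats, and field names) shows the basic idea: each incoming study’s source identifier is looked up in the MPI and rewritten to the enterprise MRN before the study lands in the destination archive.

    # Illustrative sketch only: a toy master patient index lookup used to rewrite
    # incoming patient identifiers during ingestion. Facility codes, MRN formats,
    # and field names are hypothetical.

    # (source facility, source-system MRN) -> enterprise MRN
    MPI = {
        ("FAC_A", "000123"): "SEN-1000456",
        ("FAC_B", "98765"): "SEN-1000456",  # same patient known to a second source system
    }

    def reconcile_patient(study: dict, source_facility: str) -> dict:
        """Replace the source-system MRN with the matching enterprise MRN."""
        key = (source_facility, study["PatientID"])
        enterprise_mrn = MPI.get(key)
        if enterprise_mrn is None:
            # No MPI match: hold the study for manual reconciliation instead of guessing
            raise LookupError(f"No MPI match for {key}")
        updated = dict(study)
        updated["PatientID"] = enterprise_mrn
        return updated

    incoming = {"PatientID": "000123", "AccessionNumber": "20190204-001"}
    print(reconcile_patient(incoming, "FAC_A"))  # PatientID is rewritten to SEN-1000456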

The next challenge is ensuring study accession numbers remain unique, and that the migration process doesn’t introduce duplicate accession numbers for different patients. To accomplish this, Sentara employed normalization policies that prefixed the accession numbers and updated the location attribute to a 3-character code representing the source facility for all incoming studies. Not only did this avoid potential collisions between accession numbers, it also added a level of transparency to the migrated data that allowed Sentara to quickly and easily identify each study’s original source.
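A minimal sketch of this pattern might look like the following. It assumes the pydicom library, uses InstitutionName to stand in for the “location” attribute, and invents a “NOR” facility code; none of these details are drawn from Sentara’s actual configuration.

    # Illustrative sketch only: prefixing accession numbers and stamping a source
    # facility code on studies as they are migrated. Assumes pydicom; the "NOR"
    # code and the use of InstitutionName as the location attribute are
    # illustrative choices only.
    import pydicom

    SOURCE_CODE = "NOR"  # hypothetical 3-character code for the source facility

    def normalize_study(path: str) -> None:
        ds = pydicom.dcmread(path)
        accession = ds.get("AccessionNumber", "") or ""
        # Prefix the accession number so it cannot collide with an existing study
        if not accession.startswith(SOURCE_CODE):
            ds.AccessionNumber = f"{SOURCE_CODE}{accession}"
        # Record the originating facility so migrated data stays traceable
        ds.InstitutionName = SOURCE_CODE
        ds.save_as(path)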

The Big Bang theory of modalities

While the migration process provides an excellent opportunity to normalize data on-the-fly, it is often necessary to enforce data normalization policies at the modalities themselves. At Sentara, normalization at the modality level is a well-defined process. With each new modality introduction, update, or site acquisition, Sentara ensures that every affected modality adheres to a consistent set of pre-defined policies governing AE title and station naming conventions. When new modalities are installed or vendor updates are applied, Sentara requires the vendors to review and sign off on the required data policies, and to confirm that they are properly applied and thoroughly tested once the installation is complete.

For larger-scale activities, like a new site acquisition and migration, the PACS team prepares a comprehensive list of all modalities and their new AE titles, station names, and IP addresses, and orchestrates a big-bang update process. While this is no small undertaking, through experience Sentara has refined it to run like a well-oiled machine. Once the update is complete, Sentara has improved the consistency and supportability of its now-larger infrastructure, and once again ensured that data arrives in a predictable and consistent manner.
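As a rough illustration of how such an inventory might be verified before the cut-over, the sketch below checks each planned AE title and station name against a naming convention. The patterns and CSV columns are assumptions for illustration, not Sentara’s actual policy.

    # Illustrative sketch only: checking a modality inventory against AE title and
    # station name conventions ahead of a big-bang cut-over. The naming patterns
    # and CSV columns are hypothetical.
    import csv
    import re

    AE_TITLE_PATTERN = re.compile(r"^[A-Z]{3}_(CT|MR|US|CR|DX)\d{2}$")  # e.g. NOR_CT01
    STATION_PATTERN = re.compile(r"^[A-Z]{3}-[A-Z0-9-]{1,12}$")         # e.g. NOR-CT-RM1

    def check_inventory(path: str) -> list[str]:
        """Return the modalities whose planned names do not conform to policy."""
        problems = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # columns: ae_title, station_name, ip_address
                if not AE_TITLE_PATTERN.match(row["ae_title"]):
                    problems.append(f"{row['ip_address']}: non-conforming AE title {row['ae_title']}")
                if not STATION_PATTERN.match(row["station_name"]):
                    problems.append(f"{row['ip_address']}: non-conforming station name {row['station_name']}")
        return problems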

A shining example of normalcy

This case provides excellent examples of how data normalization can address the integration challenges faced by expanding health systems. Not only does it mitigate risk by avoiding data loss and collisions during the migration process, it also measurably improves data quality and reduces support costs in the future by improving the consistency and predictability of systems and data across the entire network.

Our next blog post will take a more clinical focus, looking at how data normalization can be leveraged to uncover deep insights across population-wide data sources. Sound interesting? Subscribe to our blog to be notified when the next post becomes available!


Filed Under: Data Management Tagged With: data migration, data normalization, Enterprise Imaging, health data, Health IT, healthcare data, healthcare IT, HealthIT

January 28, 2019 By Laurie Lafleur

Normalization has become an essential activity for any data-driven organization. While it does require an investment in time and resources to define and apply the necessary policies, structures, and values, you will find that it is well worth the effort. Not only will you see measurable improvements in the quality and efficiency of clinical workflow, stakeholder satisfaction, and your bottom line – you will also unlock the untapped potential of what could prove to be one of your organization’s biggest assets – your data! That being said, let’s take a look at the various methods that can be used to make sure your data falls in line and conforms to your new norms.

What does normal look like anyway?

The first step is identifying and defining the attributes to be normalized. This begins with a close look at your organization’s key challenges and goals. Having trouble with data integrity, fragmentation, and collisions? Take a look at how unique IDs are assigned, reconciled, and managed at the facility, system, device, and patient levels. Hearing lots of complaints from your clinical team about broken and inefficient workflow? Consider looking at the variations in procedure, exam, and diagnostic naming conventions across your enterprise. Once the troublesome attributes have been defined, key stakeholders should be engaged to define what the ‘gold standard’ values should be, and to map these to the ‘dirty’ values that are plaguing your system.
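One practical way to scope this work is to profile the data you already have. The sketch below (with a hypothetical CSV export and column name) simply counts how many distinct spellings a single attribute contains, which makes both the candidate gold-standard values and the size of the cleanup effort obvious.

    # Illustrative sketch only: profiling an export of existing study metadata to
    # see how much variation one attribute actually contains before defining the
    # gold-standard values. The CSV layout and column name are hypothetical.
    import csv
    from collections import Counter

    def profile_values(path: str, column: str = "series_description") -> Counter:
        """Count every distinct value of a single attribute across an export."""
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row[column].strip().upper()] += 1
        return counts

    # The most common spellings are candidates for the gold standard; the long
    # tail is what the dirty-to-clean mapping has to absorb.
    # for value, n in profile_values("study_export.csv").most_common(20):
    #     print(f"{n:6d}  {value}")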

Keeping it clean on-the-fly

Once the normalized values have been defined, transient systems, such as DICOM and HL7 routers, or destination systems, such as EHR/EMRs, Picture Archiving and Communication Systems (PACS), or Vendor Neutral Archives (VNAs), can often be configured to inspect data as it arrives, dynamically identify and adjust any inconsistencies, and ensure that what is stored and shared adheres to normalization policies. This is accomplished through a normalization or ‘tag morphing’ rules engine that inspects incoming data on-the-fly, identifies deviations from normalization policies using pattern matching algorithms, and applies the appropriate transformations based on the pre-defined set of consistent values (a simple sketch of such an engine appears below). This ensures incoming data is clean, consistent, and reliable – regardless of its original source or format.

As well, it enables rapid integration of outside facilities and systems resulting from mergers or acquisitions, as new systems and modalities can be brought online quickly without requiring updates to hanging protocols, routing and retention policies, reporting systems, etc.

Finally, it mitigates the impact of any unforeseen changes that may occur due to vendor updates at the modality, which can sometimes alter attribute values such as series descriptions. This is most common among complex multi-slice acquisition devices, and ultimately results in broken hanging protocols and frustrated radiologists.
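The sketch below shows, in miniature, what such a rules engine does: each rule pairs a pattern that matches known dirty values with a gold-standard replacement, and incoming attribute values that match are rewritten before storage. The attributes, patterns, and canonical values are purely illustrative.

    # Illustrative sketch only: a minimal 'tag morphing' rule set. Each rule pairs
    # a pattern matching known dirty values with a gold-standard replacement; the
    # attributes, patterns, and canonical values shown are hypothetical.
    import re

    RULES = [
        # (attribute, pattern matching dirty values, gold-standard value)
        ("StudyDescription", re.compile(r"^CT\s*CHEST", re.IGNORECASE), "CT CHEST W/O CONTRAST"),
        ("SeriesDescription", re.compile(r"AX(IAL)?\s*T1", re.IGNORECASE), "T1 AXIAL"),
    ]

    def morph(study: dict) -> dict:
        """Rewrite non-conforming attribute values on an incoming study."""
        cleaned = dict(study)
        for attribute, pattern, canonical in RULES:
            value = cleaned.get(attribute, "")
            if value and pattern.search(value):
                cleaned[attribute] = canonical
        return cleaned

    print(morph({"StudyDescription": "ct chest wo", "SeriesDescription": "ax T1 post"}))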

Garbage in, garbage out

In some cases, it may be necessary to enforce data normalization policies at the modalities themselves. This is especially important if receiving systems do not provide robust tag morphing capabilities, leaving you without the ability to enforce normalization policies on-the-fly. As well, if your technologists are frequently required to manually enter or adjust data values at the modality, the added variability and potential for error in the resulting data sets may not always be caught by even a discerning rules engine. In either case, why not take the opportunity to ensure your modalities are sending clean data from the get-go? As the old adage says: garbage in, garbage out.

When you’re going where the grass is greener

If you’re considering retiring and replacing ageing systems, data migrations present an excellent opportunity to clean and normalize existing data as it moves between systems, providing immediate benefits like better access to and filtering of relevant priors, improved reading efficiency through consistent and reliable hanging protocols, and the ability to incorporate historic data into analytics, AI, and deep learning applications.

As well, it positions you to minimize the effort involved in any future system replacement or migration by simplifying the configuration of any features that rely on specific attribute values or structures to function effectively. For instance, hanging protocols are not typically transferrable between vendors’ systems and therefore need to be re-created whenever a PACS replacement or viewer change occurs. The consistency of normalized data enables rapid configuration of protocols within a new system, because it eliminates the complexity of configuring multiple protocols for each distinct layout, or of building viewer-provided lexicons to ‘map’ the various combinations and permutations of inconsistent protocol, study, or series information. The same holds true for other attribute-driven features, including but not limited to reading worklists, routing or forwarding rules, information lifecycle management policies, and analytics and deep learning.

That sounds like a lot of work…

More often than not, the perceived effort of defining such practices and data structures seems overwhelming to already busy IT departments and project teams. This often results in a choice to forego data normalization activities in order to save time and effort, or simply due to a lack of financial and human resources.

While it may be true that data normalization is no small task, the benefits far outweigh the initial investment, and many organizations are now realizing the strategic value of data normalization initiatives. The heaviest lift, by far, is gathering and analyzing existing data and distilling it into normalized data structures and values. This is a one-time effort that yields immediate and recurring dividends by creating an actionable data repository that supports ongoing clinical and business improvement initiatives.

By now you might be thinking, “this sounds good in theory, but it’s pretty anecdotal. Where’s the real-life evidence that data normalization is worth the effort?”

Our next posts will include real-world examples of how some luminary healthcare organizations have leveraged data normalization to achieve a variety of measurable benefits. Subscribe to our blog to be notified when the next post becomes available!


Filed Under: Analytics, Data Management, Healthcare IT, Workflow Tagged With: data migration, data normalization, Enterprise Imaging, health data, Health IT, healthcare data, healthcare IT
