Data validations in NZDT — when runtime is more than a question of convenience

Henrik Saterdag
Jan 4, 2021 · 5 min read
Icons made by Freepik from Flaticon (https://www.flaticon.com/authors/freepik)

NZDT data migrations and conversions

For starters: NZDT is short for "Near Zero Downtime". The term already explains the purpose: perform a data migration or conversion with as little business downtime as possible.

Business downtime hurts everyone. If you're a customer who can't get the call center agent to update your address data. If you're a truck driver standing in front of a warehouse where nobody is able to post a goods receipt. Or if you are responsible for your company's IT and your boss just informed you that production worldwide stopped because of system unavailability, and that the costs are in the millions. Per hour. Which will be withheld from your salary.

Here’s where NZDT comes into play.

You can find a pretty good description of what SAP NZDT does and how it does it here: https://blogs.sap.com/2020/05/12/nzdt-downtime-approach-for-sap-s-4hana-coversion-customer-case/

In very short: SAP NZDT allows you to start the migration/conversion of data already during regular business up-time. As soon as it's switched on, it tracks any changes made by regular business usage at DB level (which is, by the way, the same implementation used by the SLT replication server). In the end, only those additional changes have to be handled during the business downtime, because everything else has already been migrated/converted during business up-time. This radically reduces the duration of the business downtime.
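If it helps to picture the recording part: here is a minimal, purely illustrative sketch of trigger-based change logging, written in Python with SQLite. It is not SAP's actual NZDT/SLT implementation, and all table and column names are made up; it only shows the general principle that triggers write the keys of changed records into a logging table, so the delta can be identified later.

```python
# Illustrative sketch of trigger-based change recording (the general idea
# behind delta logging -- NOT SAP's actual NZDT/SLT implementation).
# Table and column names are made up for the example.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE business_data (
        doc_id TEXT PRIMARY KEY,
        amount REAL
    );
    -- logging table: records which keys changed during up-time
    CREATE TABLE change_log (
        doc_id     TEXT,
        operation  TEXT,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP
    );
    -- triggers capture every insert/update/delete on the business table
    CREATE TRIGGER trg_ins AFTER INSERT ON business_data
        BEGIN INSERT INTO change_log (doc_id, operation) VALUES (NEW.doc_id, 'I'); END;
    CREATE TRIGGER trg_upd AFTER UPDATE ON business_data
        BEGIN INSERT INTO change_log (doc_id, operation) VALUES (NEW.doc_id, 'U'); END;
    CREATE TRIGGER trg_del AFTER DELETE ON business_data
        BEGIN INSERT INTO change_log (doc_id, operation) VALUES (OLD.doc_id, 'D'); END;
""")

# Regular business usage keeps changing data while the initial load runs
con.execute("INSERT INTO business_data VALUES ('4711', 100.0)")
con.execute("UPDATE business_data SET amount = 120.0 WHERE doc_id = '4711'")

# Later, only the logged keys need to be migrated/converted again
delta_keys = {row[0] for row in con.execute("SELECT DISTINCT doc_id FROM change_log")}
print(delta_keys)  # -> {'4711'}
```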

SAP NZDT is a hell of a machine. Kudos to those SAP guys who had the idea and made it work. Setting up and using NZDT requires quite some effort and SAP will not let you do it on your own (they will always send you their experts to do the job — and these guys are the opposite of cheap). But if you have a multi-terabyte system in front of you and every hour of business downtime causes terrifying costs for the business, it might still be the best option.

Does using NZDT for your data migration / data conversion immediately solve the problem?

Spoiler alert, the answer is: maybe not.

It should be clear: if you throw tons of monnnnay at SAP to speed up your migration or conversion, it is not the smartest move to do data validation with an old-fashioned manual process. Not only the data migration/conversion contributes to the business downtime; data validation must also be done in the business downtime, and every minute counts there as well.

Using NZDT for the data migration / conversion usually addresses the biggest single contributor to the business downtime. But there are also other activities that significantly contribute to the business downtime.

So, you make the right decision and do it with Capgemini and their automated and well-structured methodology for data validation. Including decent software that will let the system's CPUs glow instead of your business people's brains.

Because you remember correctly:

  • Whatever you can automate you should not do manually if time is precious. Automated tasks are faster.
  • Whatever you automate you can repeat over and over without risking human errors. Automated tasks are more reliable.

That's a no-brainer and I hope we can all agree on it. (Well, maybe you don't choose Capgemini for whatever reason, but anyway: pleeeease don't use a super sophisticated and super expensive NZDT methodology to reduce the conversion downtime to a minimum while wasting your valuable time on manual data validation.)

But what if this still isn't going to make the timelines work? What if the time needed for data validation is still just too long? Usually, most tasks in data validation have runtimes in the range of minutes, so you don't need to pay any special attention to those. But there are always a few tasks that take so much longer. Because they dig through huge and complicated data models, because they have to access cluster data in a way that's not supported by any index, because… reasons.

Don’t do a “design-to-runtime”

Well, there's good news. You don't have to strip down your data validation scope and enter a "design-to-runtime" discussion where you are granted x amount of time and have to fit in whatever is possible, leaving the rest out.

The changes tracked by NZDT can also be used for data validation. This allows you to run the following sequence:

  1. Initial load: data migration / conversion during regular up-time
  2. Data validation of the initial load during regular up-time
  3. Business downtime starts
  4. Delta load: downtime data migration / conversion of the delta which has been recorded during up-time.
  5. Delta data validation: validate only what has changed during up-time. The same delta that was recorded for the data migration / conversion can be used to identify which data changed during up-time and is therefore relevant for the delta validation (see the sketch below).
Only steps #4 and #5 are now in the business downtime.

It should not come as a surprise that step #2 will take a lot of time if there's a lot of data (the same is true for step #1). But here's the important part: step #5 will be so much faster.
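To make step #5 a bit more concrete, here is a minimal sketch of a delta validation, under the assumption that we already have the set of keys recorded during up-time and some way to read the corresponding records from source and target. The names (validate_delta, read_source, read_target, delta_keys) are placeholders for the example, not a real API.

```python
# Sketch of a delta validation step: compare source and target only for the
# records whose keys were recorded as changed during up-time.
# All names are placeholders for illustration, not a real NZDT interface.

def validate_delta(delta_keys, read_source, read_target):
    """Return the mismatching records among those changed during up-time."""
    mismatches = []
    for key in delta_keys:
        source_record = read_source(key)
        target_record = read_target(key)
        if source_record != target_record:
            mismatches.append((key, source_record, target_record))
    return mismatches

# Example with in-memory dictionaries standing in for the two systems:
source = {"4711": {"amount": 120.0}, "4712": {"amount": 50.0}}
target = {"4711": {"amount": 120.0}, "4712": {"amount": 55.0}}
delta_keys = {"4711", "4712"}  # keys recorded during up-time
print(validate_delta(delta_keys, source.get, target.get))
# -> [('4712', {'amount': 50.0}, {'amount': 55.0})]
```

The point is simply that the runtime of the downtime validation now scales with the size of the delta, not with the size of the full data set.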

Truth be told, using NZDT data for data validation doesn't come for free. Setting it up requires additional effort; whether it's high or low depends on the data validation scope. But if you are using NZDT already, it means you feel generous anyway, so it should be an easy decision. If it helps to convince you: the additional cost per saved minute is lower than for the implementation and usage of NZDT for the data migration / conversion itself.

What about the other stuff?

The diagrams above show that there are also other tasks that contribute to the business downtime. Usually these are infrastructure-related topics, system ramp-down and ramp-up, and other things I have witnessed in projects but never fully understood. What improvement potential exists there, which technologies could be used, what's the latest and greatest: I don't know. Maybe you should ask an infrastructure expert. I'm sure there are some at Capgemini; if you need me to find one, I'll do my best ;-)



Written by Henrik Saterdag

data guy, tech guy, married to ABAP (having an affair with HANA); 2000–2019: working at SAP, since 2020 working at Capgemini
