Incremental Database Synchronisation


The advent of heterogeneous information systems has made it necessary to synchronise multiple applications and business processes with databases. As such, the most challenging task for businesses is to propagate modified data in near real time so that both the source and the target databases can be used at all times.

A traditional approach to database synchronisation is based on a comprehensive scan of both the source and the target databases in order to find the data that has been modified. Only after this scan are the necessary actions applied to that data (a sketch follows the list below):

  • Insert all rows from the source that are missing in the destination
  • Delete from the target database all rows that are missing in the source
  • Update all rows that exist in both databases but differ
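The following Python sketch illustrates this full-scan approach. It assumes two DB-API connections (conn_source, conn_target), a hypothetical table named customers whose primary key is its first column, and a driver that uses the %s placeholder style (such as mysql-connector); none of these names come from any particular tool.

```python
def full_scan_sync(conn_source, conn_target, table="customers", pk="id"):
    """Compare every row of `table` in both databases and reconcile the target."""
    src_cur = conn_source.cursor()
    dst_cur = conn_target.cursor()

    # Read every row from both databases, keyed by the primary key
    # (assumed here to be the first column of each row).
    src_cur.execute(f"SELECT * FROM {table}")
    src_rows = {row[0]: row for row in src_cur.fetchall()}
    dst_cur.execute(f"SELECT * FROM {table}")
    dst_rows = {row[0]: row for row in dst_cur.fetchall()}

    # 1. Insert rows present in the source but missing in the target.
    for key in src_rows.keys() - dst_rows.keys():
        placeholders = ", ".join(["%s"] * len(src_rows[key]))
        dst_cur.execute(f"INSERT INTO {table} VALUES ({placeholders})", src_rows[key])

    # 2. Delete rows present in the target but missing in the source.
    for key in dst_rows.keys() - src_rows.keys():
        dst_cur.execute(f"DELETE FROM {table} WHERE {pk} = %s", (key,))

    # 3. Update rows that exist on both sides but differ (kept simple here
    #    by deleting and re-inserting the changed row).
    for key in src_rows.keys() & dst_rows.keys():
        if src_rows[key] != dst_rows[key]:
            dst_cur.execute(f"DELETE FROM {table} WHERE {pk} = %s", (key,))
            placeholders = ", ".join(["%s"] * len(src_rows[key]))
            dst_cur.execute(f"INSERT INTO {table} VALUES ({placeholders})", src_rows[key])

    conn_target.commit()
```

Note that every run has to read and compare every row on both sides, regardless of how few rows actually changed.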

The big problem with this synchronisation method is that it is extremely slow. For example, take two databases, SQL Server and MySQL, each holding over 1 million rows, and assume that traditional synchronisation processes about 100 rows per second. This speed cannot be improved much by deploying high-speed database access components or similar techniques, because the method is incapable of bulk manipulation of the data. As a result, every pass of traditional synchronisation over such databases needs more than 2.5 hours, which makes it impossible to run on an hourly basis.
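The 2.5-hour figure follows directly from those assumed numbers; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope estimate using the figures from the text above.
rows = 1_000_000          # rows in each database
rows_per_second = 100     # assumed throughput of traditional synchronisation

seconds = rows / rows_per_second   # 10,000 seconds
hours = seconds / 3600             # about 2.8 hours, i.e. over 2.5 hours
print(f"One full pass takes roughly {hours:.1f} hours")
```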

There is a different approach that solves this problem: iterate only over the rows that have been modified since the previous run of the synchronisation. It can be implemented with the following algorithm:

  • On the first run, the program performs a traditional database synchronisation as described above
  • In addition, the program creates insert, update and delete triggers on every table being synchronised; each trigger writes information about the modified row into a special service table (a sketch follows the list below)
  • From the second run onwards, the program retrieves the information about modified rows directly from the service table and processes only those rows
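A minimal sketch of the trigger setup, assuming MySQL-style trigger syntax, a hypothetical source table customers with an integer primary key id, and a hypothetical service table named sync_log; an actual synchronisation tool may name and structure these objects differently.

```python
# DDL for the service table that accumulates information about modified rows.
SERVICE_TABLE_DDL = """
CREATE TABLE IF NOT EXISTS sync_log (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    table_name VARCHAR(64) NOT NULL,
    row_pk     INT NOT NULL,
    op         CHAR(1) NOT NULL,   -- 'I' = insert, 'U' = update, 'D' = delete
    changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
)
"""

# One trigger per operation; each writes the affected primary key
# and the operation type into the service table.
TRIGGER_DDL = [
    """CREATE TRIGGER customers_ai AFTER INSERT ON customers FOR EACH ROW
       INSERT INTO sync_log (table_name, row_pk, op)
       VALUES ('customers', NEW.id, 'I')""",
    """CREATE TRIGGER customers_au AFTER UPDATE ON customers FOR EACH ROW
       INSERT INTO sync_log (table_name, row_pk, op)
       VALUES ('customers', NEW.id, 'U')""",
    """CREATE TRIGGER customers_ad AFTER DELETE ON customers FOR EACH ROW
       INSERT INTO sync_log (table_name, row_pk, op)
       VALUES ('customers', OLD.id, 'D')""",
]

def install_change_tracking(conn_source):
    """Create the service table and the change-tracking triggers on the source."""
    cur = conn_source.cursor()
    cur.execute(SERVICE_TABLE_DDL)
    for ddl in TRIGGER_DDL:
        cur.execute(ddl)
    conn_source.commit()
```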

This method is known as incremental (trigger-based) database synchronisation. Now, let us see how the incremental process reduces synchronisation time. Even with record-by-record processing, the incremental method can run in near real time if it is scheduled properly. It also makes bulk row processing possible, because all of the modified data accumulates in one place and is grouped by the type of operation: insert, delete and update. Bulk row processing yields more than 10 times higher throughput than record-by-record synchronisation; a sketch of such a bulk incremental pass follows.
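The sketch below continues the assumptions made above (DB-API connections, the hypothetical sync_log service table, the customers table and the %s placeholder style). DB-API's executemany lets the driver send each group of operations in batches rather than one statement per row.

```python
def incremental_sync(conn_source, conn_target, table="customers", pk="id"):
    """Apply only the changes recorded in sync_log since the previous run."""
    src_cur = conn_source.cursor()
    dst_cur = conn_target.cursor()

    # Collect the logged changes for this table, grouped by operation type.
    src_cur.execute(
        "SELECT row_pk, op FROM sync_log WHERE table_name = %s ORDER BY id",
        (table,),
    )
    changed = {"I": set(), "U": set(), "D": set()}
    for row_pk, op in src_cur.fetchall():
        changed[op].add(row_pk)

    # Deletes first: updated rows are also removed here and re-inserted below.
    stale = changed["D"] | changed["U"]
    if stale:
        dst_cur.executemany(
            f"DELETE FROM {table} WHERE {pk} = %s", [(k,) for k in stale]
        )

    # Re-read only the inserted/updated rows from the source and insert them
    # in bulk; rows deleted again since being logged simply come back empty.
    rows = []
    for k in changed["I"] | changed["U"]:
        src_cur.execute(f"SELECT * FROM {table} WHERE {pk} = %s", (k,))
        row = src_cur.fetchone()
        if row is not None:
            rows.append(row)
    if rows:
        placeholders = ", ".join(["%s"] * len(rows[0]))
        dst_cur.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)

    # Clear the processed log entries so the next run starts from a clean slate.
    src_cur.execute("DELETE FROM sync_log WHERE table_name = %s", (table,))
    conn_source.commit()
    conn_target.commit()
```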

The incremental (trigger-based) database synchronisation method has only the following requirements:

  • Access to the synchronised databases with sufficient privileges to create triggers and tables
  • Every synchronised table must have a properly defined primary key

In order to evaluate the overall advantages of incremental synchronisation, you may use the Microsoft SQL – MySQL Sync tool.
