Heterogeneous database migrations (different source and target DBMS) are usually challenging because of the inherent complexity of data type conversions and the limited availability of data migration tools for both offline and online scenarios. The challenge grows further when the database to be migrated is large for the initial load and/or frequent incremental data copy (Change Data Capture) is required from source to target. Online database migration from SAP ASE (Sybase ASE) to Azure SQL is particularly difficult because ASE does not offer a CDC table feature that can assist an online migration, so there is normally a need to rely on third-party solutions.
In this blog we will look at the SAP ASE Data Transfer Utility, which offers the T-SQL command ‘transfer table … to’ for initial and incremental data loads. In our tests, the throughput of transfer table was close to three to four times that of the SAP ASE 'bcp out' command. The transfer table feature can therefore be chosen as an alternative to third-party solutions.
Transfer table provides a means to extract data from tables marked eligible for incremental transfer into a flat file, selecting only the rows that have changed since the previous transfer was executed on the same table. The file can be created on the ASE host or on a file share/NFS filesystem.
The extracted rows can be written to the output file in a format suited to the target system. The 'for' clause of the command can specify SAP IQ, SAP ASE, bcp file, or character-coded (CSV) output. By default the selected rows are extracted without encryption; if the source table has encrypted columns, those columns are decrypted before being transmitted.
Once a table is enabled for incremental transfer, the utility keeps track of the changed data instead of scanning every row in the table; unlike replication, it does not monitor the transaction log. The first execution extracts the complete table to the output file. On subsequent executions, an eligible table only sends rows that were inserted or changed, and committed, before the transfer was initiated. Tables are marked as eligible using the 'with transfer table on' option of the create table command or the 'set transfer table on' option of the alter table command. They remain eligible until 'set transfer table off' is used in an alter table command to disable it.
Incremental transfer is tracked by a sequence marker that is added to every row when the table is enabled for incremental transfer. The marker, together with the transfer history stored in the internal table 'spt_TableTransfer', tracks the changes in the table data. Because the marker is stored in each row, eligible tables require more space. Transfer table fetches the table data without acquiring the usual locks and does not impact ongoing transactions.
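As a sketch, the transfer history for a table can be inspected directly in spt_TableTransfer, using the same sysobjects join that the sample procedure later in this post uses; the end_code column name is an assumption here, so check the columns available in your ASE version:

```sql
-- Hypothetical check: list recorded transfers for table 'train'.
-- Only id and sequence_id are used by the sample procedure below;
-- other column names may differ by ASE version.
select b.sequence_id, b.end_code
from sysobjects a, spt_TableTransfer b
where a.name = 'train'
  and a.id = b.id
order by b.sequence_id
```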
The Data Transfer Utility feature comes bundled with the ASE IMDB (In-Memory Databases) license. Please refer to the Sybase ASE documentation for further details on licensing.
The transfer table command can also be used on tables that are not enabled for incremental transfer, for example in a one-time migration. In that case it does not capture incremental data:
Each execution extracts the entire table; subsequent executions do not reference the previous iteration.
Updates made to the table during the transfer are not captured.
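A one-time extract of a table that is not enabled for incremental transfer looks the same as an incremental one; the table name and destination path below are placeholders:

```sql
-- One-time extract: the table is not marked 'transfer table on',
-- so the whole table is written out on every execution.
transfer table trip to '/sybase/data/trip.csv'
    for csv with column_separator = '|'
```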
Monitoring and Tracking Transfers
transfer table [[db.]owner.]table [to | from] destination_file
[ for { ase | bcp | iq | csv } ]
[ with {column_separator=string}, {column_order=option},
{encryption=option}, {row_separator=string},
{resend=id}, {progress=sss}, {tracking_id=nnn},
{sync = true | false}, {fixed_length = true | false},
{null_byte = true | false}]
create table <table_name> … with transfer table on
or
alter table <table_name> set transfer table on
transfer table <table_name> to <destination_file> for csv with column_separator ='|'
The ‘transfer table … to’ command will not write output files to an arbitrary directory; it writes only to a defined output directory or its subdirectories. This directory path can be set for a specific database using the 'sp_file_path' stored procedure. If it is not specified, the default is $SYBASE/data.
Syntax:
sp_file_path '<database_name>', 'dtu path root', '<directory_path>'
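For example, using the database name and output path that appear in the sample procedure later in this post:

```sql
-- Point the transfer output root for database 'edrive' at D:\SybaseData\
exec sp_file_path 'edrive', 'dtu path root', 'D:\SybaseData\'
```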
By default, ‘transfer table’ permission is granted to the owner of the table and to System Administrators. Table owners can grant this permission to specific users. However, this grant does not include permission to decrypt data within the table; table owners need to grant decrypt permission separately.
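A minimal sketch of the two grants, assuming the grant syntax of recent ASE versions; the user name etl_user and table name train are placeholders, so verify the exact syntax in your ASE version's reference manual:

```sql
-- Assumed grant syntax; transfer permission and decrypt permission
-- must be granted separately.
grant transfer table on train to etl_user
grant decrypt on train to etl_user
```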
Database:
ELT Platform:
Requirements:
CREATE TABLE columns_info (
NAME VARCHAR (200) NOT NULL
)
CREATE TABLE tablekey (
TABLENAME VARCHAR (200) PRIMARY KEY,
KEYCOLUMN VARCHAR (200) NOT NULL
)
create procedure proc_transfertable
as
-- declare local variables
DECLARE @tab VARCHAR(255)
DECLARE @path VARCHAR(255)
DECLARE @fileName VARCHAR(255)
DECLARE @cmd VARCHAR(255)
DECLARE @extn VARCHAR(255)
DECLARE @spconf VARCHAR(255)
DECLARE @xp VARCHAR(255)
DECLARE @spfilepath VARCHAR(255)
DECLARE @fp VARCHAR(255)
DECLARE @tton VARCHAR(255)
DECLARE @ttexec VARCHAR(255)
DECLARE @sequence_id VARCHAR(255)
DECLARE @sequenceid INT
DECLARE @movecsv VARCHAR(255)
-- relax the xp_cmdshell security context so the procedure can run OS commands
SET @spconf = "sp_configure 'xp_cmdshell context', "
SET @xp = @spconf + "0"
exec (@xp)
-- retrieve the list of table names eligible for transfer
DECLARE cur_tablename cursor for
select name as table_name from sysobjects where type = 'U' and name not in ('spt_TableTransfer', 'columns_info')
-- open cursor and fetch each tablename into the cursor for the while loop
open cur_tablename
fetch next from cur_tablename into @tab
while @@fetch_status = 0
BEGIN
-- truncate the columns_info table and insert the table column names
IF OBJECT_ID('edrive..columns_info') IS NOT NULL
BEGIN
DELETE edrive..columns_info
END
insert edrive..columns_info
select syscolumns.name
from sysobjects
join syscolumns on sysobjects.id = syscolumns.id
where sysobjects.name = @tab
-- setting path for bcp out
-- executing bcp out of table columns_info if the outfile does not exist
SET @path = "D:\SybaseData\"
SET @extn = " -U sa -P abc123 -c -r ""|" + """"
SET @fileName = @path + @tab + "_" + "columns_info.csv"
SET @cmd = "master..xp_cmdshell 'IF NOT EXIST " + @fileName + " (bcp edrive..columns_info out " + @fileName + @extn + ") ELSE (echo file exists)'"
print @tab
print @cmd
exec (@cmd)
-- setting filepath for the transfer table destination
SET @spfilepath = "sp_file_path 'edrive', 'dtu path root', "
SET @fp = @spfilepath + "'" + @path + "'"
print @fp
exec (@fp)
-- enable transfer table on for the table
SET @tton = "alter table "+ @tab + " set transfer table on"
print @tton
exec (@tton)
-- fetch sequence id for the table
-- prefix the appropriate sequence id into the file name
select @sequenceid = isnull (max (sequence_id),0) + 1 from sysobjects a, spt_TableTransfer b where a.name = @tab and a.id = b.id
SET @sequence_id = CONVERT(VARCHAR(255), @sequenceid)
print "sequence_id = %1! ", @sequence_id
-- execute the transfer table command
SET @ttexec = "transfer table " + @tab + " to '" + @path + @tab + "___" + @sequence_id + ".csv'" + " for csv with column_separator ='|'"
print @ttexec
exec (@ttexec)
fetch next from cur_tablename into @tab
END
close cur_tablename
deallocate cursor cur_tablename
-- restore the default xp_cmdshell security context
SET @spconf = "sp_configure 'xp_cmdshell context', "
SET @xp = @spconf + "1"
exec (@xp)
return
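The procedure can then be run on demand or on a schedule (for example from an ASE job or cron) to pick up the next increment for every eligible user table:

```sql
-- Each execution writes one <table>___<sequence_id>.csv per eligible table
-- under the 'dtu path root' configured inside the procedure.
exec proc_transfertable
```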
(Please note that the timings mentioned for the tests may vary based on the machine/environment configuration and workload.)
Data Set:
insert into tablekey ( tablename, keycolumn )
values ('train','TRIP_ID')
insert into tablekey ( tablename, keycolumn )
values ('trip','ID')
The test results demonstrated the following:
(Note: These results are from our internal tests; the times may vary based on the machine/environment configuration and workload.)
If you have feedback or suggestions for improving this data migration asset, please contact the Azure Databases SQL Customer Success Engineering Team. Thanks for your support!