Tip: If you don’t want a particular type of operation to be applied to the consolidated database, just leave that script out. For example, if you want inserts and updates to be applied but not deletes, leave out the upload_delete script. The deletes will still be uploaded, but they will be ignored by the MobiLink server.
7.6.4.4 Handling Upload Conflicts
In general, an upload conflict is anything that causes a resolve_conflict event to
occur for a single uploaded row. This definition is vague for a reason: An
upload conflict isn’t just a problem to be dealt with; it is a powerful programming tool. Upload conflicts come in two flavors: natural conflicts and forced
conflicts, and a forced conflict can be anything you want it to be. This section
will discuss natural conflicts first, then forced conflicts.
A natural conflict is caused by the same row being updated on different
remote databases and then uploaded to the consolidated database. It can also
occur if the same row is updated on the consolidated database and on a remote
database, and then that row is uploaded from the remote database to the
consolidated.
Some applications don’t have conflicts; the databases are set up so it’s
impossible for the same row to be updated on more than one database. Other
applications don’t care; the default action of “last uploaded update wins” is
okay. But many applications have special business-related rules that must be
followed when a conflict occurs. For these applications, the conflicts must first
be detected and then dealt with, and each of those actions requires more
MobiLink scripts to be written.
Every uploaded update consists of two copies of the row: the old column
values as they existed on the remote database before the row was updated, and
the new column values that the upload_update script would normally apply to
the consolidated database. A natural conflict is detected by comparing the old
values being uploaded, not with the new values, but with the values as they currently exist on the consolidated database. If they are the same, there is no
conflict, and the upload_update script proceeds to apply the new values.
If the uploaded old remote values are different from the current consolidated values, a natural conflict exists, and it can be detected in one of two ways.
First, if you write an upload_fetch script for the table with the conflicts,
MobiLink will use that script to do the conflict check on each uploaded update.
If no conflict is detected, the row will be passed over to the upload_update
script for processing. When a conflict is detected the upload_update event is not
fired; what happens instead is discussed a bit later, but right now this discussion
is concentrating on how conflicts are detected.
The upload_fetch script should be a SELECT that specifies all the columns
in the select list and a WHERE clause that lists all the primary key columns. As
with other MobiLink scripts, it names tables and columns that exist on the consolidated database, but the column order must match the CREATE TABLE column order on the remote database.
<typical_upload_fetch> ::= SELECT { <column_name> "," }
       <column_name>
  FROM <current_values_table_name>
 WHERE <primary_key_column_name> "= ?"
 { AND <primary_key_column_name> "= ?" }
<current_values_table_name> ::= the target table on the consolidated database
236 Chapter 7: Synchronizing
The following is an example of an upload_fetch script; it’s up to you to write
the SELECT to tell MobiLink how to retrieve the current column values from
the consolidated database, and it’s up to the MobiLink server to actually execute
the SELECT and then compare the values with the old column values uploaded
from the remote database.
CALL ml_add_table_script ( '1', 't2', 'upload_fetch', '
SELECT key_1,
key_2,
non_key_1,
non_key_2
FROM t2
WHERE key_1 = ?
AND key_2 = ?' );
There is an alternative to the upload_fetch script: If the upload_update script
includes all the non-key columns in the WHERE clause as well as the primary
key columns, this extended upload_update script is used by MobiLink to detect
a conflict. If a conflict is detected, the extended upload_update script will not
actually apply the update. If no conflict is detected, the extended upload_update
will proceed as it normally does.
<typical_extended_upload_update> ::= UPDATE <consolidated_table_name>
SET { <non_primary_key_column_name> "= ?," }
<non_primary_key_column_name> "= ?"
WHERE <primary_key_column_name> "= ?"
{ AND <primary_key_column_name> "= ?" }
AND <non_primary_key_column_name> "= ?"
{ AND <non_primary_key_column_name> "= ?" }
Here is an example of an extended upload_update that can detect a natural con-
flict just like the earlier upload_fetch; the primary key columns come first in the
WHERE clause, then the non-key columns:
CALL ml_add_table_script ( '1', 't2', 'upload_update', '
UPDATE t2
SET non_key_1 = ?,
non_key_2 = ?
WHERE key_1 = ?
AND key_2 = ?
AND non_key_1 = ?
AND non_key_2 = ?' );
If you write both upload_fetch and extended upload_update scripts, it doesn’t
hurt, but it’s a waste of your effort to code the longer WHERE clause in the
upload_update; it will be the upload_fetch that detects the conflicts.
Note: The same extended WHERE clause is available for the upload_delete
script as well, where predicates involving all the non-key columns can be
appended.
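For example, a sketch of an extended upload_delete script for the same t2 table used in the earlier examples might look like the following; this is an illustration assuming the same column layout, not a script from elsewhere in the chapter. The extra non-key predicates turn a conflicting delete into a "no rows affected" result instead of removing current data:

CALL ml_add_table_script ( '1', 't2', 'upload_delete', '
DELETE t2
 WHERE key_1 = ?
   AND key_2 = ?
   AND non_key_1 = ?
   AND non_key_2 = ?' );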
Detecting a conflict is just the first part of the process. Actually doing something about it requires three more scripts: upload_new_row_insert, upload_old_row_insert, and resolve_conflict. The first two scripts allow you to store the old
and new uploaded values, usually in temporary tables. The resolve_conflict
script is where you put the code that deals with the conflict.
<typical_upload_old_row_insert> ::= INSERT <old_values_table_name>
"(" { <column_name> "," }
<column_name> ")"
VALUES "(" { "?," } "?" ")"
<old_values_table_name> ::= a temporary table to hold the uploaded before-images
<typical_upload_new_row_insert> ::= INSERT <new_values_table_name>
"(" { <column_name> "," }
<column_name> ")"
VALUES "(" { "?," } "?" ")"
<new_values_table_name> ::= a temporary table to hold the uploaded after-images
The upload_old_row_insert event is fired once for each conflict, and it is passed
the old value of each column in the uploaded update row. Similarly, the
upload_new_row_insert is passed the new column values. The resolve_conflict
script is then fired, and if you have saved the old and new values, you now have
access to all three versions of the row: old, new, and current.
The following example implements a business rule that requires multiple
conflicting updates to be merged by accumulating both changes and applying
the result to the consolidated database. The upload_old_row_insert script inserts
a row into the t2_old temporary table, the upload_new_row_insert script inserts
a row into t2_new, and the resolve_conflict script joins all three tables to calcu-
late the final values of the non_key_1 and non_key_2 columns. A stored
procedure is used to keep the script short.
CALL ml_add_table_script ( '1', 't2', 'upload_old_row_insert', '
INSERT t2_old
( key_1,
key_2,
non_key_1,
non_key_2 )
VALUES ( ?, ?, ?, ? )' );
CALL ml_add_table_script ( '1', 't2', 'upload_new_row_insert', '
INSERT t2_new
( key_1,
key_2,
non_key_1,
non_key_2 )
VALUES ( ?, ?, ?, ? )' );
CALL ml_add_table_script ( '1', 't2', 'resolve_conflict',
'CALL ml_resolve_conflict_t2 ( ?, ? )' );
CREATE PROCEDURE ml_resolve_conflict_t2 (
IN @ml_username VARCHAR ( 128 ),
IN @table_name VARCHAR ( 128 ) )
BEGIN
UPDATE t2
SET t2.non_key_1 = t2.non_key_1 - t2_old.non_key_1 + t2_new.non_key_1,
t2.non_key_2 = t2.non_key_2 - t2_old.non_key_2 + t2_new.non_key_2
FROM t2
JOIN t2_old
ON t2.key_1 = t2_old.key_1
AND t2.key_2 = t2_old.key_2
JOIN t2_new
ON t2.key_1 = t2_new.key_1
AND t2.key_2 = t2_new.key_2;
DELETE t2_new;
DELETE t2_old;
END;
Tip: Don’t forget to delete the rows from the temporary tables when they are no longer needed so they won’t get processed over and over again as later conflicts are handled.
Tip: You can put the conflict resolution logic for several different tables into a single procedure if you want. The table name is passed to the resolve_conflict event as one of the parameters so your code can decide which action to take.
Note: If an ordinary upload_update script exists but there is no upload_fetch
script, a conflict will not be detected and the upload_update will be executed.
This is the “last uploaded update wins” scenario. If an upload_fetch script does
exist together with an ordinary upload_update script but there are no conflict resolution scripts, an uploaded update that is in conflict will be ignored. This is the
“first update wins” scenario, where the update could have come from a prior
upload or it could have been made directly to the consolidated database.
The entire process of natural conflict detection and resolution can be merged
into a single stored procedure called from an extended upload_update script.
The following example shows an extended upload_update script and a proce-
dure ml_upload_update_t2 that replace all the scripts in the previous example;
i.e., the following code replaces the previous upload_update, upload_old_row_insert, upload_new_row_insert, and resolve_conflict scripts and the ml_resolve_
conflict_t2 procedure. One “?” parameter value is passed from the extended
upload_update script to the procedure for each new non-key value, each primary
key column, and each old non-key value:
CALL ml_add_table_script ( '1', 't2', 'upload_update', '
CALL ml_upload_update_t2 ( ?, ?, ?, ?, ?, ? )' );
CREATE PROCEDURE ml_upload_update_t2 (
IN @non_key_1 INTEGER,
IN @non_key_2 INTEGER,
IN @key_1 UNSIGNED BIGINT,
IN @key_2 INTEGER,
IN @old_non_key_1 INTEGER,
IN @old_non_key_2 INTEGER )
BEGIN
UPDATE t2
SET t2.non_key_1 = t2.non_key_1 - @old_non_key_1 + @non_key_1,
t2.non_key_2 = t2.non_key_2 - @old_non_key_2 + @non_key_2
WHERE t2.key_1 = @key_1
AND t2.key_2 = @key_2;
END;
A forced conflict occurs when three conditions are satisfied: First, an uploaded
insert, delete, or update is received. Second, there are no upload_insert,
upload_delete, upload_update, or upload_fetch scripts for that table. Finally,
upload_old_row_insert and upload_new_row_insert scripts do exist; a
resolve_conflict script may also exist but it is optional.
When a forced conflict occurs for an uploaded insert, the upload_new_
row_insert event will receive the new row from the remote database. The
resolve_conflict script is then executed, but not the upload_old_row_insert
event. If your scripts insert rows into temporary tables as in the previous example, the resolve_conflict script will be able to determine it was fired by an uploaded insert because t2_new contains one row while t2_old is empty.
When a forced conflict occurs for an uploaded delete, the upload_old_
row_insert event will receive the entire deleted row from the remote database.
The resolve_conflict script is then executed, but not the upload_new_row_insert
event. When the resolve_conflict script is executed there will be one row in
t2_old but t2_new will be empty.
When a forced conflict occurs for an uploaded update, both of the upload_
old_row_insert and upload_new_row_insert events will be fired, and when the
resolve_conflict script is executed there will be one row in t2_old and one row
in t2_new.
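These three cases can be told apart inside a single resolve_conflict procedure by checking which temporary table holds a row. The following procedure is a hypothetical sketch, assuming the t2_old and t2_new temporary tables from the earlier example; the MESSAGE statements stand in for whatever business logic each case requires:

CREATE PROCEDURE ml_resolve_forced_t2 (
   IN @ml_username VARCHAR ( 128 ),
   IN @table_name  VARCHAR ( 128 ) )
BEGIN
   IF EXISTS ( SELECT * FROM t2_new )
      AND NOT EXISTS ( SELECT * FROM t2_old ) THEN
      MESSAGE STRING ( 'Forced conflict: uploaded insert' ) TO CONSOLE;
   ELSEIF EXISTS ( SELECT * FROM t2_old )
      AND NOT EXISTS ( SELECT * FROM t2_new ) THEN
      MESSAGE STRING ( 'Forced conflict: uploaded delete' ) TO CONSOLE;
   ELSE
      MESSAGE STRING ( 'Forced conflict: uploaded update' ) TO CONSOLE;
   END IF;
   DELETE t2_new;
   DELETE t2_old;
END;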
You can use these three events to solve complex synchronization problems,
such as dealing with differences in database design between the consolidated
and remote databases. Rows from different tables can be combined into one and
vice versa: Changes made to one table can be spread across multiple tables.
Actions performed on the remote database can be altered when they reach the
consolidated one; for example, updates and deletes can be changed into inserts
to record everything as a detailed audit trail. This kind of logic is possible
because all three sets of data are available when a forced conflict occurs: the old
and new rows from the remote database and the current row on the consolidated
database.
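As an illustration of the audit trail idea, the forced-conflict scripts can simply copy whatever arrives into a history table. This is a sketch only; the t2_history table and its columns are hypothetical, not part of the earlier examples:

CREATE TABLE t2_history (  -- hypothetical audit table on the consolidated database
   logged_at  TIMESTAMP NOT NULL DEFAULT TIMESTAMP,
   image_type VARCHAR ( 3 ) NOT NULL,  -- 'old' or 'new'
   key_1      UNSIGNED BIGINT NOT NULL,
   key_2      INTEGER NOT NULL,
   non_key_1  INTEGER NOT NULL,
   non_key_2  INTEGER NOT NULL );

CALL ml_add_table_script ( '1', 't2', 'upload_old_row_insert', '
INSERT t2_history ( image_type, key_1, key_2, non_key_1, non_key_2 )
VALUES ( ''old'', ?, ?, ?, ? )' );

CALL ml_add_table_script ( '1', 't2', 'upload_new_row_insert', '
INSERT t2_history ( image_type, key_1, key_2, non_key_1, non_key_2 )
VALUES ( ''new'', ?, ?, ?, ? )' );

With no resolve_conflict script doing anything further, every uploaded insert, update, and delete leaves its before- and after-images behind as a permanent record.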
7.6.4.5 Handling Upload Errors
An upload error is different from a conflict in two ways: There is no built-in
mechanism to silently handle an error, and the default action is to roll back the
upload and stop the synchronization session. Changing this behavior isn’t easy,
and that’s why it’s important to prevent errors from occurring in the first place.
The most common upload error is a coding mistake in a synchronization
script. These are usually easy to repair, and because the whole upload was rolled
back you can just fix the script on the consolidated database and run the synchronization session over again.
Tip: Watch out for characteristic errors when modifying your database design.
A “characteristic error” is a mistake you make because of the way the software is
designed. In this case, because MobiLink requires you to write several different
scripts for the same table, it’s easy to forget one of them when the table layout
changes. For example, when adding or removing columns in a table, check
these scripts: upload_insert, upload_update, upload_fetch, upload_old_row_insert, upload_new_row_insert, and download_cursor. Also check the list of
columns in the CREATE SYNCHRONIZATION PUBLICATION statement. If you
are modifying the primary key definition, also check the upload_update,
upload_delete, and download_delete_cursor scripts, as well as the shadow table
and delete trigger. Shadow tables are discussed in Section 7.6.4.7, “Downloading Deletes.”
Tip: Always test synchronization after even the simplest schema change. Construct a pair of test databases and a set of simple test cases that exercise all of
the MobiLink scripts, plus a “read me” file describing how to run the test and
check the results. Do not rely on user-oriented regression testing to exercise all
the scripts or to catch subtle problems. Testing is very important with MobiLink
scripts because even basic syntax errors won’t be discovered until the scripts are
executed.
More serious upload errors involve the actual data, such as a duplicate primary
key or a referential integrity violation. In most applications the best approach is
to design the databases so these errors don’t happen. The DEFAULT GLOBAL
AUTOINCREMENT feature and GLOBAL_DATABASE_ID option can be
used to guarantee unique primary keys, for example; see Section 1.8.2 for more
information.
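As a brief sketch of that approach (the option value and table are illustrative): each remote database is given its own GLOBAL_DATABASE_ID, and DEFAULT GLOBAL AUTOINCREMENT then generates keys in a range no other database will use:

SET OPTION PUBLIC.GLOBAL_DATABASE_ID = '5';

CREATE TABLE t6 (
   key_1     UNSIGNED BIGINT NOT NULL DEFAULT GLOBAL AUTOINCREMENT ( 1000000 ),
   non_key_1 VARCHAR ( 100 ) NOT NULL DEFAULT '',
   PRIMARY KEY ( key_1 ) );
-- With a partition size of 1000000 and database id 5, key_1 values are
-- generated in the range 5000001 through 6000000, so uploaded inserts
-- from different remotes can never collide on the primary key.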
Referential integrity violations won’t happen if the same foreign key relationships exist on both the remote and consolidated databases and you
remember to include all the necessary tables in the CREATE PUBLICATION
statement. Schema differences require more work on your part, perhaps involving the TableOrder extended option described in Section 7.4.1, “CREATE PUBLICATION,” or forced conflict scripts described in Section 7.6.4.4, “Handling Upload Conflicts.”
When push comes to shove, however, some applications require non-stop
operations even in the face of upload errors. One approach is to skip the bad
data and carry on with the rest, which is possible with the handle_error script.
The following example shows how to skip all errors:
CALL ml_add_connection_script ( '1', 'handle_error',
'CALL ml_handle_error ( ?, ?, ?, ?, ? )' );
CREATE PROCEDURE ml_handle_error (
INOUT @action_code INTEGER,
IN @error_code INTEGER,
IN @error_message LONG VARCHAR,
IN @ml_username VARCHAR ( 128 ),
IN @table VARCHAR ( 128 ) )
BEGIN
SET @action_code = 1000; -- skip row and continue
END;
You can easily write a more sophisticated handle_error script to take different
actions depending on which errors occur and which tables are involved. The
action code parameter defaults to 3000, which means roll back the upload and
stop the synchronization session. This is also the default action when no handle_error script is present. Other values include 1000, shown above, to skip the
uploaded row causing the error and carry on with the rest of the upload, and
4000 to roll back the upload and shut down the server entirely.
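For example, a slightly more selective handle_error procedure might skip only primary key violations (SQL Anywhere error code -193, as seen in the sample output later in this section) and let every other error stop the session. This is a sketch, not production code:

CREATE PROCEDURE ml_handle_error (
   INOUT @action_code   INTEGER,
   IN    @error_code    INTEGER,
   IN    @error_message LONG VARCHAR,
   IN    @ml_username   VARCHAR ( 128 ),
   IN    @table         VARCHAR ( 128 ) )
BEGIN
   IF @error_code = -193 THEN
      SET @action_code = 1000; -- skip the duplicate row and continue
   ELSE
      SET @action_code = 3000; -- roll back the upload and stop
   END IF;
END;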
One way to record all the errors for later analysis is to run the MobiLink
server with the -o option to write all the error messages to a text file. Another
way is to insert the error information into your own table on the consolidated
database. You can do this in two places: the handle_error script and the
report_error script. The advantage to putting your INSERT in the report_error
script is that it will run on a separate connection and will be committed immediately, so the row will still be there if the upload is rolled back. An INSERT in
the handle_error script will be rolled back if the action code is set to 3000 or
4000 now or at some later point before the upload is committed.
The following is an example of a report_error script together with the table
it uses. The error_code column is defined as VARCHAR instead of INTEGER
so this table can also be used in the report_ODBC_error script that receives an
alphanumeric SQLSTATE instead of a number.
CREATE TABLE ml_error (
ml_username VARCHAR ( 128 ) NOT NULL,
inserted_at TIMESTAMP NOT NULL DEFAULT TIMESTAMP,
unique_id UNSIGNED BIGINT NOT NULL DEFAULT AUTOINCREMENT,
action_code INTEGER NOT NULL,
error_code VARCHAR ( 100 ) NOT NULL,
error_message LONG VARCHAR NOT NULL,
table_name VARCHAR ( 128 ) NOT NULL,
PRIMARY KEY ( ml_username, inserted_at, unique_id ) );
CALL ml_add_connection_script ( '1', 'report_error',
'CALL ml_report_error ( ?, ?, ?, ?, ? )' );
CREATE PROCEDURE ml_report_error (
IN @action_code INTEGER,
IN @error_code INTEGER,
IN @error_message LONG VARCHAR,
IN @ml_username VARCHAR ( 128 ),
IN @table VARCHAR ( 128 ) )
BEGIN
INSERT ml_error
VALUES ( @ml_username,
DEFAULT,
DEFAULT,
@action_code,
CAST ( COALESCE ( @error_code, 0 ) AS VARCHAR ( 100 ) ),
COALESCE ( @error_message, '' ),
COALESCE ( @table, '' ) );
END;
Here is what the ml_error row looks like after a primary key violation has been
skipped:
'1', '2003 07 28 16:55:54.710000', 8, 1000, '-193',
'ODBC: [Sybase][ODBC Driver][Adaptive Server Anywhere]Integrity
constraint violation: Primary key for table ''t1'' is not
unique (ODBC State = 23000, Native error code = -193)', 't1'
Tip: If all you want to do is record diagnostic information about the first error
encountered and then let the session roll back and stop, leave out the handle_error script and use only a report_error script like the one above.
Another way to handle upload errors is to change the basic scripts that receive
the uploaded rows. For example, you can use the ON EXISTING SKIP clause
on the INSERT statement in an upload_insert script to skip any rows that have
primary key violations. Or use ON EXISTING UPDATE to change the failing
INSERT into an UPDATE that will work. These techniques only work on a SQL
Anywhere consolidated database, of course; for Oracle and other software you
must work harder, perhaps using forced conflict scripts as described in Section
7.6.4.4, “Handling Upload Conflicts.”
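For instance, here is a sketch of an upload_insert script that uses ON EXISTING UPDATE; it assumes the same t2 layout as the earlier examples, so a re-uploaded row quietly becomes an update instead of a primary key violation:

CALL ml_add_table_script ( '1', 't2', 'upload_insert', '
INSERT INTO t2 ( key_1, key_2, non_key_1, non_key_2 )
ON EXISTING UPDATE
VALUES ( ?, ?, ?, ? )' );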
7.6.4.6 Downloading Inserts and Updates
Unlike the upload stream, the download stream is entirely under your control as
the author of the MobiLink scripts. Downloaded deletes are discussed in the
next section; this section describes how to construct the insert and update portion of the download stream.
For each table to be downloaded, you must write a download_cursor script
that selects all the rows from the consolidated database that must be inserted or
updated on the remote database. You don’t have to worry about which rows
need to be inserted and which ones updated; that’s all taken care of by dbmlsync
when it receives the download stream. Here’s how that works: If the primary
key of a downloaded row matches the primary key of a row that already exists
on the remote database, dbmlsync treats it as a downloaded update. If the primary key doesn’t match any row on the remote database, it’s processed as an
insert. This is sometimes called an “upsert” for “update-or-insert as required.”
Tip: Don’t ever update the primary key value of any row involved in MobiLink
synchronization, and don’t delete and immediately re-insert a row with the same
primary key value. MobiLink depends on the primary key values to determine
which rows are being inserted, updated, and deleted. If your application requires
key values to change, make that key a separate UNIQUE constraint on the table,
and add a DEFAULT GLOBAL AUTOINCREMENT column as the PRIMARY KEY. A
row can only be tracked reliably in a distributed database environment if it has a
primary key that never changes; otherwise there is chaos.
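A hypothetical sketch of that design: the business key that is allowed to change is demoted to a UNIQUE constraint, and an immutable surrogate becomes the primary key:

CREATE TABLE t7 (  -- hypothetical; column names are illustrative
   pkey        UNSIGNED BIGINT NOT NULL DEFAULT GLOBAL AUTOINCREMENT ( 1000000 ),
   part_number VARCHAR ( 20 ) NOT NULL,  -- business key; allowed to change
   description VARCHAR ( 100 ) NOT NULL DEFAULT '',
   PRIMARY KEY ( pkey ),
   UNIQUE ( part_number ) );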
The simplest download_cursor script is “SELECT * FROM t,” which sends all
the columns and rows down to the remote. New rows are automatically inserted
by dbmlsync, old rows are updated, and in effect a “snapshot” of the entire con-
solidated table is downloaded. This is often called “snapshot synchronization.”
If the table is treated as read-only on the remote database, and if rows aren’t
deleted from the consolidated, snapshot synchronization works to replace the
entire contents of the table on the remote database with every synchronization.
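For some table t whose every column is synchronized, in CREATE TABLE order, a snapshot download_cursor script is just this (a sketch; the table name is a placeholder):

CALL ml_add_table_script ( '1', 't', 'download_cursor',
   'SELECT * FROM t' );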
Snapshot synchronization may work for small, rapidly changing tables, but
for large tables it generates too much network traffic. A more common technique is to download only those rows that have been inserted or updated on the
consolidated database since the last download. If you put a TIMESTAMP
DEFAULT TIMESTAMP column in your consolidated table, you can make use
of the last_download parameter passed to the download_cursor script as the first
“?” placeholder. This is called a “timestamp download”:
<typical_download_cursor> ::= SELECT { <column_name> "," }
<column_name>
FROM <consolidated_table_name>
WHERE <when_updated_column_name> "> ?"
<when_updated_column_name> ::= a TIMESTAMP column with DEFAULT TIMESTAMP
The following is an example of a simple table and the corresponding timestamp-based download_cursor script. Every time a row is inserted into t1, or
updated, the last_updated column gets set to CURRENT TIMESTAMP by the
special DEFAULT TIMESTAMP feature. This column only appears in the
WHERE clause, not the SELECT list; it is not included on the remote database
because it isn’t needed there. The only reason last_updated exists on the consolidated database is to control the download_cursor script.
CREATE TABLE t1 (
key_1 UNSIGNED BIGINT NOT NULL DEFAULT GLOBAL AUTOINCREMENT ( 1000000 ),
key_2 INTEGER NOT NULL DEFAULT 0,
non_key_1 VARCHAR ( 100 ) NOT NULL DEFAULT '',
non_key_2 VARCHAR ( 100 ) NOT NULL DEFAULT '',
last_updated TIMESTAMP NOT NULL DEFAULT TIMESTAMP,
PRIMARY KEY ( key_1, key_2 ) );
CALL ml_add_table_script ( '1', 't1', 'download_cursor', '
SELECT key_1,
key_2,
non_key_1,
non_key_2
FROM t1
WHERE last_updated > ?' );
Note: The initial value for the last_download parameter is 1900-01-01.
You can join data from different tables in a download_cursor script, you can
select rows based on complex WHERE clauses, you can do just about anything
that’s required to build the desired result set to be applied to the named table in
the remote database. The only rule you must follow is that the same number of
columns must appear in the SELECT list as in the CREATE PUBLICATION for
that table, with the same or compatible data types in the same order as they exist
in the CREATE TABLE on the remote database. In many cases that’s easy
because the tables look the same on both databases and all the columns are
being synchronized.
In some applications, however, the schema is different, and/or different sets
of rows must be downloaded to different remote databases. MobiLink provides
some assistance for these special cases by providing the MobiLink user name
for the current synchronization session as the second parameter to the download_cursor script. You can partition the data for different remote databases by
storing the MobiLink user name in a database column and referring to this
parameter as the second “?” placeholder in the WHERE clause.
Tip: You can call a stored procedure from a download_cursor script, as long as that procedure returns a single result set that meets the download requirements of the table on the remote database.
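Here is a sketch of that technique for the t1 table shown earlier; the procedure name and parameter names are illustrative. The MobiLink user name parameter is accepted but unused here; a real script might use it to partition rows:

CALL ml_add_table_script ( '1', 't1', 'download_cursor',
   'CALL ml_download_t1 ( ?, ? )' );

CREATE PROCEDURE ml_download_t1 (
   IN @last_download TIMESTAMP,
   IN @ml_username   VARCHAR ( 128 ) )
BEGIN
   SELECT key_1,
          key_2,
          non_key_1,
          non_key_2
     FROM t1
    WHERE last_updated > @last_download;
END;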
Here is a short but intricate example that demonstrates some of the freedom you
have when writing a download_cursor script:
CALL ml_add_table_script ( '1', 'tr4', 'download_cursor', '
SELECT tc3.key_3,
tc2.non_key_1,
tc3.non_key_1
FROM tc1
JOIN tc2 ON tc1.key_1 = tc2.key_1
JOIN tc3 ON tc2.key_1 = tc3.key_1 AND tc2.key_2 = tc3.key_2
WHERE tc3.last_update > ?  -- last_download
AND tc1.db_id = CAST ( ? AS BIGINT )' );  -- ML_username
Here’s how the example works:
1. The script is for downloading data to a table named tr4 on the remote database. There is no table with that name on the consolidated database, but
that doesn’t matter as long as the script builds a result set that matches tr4.
2. The SELECT joins three tables on the consolidated database, tc1, tc2, and
tc3, all of which have different names and schemas from the remote table
tr4. MobiLink scripts have no access to the remote database; they can only
refer to tables on the consolidated database. Here is what the three tables
on the consolidated database look like:
CREATE TABLE tc1 (  -- on the consolidated database
key_1 BIGINT NOT NULL,
db_id BIGINT NOT NULL,
PRIMARY KEY ( key_1 ) );
CREATE TABLE tc2 (  -- on the consolidated database
key_1 BIGINT NOT NULL,
key_2 BIGINT NOT NULL,
non_key_1 BIGINT NOT NULL,
PRIMARY KEY ( key_1, key_2 ),
FOREIGN KEY ( key_1 ) REFERENCES tc1 );
CREATE TABLE tc3 (  -- on the consolidated database
key_1 BIGINT NOT NULL,
key_2 BIGINT NOT NULL,
key_3 BIGINT NOT NULL UNIQUE,
non_key_1 BIGINT NOT NULL,
last_update TIMESTAMP NOT NULL DEFAULT TIMESTAMP,
PRIMARY KEY ( key_1, key_2, key_3 ),
FOREIGN KEY ( key_1, key_2 ) REFERENCES tc2 );
3. The SELECT list picks three columns from tc2 and tc3 in the order that
matches the requirements of tr4. This is a critical point: The CREATE
PUBLICATION names the columns in tr4 that are to be synchronized, the
CREATE TABLE for tr4 specifies the column order, and the download_cursor SELECT must agree. Here is what the table and publication
look like on the remote database:
CREATE TABLE tr4 (  -- on the remote database
key_1 BIGINT NOT NULL,
non_key_1 BIGINT NOT NULL,
non_key_2 BIGINT NOT NULL,
PRIMARY KEY ( key_1 ) );
CREATE PUBLICATION p1 (
TABLE tr4 ( key_1,
non_key_1,
non_key_2 ) );
4. The FROM clause in the download_cursor script joins tc1, tc2, and tc3 according to their foreign key relationships. This is an example of denormalization: The download_cursor is flattening the multi-level hierarchy on the consolidated database into a single table on the remote database.
5. The WHERE clause implements the timestamp download technique as discussed earlier: tc3.last_update > ?.
6. The WHERE clause also uses a second “?” placeholder to limit the result set to rows that match on the MobiLink user name: tc1.db_id = CAST ( ? AS BIGINT ).