Yes, the product automatically adds new objects and fields, and keeps old objects and fields that have been removed from Salesforce until you specifically request that they be dropped from the database. The only non-automated process is a data type change in Salesforce where the database column was originally a number or a date. In that case, you get a warning message every time a job runs on that object, telling you which field didn't match. If any actual data ever cannot be copied, the job fails and lets you know that you must decide whether to drop and re-add the column with the correct data type, using the following command:

RJ4Salesforce -config [config name] -dropColumnForce [object name] [field name]

This is not automated because you might want to preserve or even recover the old data: Salesforce drops and re-adds the field when its type changes, which can lead to data loss.
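
As a minimal sketch of preserving the old values before the drop, assuming a hypothetical ACCOUNT table with a RATING column whose type changed (the CREATE TABLE ... AS SELECT syntax varies by database):

-- Hypothetical table and column names: copy the old values aside
-- before running -dropColumnForce, so they can be recovered later.
CREATE TABLE ACCOUNT_RATING_OLD AS
  SELECT ID, RATING FROM ACCOUNT;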

If you turn on History Tracking, prior versions of each record are retained in a secondary table. You can purge the history with a SQL DELETE or TRUNCATE statement against the secondary backup tables. There are no versions of the entire dataset, since we're using a single database schema with secondary backup History Tracking tables to keep prior images of individual records.
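
For example, assuming the secondary backup table for Account is named ACCOUNT_HISTORY (the actual naming convention may differ in your installation):

-- Remove every prior record image for one object:
TRUNCATE TABLE ACCOUNT_HISTORY;

-- Or purge selectively, e.g. images modified before a cutoff date,
-- assuming the history table carries the SystemModStamp column
-- (date literal syntax varies by database):
DELETE FROM ACCOUNT_HISTORY WHERE SYSTEMMODSTAMP < DATE '2015-01-01';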

All data is retained forever. All versions, if you turn History Tracking on, are likewise retained unless you physically delete them from the database. The prior versions are kept in separate tables, so you can manage space more easily.

You can create multiple databases for different purposes, and each is updated when its own replication cycle runs, under your control. But instead of versioning the entire dataset for a single org, you can keep prior versions of individual records in a secondary table for each Salesforce object. This is not Salesforce's History object; it is our own feature. There is a primary table for each object and a secondary table that contains each prior snapshot of a record. This approach conserves the database space required to store all versions and fits better into the incremental replication process.
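
As an illustration of the primary/secondary layout, assuming hypothetical ACCOUNT and ACCOUNT_HISTORY table names, the full version history of one record could be read with:

SELECT * FROM ACCOUNT WHERE ID = '[record id]';          -- current image
SELECT * FROM ACCOUNT_HISTORY WHERE ID = '[record id]';  -- all prior images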

Yes. This is all done through your own scheduling system, Windows Scheduler, or cron if you are using UNIX.

Yes. This is the preferred implementation. You can use any job scheduler, including Windows Scheduler, UNIX cron, or a commercial job scheduler.
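
For example, a crontab entry for a nightly replication cycle (the install path and configuration name here are hypothetical):

# Run the replication job every night at 2:00 AM
0 2 * * * /opt/relationaljunction/RJ4Salesforce -config prod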

Yes, there is a Real Time Option, which uses Salesforce Outbound Messages to drive the process. The procedure to set it up is as follows:

  1. Define an Outbound Message for each desired object in Salesforce, sending the ID and SystemModStamp fields for new or changed records to your Relational Junction server.
  2. Open a port in your firewall to *.salesforce.com so the Outbound Messages can come in.
  3. Install the WAR file for the Real Time Option application.
  4. Run a -getObjectIdMap command for each object you wish to receive in real time. This maps Salesforce IDs to objects by their 4-character prefix, pattern-matching against the existing data in your database.
  5. Run the following command to start pulling data as the record IDs are captured, using a job scheduled every 5 minutes to ensure continuous operation in the event of failure (see the crontab sketch after this list):
    RJ4SalesforceRTO -config [config name] -repeat [delay seconds] -getRealTime
  6. To send changed data to Salesforce, add a -setGlobal command before or after -getRealTime, with the desired objects listed in the upload.config file.
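
As a sketch of the scheduling in step 5, a crontab entry can relaunch the real-time job if it has stopped; flock keeps overlapping instances from starting (the path, lock file, configuration name, and 60-second delay are all hypothetical):

# Relaunch the real-time puller every 5 minutes if it has died
*/5 * * * * flock -n /tmp/rj4sf-rto.lock /opt/relationaljunction/RJ4SalesforceRTO -config prod -repeat 60 -getRealTime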

Yes. Relational Junction for Salesforce allows you to create separate configurations to subset the objects and to point to different databases. You specify which configuration to use for each job, giving you total flexibility to mix and match.
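
For example, two jobs could replicate different subsets into different databases, each run with its own configuration (the configuration names here are hypothetical):

RJ4Salesforce -config warehouse_full
RJ4Salesforce -config reporting_subset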

Yes, you can subset the Salesforce objects to include only the specific objects you want. You can also limit replication to Standard, Custom, History, Chatter, Share, or Tag objects.