Batch process in DBOutput Component

Comments

  • avackova
    See DBOutputTable - it has batchMode and batchSize attributes.
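
    For reference, "batch mode" here corresponds to JDBC statement batching: rows are queued with addBatch() and sent batchSize at a time with executeBatch(), instead of one network round trip per row. A minimal plain-JDBC sketch of the contrast (the customer table and its column are hypothetical, not part of CloverETL):

    import java.sql.Connection;
    import java.sql.PreparedStatement;

    public class BatchModeSketch {
        // batchMode=false: one network round trip per record.
        static void insertOneByOne(Connection conn, String[] names) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customer (name) VALUES (?)")) {
                for (String name : names) {
                    ps.setString(1, name);
                    ps.executeUpdate();          // round trip for every row
                }
            }
        }

        // batchMode=true, batchSize=1000: one round trip per 1000 records.
        static void insertBatched(Connection conn, String[] names) throws Exception {
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customer (name) VALUES (?)")) {
                int batchSize = 1000;
                for (int i = 0; i < names.length; i++) {
                    ps.setString(1, names[i]);
                    ps.addBatch();               // queue the row locally
                    if ((i + 1) % batchSize == 0) ps.executeBatch();
                }
                ps.executeBatch();               // flush the remainder
            }
        }
    }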
  • dpavlis
    By properly tweaking the batchSize and commitSize of DBOutputTable, we have been able to get around 40k records per second inserted into the DB.

    The speed also depends on what the DB has to do after each record is inserted. It may be a good idea to drop or disable any index or trigger on the target table prior to loading and then re-enable it afterwards (a sketch of this follows below).

    Another option is to use one of the bulk-loaders which Clover has for Oracle, DB2, Informix, MSSQL & PostgreSQL.
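
    As a rough sketch of the disable-index idea (the statements shown are MySQL syntax and vary per DB; the customer table, URL, and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class DisableIndexesSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test", "user", "password");
                 Statement st = conn.createStatement()) {
                // MySQL syntax (affects non-unique indexes); Oracle would use
                // ALTER INDEX ... UNUSABLE and ALTER INDEX ... REBUILD instead.
                st.execute("ALTER TABLE customer DISABLE KEYS");
                // ... run the bulk insert here ...
                st.execute("ALTER TABLE customer ENABLE KEYS"); // rebuild indexes once
            }
        }
    }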
  • mishra.hanu@yahoo.co.in
    I am not able to optimize the system speed-wise; it is still taking a lot of time to load the data. Can anybody help me with how to tweak batchSize and commitSize if I have 2 million records?


    Thanks,
    Hanuman Mishra
  • dpavlis
    First of all, what DB are you using? Not all DBs and JDBC drivers support batch-mode updates/inserts (Oracle does, quite well; some others do not).

    The batchSize can easily be set to 1k or 10k. The commit size can be 10 times the batchSize - e.g. if batchSize=1000 then commitSize=10000.

    That means that Clover will send inserts/updates to the DB 1000 at a time and will call commit on the DB every 10000 records, as sketched below. If this improves your performance, you can then try increasing to batchSize=5000 and commitSize=50000.
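
    A sketch of that batchSize/commitSize cadence in plain JDBC terms (the customer table and the load method are hypothetical, not CloverETL API):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.List;

    public class CommitCadenceSketch {
        static final int BATCH_SIZE = 1000;    // rows sent per executeBatch round trip
        static final int COMMIT_SIZE = 10000;  // rows per transaction, 10x the batch size

        static void load(Connection conn, List<String> names) throws Exception {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO customer (name) VALUES (?)")) {
                int count = 0;
                for (String name : names) {
                    ps.setString(1, name);
                    ps.addBatch();
                    count++;
                    if (count % BATCH_SIZE == 0) ps.executeBatch(); // 1000 rows per trip
                    if (count % COMMIT_SIZE == 0) conn.commit();    // commit every 10000
                }
                ps.executeBatch(); // flush the tail
                conn.commit();
            }
        }
    }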
  • mishra.hanu@yahoo.co.in
    My DB is MySQL, so please tell me the settings for the same.

    Thanks,
    Hanuman
  • avackova
    Hi,
    it seems that the MySQL database is not very "optimizable" this way: for a 4-field table with 1,000,000 records I got the best time (213 sec.) with the default settings, and enlarging the commit size only increased the execution time (up to 390 sec. for commit = 1000000). For a PostgreSQL database, however, I was able to shorten the execution time more than 4 times by setting larger values of batchSize and commit.
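
    One likely reason MySQL sees little benefit here: by default, MySQL Connector/J executes a JDBC batch as individual statements. The driver's rewriteBatchedStatements connection option makes it rewrite a batch into multi-row INSERTs, which may be worth trying (host, database, and credentials below are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MySqlBatchUrlSketch {
        public static void main(String[] args) throws Exception {
            // rewriteBatchedStatements=true lets Connector/J collapse a JDBC
            // batch into multi-row INSERTs instead of sending rows one by one.
            String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                System.out.println("connected: " + conn.getMetaData().getURL());
            }
        }
    }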
  • mishra.hanu@yahoo.co.in
    Hi,
    Can anybody help me with loading 2 million records into a MySQL DB? It is still taking too much time (around 2 hours).

    I used:

    /* dbGroup.setUseBatch(true);
       dbGroup.setBatchSize(5000);
       dbGroup.setRecordsInCommit(50000); */

    /* dbGroup.setUseBatch(true);
       dbGroup.setBatchSize(1000);
       dbGroup.setRecordsInCommit(10000); */

    /* dbGroup.setUseBatch(true);
       dbGroup.setBatchSize(10000);
       dbGroup.setRecordsInCommit(100000); */

    but none of these settings has sped up the process.

    Thanks,
    Hanuman Mishra
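
    Since the bulk-loader list earlier in this thread does not include MySQL, a common fallback is MySQL's own LOAD DATA INFILE, which is usually far faster than row-by-row INSERTs: write the records to a delimited file first, then load it in one statement. A sketch (file path, table, and connection details are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class LoadDataSketch {
        public static void main(String[] args) throws Exception {
            // Assumes the 2 million records were first written to /tmp/customers.csv
            // and that the server and driver permit LOCAL infile loading.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/test?allowLoadLocalInfile=true",
                    "user", "password");
                 Statement st = conn.createStatement()) {
                int loaded = st.executeUpdate(
                        "LOAD DATA LOCAL INFILE '/tmp/customers.csv' "
                        + "INTO TABLE customer "
                        + "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'");
                System.out.println(loaded + " rows loaded");
            }
        }
    }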
