In files with generic names, only one at a time

  • avackova
    Hello Adrien,
    have you tried creating a File Event Listener? It starts the graph every time a new file appears. In version 2.9 it doesn't work very well with wildcards, but it might be enough for you to watch only the input directory. Another problem may be with large files, when the listener starts the graph before the whole data file is uploaded.
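
    As a rough standalone illustration of the general pattern (plain Java, not Clover's File Event Listener; the directory path and the "start the graph" step are hypothetical), reacting to a newly appearing file, including the partial-upload caveat, could look like this:

    // Standalone sketch of the "react when a new file appears" idea - NOT the
    // CloverETL File Event Listener, just an illustration of the pattern and of
    // the partial-upload pitfall mentioned above. Paths are made up.
    import java.nio.file.*;

    public class InputDirWatcher {
        public static void main(String[] args) throws Exception {
            Path inputDir = Paths.get("/data/input");        // hypothetical input directory
            WatchService watcher = FileSystems.getDefault().newWatchService();
            inputDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            while (true) {
                WatchKey key = watcher.take();               // blocks until something appears
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path newFile = inputDir.resolve((Path) event.context());
                    // Caveat: the file may still be uploading at this point,
                    // which is exactly the "large files" problem described above.
                    System.out.println("would start the graph for: " + newFile);
                }
                key.reset();
            }
        }
    }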
  • acominotto
    Hello Agata,

    Thanks for the quick response.

    How will the temporary files be handled if I do it this way?

    Thanks in advance.

    PS: for the File Event Listener, I already have a trigger-file trick (I create the trigger file only once my data is fully updated).
  • avackova
    Hello Adrien,
    where do you use temporary files? Can you really not specify the file names or location?
  • acominotto
    My graphs are divided into 3 main parts: reading, processing, writing (they are pretty big).

    So I use temporary files between all those parts (reading -> write to temp files; read temp files -> processing -> write to temp files; read temp files -> writing).

    The path where they are written is static and defined by the parameter file.

    Is there another way to define the location of the temporary files?

    Thanks in advance.
  • avackova
    What about using the event_file_name parameter? Something like fileURL="${DATATMP_DIR}/${event_file_name}.tmp1" in your Writers and Readers?
  • acominotto
    Hi Agata,

    This sounds great!

    If I use an empty trigger file to know when the data is fully transferred to the Clover server (e.g. xxx.trg), do I have to name our zip file xxx.trg.zip to retrieve it in the graph, because the only information I will have is the name of the file that the event listener is waiting for?

    The same goes for the output file: can I do some processing on the output file name, or do I have to name it xxx.trg.zip, because xxx.trg will be the only 'dynamic' part of my graph occurrence?

    One more question about this: we will be using large XML files; if we run the graphs in parallel like this, is there a chance of an out-of-memory error due to the JVM heap size, or does Clover handle it well?

    Thanks in advance and thanks again for the quick answers.
  • avackova
    Hello Adrien,
    I'm not sure if I understand you well, but I see the scenario as follows:
    • fileURL in the Reader uses wildcards, let's say: zip:(*.zip)#*.xml

    • the trigger file has a unique name, something like xxx.trg, where xxx is a random number

    • temporary file names depend on the trigger file name, e.g. fileURL="${DATATMP_DIR}/${event_file_name}.phase0.tmp"

    • the problem can be with the output file name - here I see 2 possibilities (both sketched after this list):
      • use the trigger file name as the basis for the output name

      • use the input file name (may be complicated):
        • add an autofilling field (source_name) to your input metadata

        • put the input file name into the dictionary during the transformation

        • add a phase after the whole processing that renames the output file according to the name in the dictionary
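
    A minimal, hypothetical sketch of the two naming options (plain Java, not Clover code; the trigger name, directories and suffixes are made-up examples):

    // Hypothetical sketch of the two output-naming options above (plain Java).
    import java.nio.file.*;

    public class OutputNaming {
        public static void main(String[] args) {
            String trigger = "4711.trg";                     // ${event_file_name}, e.g. a random number + .trg

            // Option 1: derive every name from the trigger file name.
            String tempFile   = "/data/tmp/" + trigger + ".phase0.tmp";
            String outputFile = "/data/out/" + trigger.replace(".trg", ".zip");
            System.out.println(tempFile + " -> " + outputFile);

            // Option 2: capture the real input file name during processing
            // (autofilling field source_name / dictionary) and rename the
            // provisional output in a final phase.
            String sourceName  = "customers.xml";            // would come from source_name
            Path provisional   = Paths.get("/data/out/" + trigger + ".out.xml");
            Path finalLocation = Paths.get("/data/out/" + sourceName.replace(".xml", ".out.xml"));
            // Files.move(provisional, finalLocation);       // the rename step of the extra phase
            System.out.println(provisional + " -> " + finalLocation);
        }
    }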

    Regarding the question about the XML files and memory, it depends on the component you use for reading: XPathReader reads the whole file into memory, so when you try to process several files at once it can cause an OOM error; XMLExtract reads data sequentially, so different XMLExtracts can work in parallel with no fear of an OOM error.
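
    As a rough standalone illustration of that difference (plain Java XML APIs, not the internals of XPathReader or XMLExtract; file and element names are hypothetical):

    // Rough illustration of the memory difference described above: DOM loads the
    // whole document into memory, while a streaming (StAX) reader walks it
    // element by element. This uses plain Java XML APIs, not Clover components.
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;
    import java.io.FileInputStream;

    public class XmlMemoryDemo {
        public static void main(String[] args) throws Exception {
            String file = "big.xml";                         // hypothetical large input file

            // Whole-document approach: memory use grows with the file size
            // (this is the kind of behaviour that can lead to an OOM error).
            org.w3c.dom.Document wholeDocument = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new FileInputStream(file));

            // Streaming approach: only the current element is held in memory,
            // so several such readers can run in parallel safely.
            XMLStreamReader reader = XMLInputFactory.newInstance()
                    .createXMLStreamReader(new FileInputStream(file));
            long records = 0;
            while (reader.hasNext()) {
                if (reader.next() == XMLStreamConstants.START_ELEMENT
                        && "record".equals(reader.getLocalName())) {   // hypothetical record element
                    records++;
                }
            }
            System.out.println("records: " + records);
        }
    }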
  • acominotto
    Ok, I'll go with that!

    Your scenario was pretty much it.

    In fact I will use this temporary solution while waiting for version 3.1, and then I will use another graph that runs my graphs in the same JVM, with parameters that will allow me to have a simpler way of naming.

    Thank you very much for the quick answers!

    PS: thank you also for the description of XMLExtract, this is very helpful!

    Adrien
  • hneff1
    I am not totally clear on how you are making sure the entire file has been transferred to the FTP server prior to kicking off the job flow with the event listener based on file appearance. You mention something about using a "trigger" file. How do you do this?

    Thanks,
    Heather
  • slechtaj
    Hi Heather,

    The idea behind the trigger file is that the actual event is triggered not by the large file, but by the tiny one. The tiny file, however, must always be created after the large file is fully loaded.

    It is enough to add another (later) phase to the graph that loads the data to the FTP server and, in that phase, create just the tiny file. This way you can be sure the large file is already loaded, as the earlier phase has to finish before the second file is created.
  • hneff1
    Thanks for the reply Jan. I am still not totally clear. Hope I am not being dense :?

    My scenario is that a customer FTPs a file to the FTP server and the file appears there. Once the file is completely copied to the FTP server, I want to use a file event listener to kick off a job flow to process the file.

    You mentioned using the tiny trigger file as the event that kicks off the job flow. What causes the trigger file to be created and how is it created?

    Thanks,
    Heather
  • slechtaj
    Hi Heather,

    That is actually the thing. The tiny file has to be uploaded by the same party right after the big file is uploaded.
    Let me give you an example: using CloverETL we produce a large file that is written directly to a remote location. At the given location we expect to receive this file, but since we don't know when the file is completely uploaded, we create another (tiny) file within the same graph (in a later phase, in order to make sure the Writer in the earlier phases has already finished) - and set up an event listener waiting for the appearance of the tiny file. When it appears, the large file is processed.
    As you can see in this example, the tiny file is created by Clover using the same graph as the large file. It of course does not have to be created by Clover, but you need to make sure the application starts creating the tiny file only after (not before) the large file is fully copied to the remote location.
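
    A minimal standalone sketch of that ordering, for an uploader outside Clover (plain Java; all paths are hypothetical):

    // Rough sketch of the ordering described above: the tiny trigger file is
    // created only AFTER the large file is fully written. Paths are made up.
    import java.nio.file.*;

    public class UploadWithTrigger {
        public static void main(String[] args) throws Exception {
            Path source   = Paths.get("/local/export/data.zip");
            Path dataDest = Paths.get("/remote/incoming/data.zip");      // watched location
            Path trigger  = Paths.get("/remote/incoming/data.zip.trg");  // what the listener waits for

            // Step 1: copy the large file; this call returns only when the copy is complete.
            Files.copy(source, dataDest, StandardCopyOption.REPLACE_EXISTING);

            // Step 2: only now create the empty trigger file, so the listener
            // can never fire before the large file has fully arrived.
            Files.createFile(trigger);
        }
    }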

    Let me know if anything is unclear.
  • hneff1
    Hi Jan. The use case is as follows: on the Clover server we want a daily scheduled job that first calls a jobflow which executes a graph to call a web service that generates a file, checks for a complete export file in a client's outbound SFTP directory, and then FTPs it to them once it knows the file is complete. The process that generates the file is a web service called by Clover. The web service is asynchronous, so it returns right away before the processing even starts, and therefore it has no way to know the file is complete in order to generate the tiny trigger file. Is there another way to know the file is complete prior to starting the next step in the jobflow?
  • Lukas Cholasta
    Hi,

    I was working through this use case with Heather via email, but in case anyone else is interested in a solution, here it is.

    In case you don't control the transfer process, you need to set up a jobflow that periodically checks the size of the file and compares it to the size the file had during the last iteration. You can do this using the Loop and Sleep components. Nevertheless, this process is prone to false positives due to possible network errors or other factors that may significantly slow the transfer down. Therefore, it is important to set the delay in the Sleep component high enough. If the size of the file doesn't change after this delay, the file is considered fully transferred. Attached is an example that should give you a better idea.
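
    For readers without access to the attachment, a rough standalone illustration of the size-stability check (plain Java, not the attached Clover jobflow; the path and the delay are made up):

    // Rough standalone illustration of the Loop + Sleep idea described above:
    // the file is considered fully transferred once its size stops changing
    // between two checks. Path and delay are hypothetical.
    import java.nio.file.*;

    public class WaitForStableSize {
        public static void main(String[] args) throws Exception {
            Path file = Paths.get("/ftp/inbound/export.csv");   // hypothetical incoming file
            long delayMillis = 60_000;                          // generous delay, as advised above

            long previousSize = -1;
            while (true) {
                long currentSize = Files.exists(file) ? Files.size(file) : -1;
                if (currentSize >= 0 && currentSize == previousSize) {
                    break;                                      // size unchanged -> assume transfer finished
                }
                previousSize = currentSize;
                Thread.sleep(delayMillis);                      // the "Sleep" step of the loop
            }
            System.out.println("File looks complete: " + file);
        }
    }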

    Best regards,
