Hello,
I cannot run the job every 15 minutes; I need to run the job only once a day.
Yes, once a day will do what you want.
But my question is: how can I group by 15-minute interval from this file?
By looking at the time (hh & mm) in each record, you can determine which of the 96 fifteen-minute intervals of the day the record falls into and which total should be accumulated (24 hours x 4 intervals per hour = 96). Just define an array with 97 "buckets" (the extra one is in case some record carries an invalid time) and add each record to the appropriate bucket.
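Here is a minimal sketch of that bucketing idea (in Python, just to show the arithmetic); the file name, record layout, and field positions are assumptions for illustration, not taken from your actual file:

```python
# Minimal sketch of 15-minute-interval bucketing.
# Assumptions (hypothetical, not from the original post): each input line
# carries the time as "hhmm" in the first four columns, followed by a
# numeric value to be accumulated.

NUM_INTERVALS = 24 * 4                  # 96 fifteen-minute intervals per day
buckets = [0.0] * (NUM_INTERVALS + 1)   # spare bucket 96 catches invalid times

def interval_index(hh: int, mm: int) -> int:
    """Map hh:mm to one of the 96 interval buckets; invalid times go to the spare."""
    if 0 <= hh < 24 and 0 <= mm < 60:
        return hh * 4 + mm // 15
    return NUM_INTERVALS                # invalid time -> spare bucket

with open("records.txt") as f:          # hypothetical input file name
    for line in f:
        hh, mm = int(line[0:2]), int(line[2:4])
        value = float(line[4:].split()[0])
        buckets[interval_index(hh, mm)] += value

# buckets[0] holds the total for 00:00-00:14, buckets[1] for 00:15-00:29, and so on;
# buckets[96] holds anything with an unparseable time.
```

The whole summary is produced in a single pass over the file, so the cost is essentially one sequential read plus a trivial amount of arithmetic per record.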
Also, please let me know whether processing 9 million records daily with the above logic will cause any performance problems.
I suspect it will take as long as, or longer, to unload the data than to read and summarize it. Still, the process should take well under 2 hours (assuming the data is on DASD and the process does not have to wait over and over for tape mounts).