by jkusb » Fri Oct 12, 2012 12:39 am
My team runs jobs that delete massive amounts of records, and we have been told that these jobs are impacting production jobs (something like "your job is sucking 80% of the CPU"). My first thought was: well, that shouldn't be possible… aren't there controls in place so that things like that can't happen? We were told to cut our files into smaller files, process them "over time," and run the jobs off-hours. Something isn't set up right with our system.

I know there are things that manage a job's (the system's) resources. I'm just a programmer, so I have no idea what these things are or do, but I ran across them browsing the internet: WLM, SCHENV, Service Class, Thruput Manager, Performance Goals, Execution Batch Monitor, Scheduling Priority, JES2 PERFORM & JOBPRTY & /*PRIORITY, Priority Level, plus JOBCLASS settings and initiator setup.

So, really… something in all of that should make it possible for me to put something in my JCL, or for the system to somehow know to run these jobs as "take more time and use less CPU," so that I can run these big delete jobs during the day without impacting production.
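From what I can tell (and I'm guessing here, since I don't know how our shop's JES2/WLM setup looks), the JOB statement itself has keywords that feed into all of this: CLASS, PRTY, PERFORM, SCHENV. Something like the sketch below, where the class letter, performance group number, program name, and scheduling environment name are made-up placeholders that would have to match whatever our systems programmers have actually defined:

//BIGDEL1  JOB (ACCT),'MASS DELETE',CLASS=L,PRTY=3,
//         PERFORM=80,SCHENV=BATCHLOW
//* CLASS=L         <- hypothetical low-importance job class
//* PRTY=3          <- JES2 selection priority (affects when an
//*                    initiator picks the job up, not its CPU share)
//* PERFORM=80      <- hypothetical performance/classification value
//* SCHENV=BATCHLOW <- hypothetical scheduling environment name
//DELSTEP  EXEC PGM=MYDELETE        stand-in for our delete program
//SYSPRINT DD  SYSOUT=*

But from what I've read, CLASS and PRTY mostly control which initiator selects the job and in what order, and it's WLM (through whatever service class the job gets classified into) that actually decides how much CPU it gets relative to production… and that classification is set up by the systems programmers in the WLM policy, not something I can force from my JCL. Is that right?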