THRIVIKRAM (Beginner)
Joined: 03 Oct 2005, Posts: 70, Topics: 34
Posted: Tue Jul 29, 2008 8:56 am
Post subject: Dynamically Create next GDG version if current GDG is full
Hi All,
I have a batch job that runs 24/6. It writes the records it discards out to an error GDG file. The problem is that, since the job runs continuously, the error file fills up (in roughly two days) and the job goes down. Is there a way to avoid this and write to the next generation as soon as the current GDG version is full? I already tried increasing the space allocation in the JCL, but that did not help because even more space is needed. Since this is a new project, we are not sure how many errors the job will create daily.
Thanks!!
taltyman (JCL Forum Moderator)
Joined: 02 Dec 2002, Posts: 310, Topics: 8, Location: Texas
Posted: Tue Jul 29, 2008 9:14 am
Do you need the error file for later processing, or just to see what errors are occurring? Maybe just send it to SYSOUT instead of a dataset, e.g.:
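For illustration, that change is a one-line JCL swap; the DD and dataset names here are only placeholders:
Code:
//* was something like:
//*ERRFILE DD DSN=PROJ.ERROR.GDG(+1),DISP=(NEW,CATLG,DELETE),...
//ERRFILE  DD SYSOUT=*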
THRIVIKRAM (Beginner)
Joined: 03 Oct 2005, Posts: 70, Topics: 34
Posted: Tue Jul 29, 2008 10:46 am
I need the error file for later analysis of what went wrong. I can try writing to SYSOUT, but that would be huge too...
Bill Dennis (Advanced)
Joined: 03 Dec 2002, Posts: 579, Topics: 1, Location: Iowa, USA
Posted: Tue Jul 29, 2008 11:15 am
The program would need to be intelligent enough to CLOSE and reOPEN the file to create a new generation.
A better solution would be to write the GDG file to tape, where size is not a problem. Be sure to specify a volume count in the JCL if the file will span more than 5 volumes.
_________________
Regards,
Bill Dennis
Disclaimer: My comments on this forum are my own and do not represent the opinions or suggestions of any other person or business entity.
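For illustration, such a tape DD might look like the sketch below. The dataset name and unit are guesses, and the LRECL assumes the 11881-byte VB record mentioned later in the thread plus the 4-byte RDW; check your own DCB values:
Code:
//ERRFILE  DD DSN=PROJ.ERROR.GDG(+1),
//            DISP=(NEW,CATLG,DELETE),
//            UNIT=TAPE,
//*           volume count: the default is 5, raise it if needed
//            VOL=(,,,20),
//            DCB=(RECFM=VB,LRECL=11885,BLKSIZE=32760)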
jyoung (Beginner)
Joined: 10 Nov 2005, Posts: 36, Topics: 2, Location: Flint, MI
Posted: Wed Jul 30, 2008 2:01 pm
What if you determined the maximum number of records the file can hold, kept a count of the records written, and when they get close to the maximum, closed the file and opened a new one?
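A minimal COBOL sketch of that counter logic; all names and the threshold are made up, and what to call the next file is exactly the open question raised in the following posts:
Code:
      * hypothetical names; WS-MAX-RECS would be sized from the
      * file's allocation, minus a safety margin
       01  WS-REC-COUNT    PIC 9(9) COMP VALUE 0.
       01  WS-MAX-RECS     PIC 9(9) COMP VALUE 500000.
      * ...
       WRITE-ERROR-REC.
           IF WS-REC-COUNT >= WS-MAX-RECS
      *       close the full file and open a fresh one before
      *       the WRITE can fail for lack of space
              PERFORM SWITCH-TO-NEXT-ERROR-FILE
              MOVE 0 TO WS-REC-COUNT
           END-IF
           WRITE ERROR-RECORD
           ADD 1 TO WS-REC-COUNT.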
dbzTHEdinosauer (Supermod)
Joined: 20 Oct 2006, Posts: 1411, Topics: 26, Location: Germany
Posted: Thu Jul 31, 2008 12:56 am
Quote:
What if you determined the maximum number of records the file can hold, kept a count of the records written, and when they get close to the maximum, closed the file and opened a new one?
Unnecessary to count: you will get an error when trying to write, and that could trigger your logic to close and open a new file.
PROBLEM IS: what is the new (second) file name?
If the program closes and opens a new file, even with (+1), since it is the same step it will overwrite the existing one - won't it?
Stop trying to solve silly problems with sillier solutions. Allocate enough DASD, or run the job 24/1.
Possibly abbreviate your messages.
I would also start solving the problems that all your error records indicate.
_________________
Dick Brenholtz
American living in Varel, Germany
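For illustration, the write-error trigger dbz describes could look like the COBOL sketch below. It assumes the failing WRITE surfaces as file status '34' (boundary violation) rather than an immediate x37 abend; that depends on coding a FILE STATUS clause, so verify the behaviour on your system. All names are invented:
Code:
       SELECT ERR-FILE ASSIGN TO ERRFILE
           FILE STATUS IS WS-ERR-FS.
      * ...
       01  WS-ERR-FS       PIC XX.
      * ...
           WRITE ERROR-RECORD
           IF WS-ERR-FS = '34'
      *       boundary violation: the dataset is out of space;
      *       close and switch - but note the naming problem above
              PERFORM HANDLE-ERROR-FILE-FULL
           END-IF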
expat (Intermediate)
Joined: 01 Mar 2007, Posts: 475, Topics: 9, Location: Welsh Wales
Posted: Thu Jul 31, 2008 1:53 am
Quote:
if the program closes and opens a new file, even with (+1), since it is the same step it will overwrite the existing one - won't it?

Damned good spot. But it would reduce the number of errors very quickly.
The OP would need an allocation routine that uses an absolute generation rather than a relative one.
Tape does seem a sensible option, but it then means the OP has a lot of downloading to do to read the file later.
_________________
If it's true that we are here to help others,
then what exactly are the others here for ?
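A rough sketch of such an allocation routine using BPXWDYN, a callable text interface to SVC 99 dynamic allocation. Every name here is invented, and deriving the next absolute generation number (G0124V00 after G0123V00, say) is left to the program, e.g. via a catalog lookup:
Code:
      * BPXWDYN expects a varying string: halfword length, then text;
      * trailing blanks in the text area are harmless
       01  WS-ALLOC-CMD.
           05  WS-CMD-LEN  PIC S9(4) COMP VALUE +200.
           05  WS-CMD-TXT  PIC X(200).
      * ...
      *    G0124V00 is a made-up absolute generation; the program
      *    must work out the real next number itself
           STRING 'ALLOC DD(ERRFILE) '
                  'DSN(PROJ.ERROR.GDG.G0124V00) '
                  'NEW CATALOG CYL SPACE(100,100) '
                  'RECFM(V,B) LRECL(11885)'
                  DELIMITED BY SIZE INTO WS-CMD-TXT
           CALL 'BPXWDYN' USING WS-ALLOC-CMD
           IF RETURN-CODE NOT = ZERO
              DISPLAY 'DYN ALLOC FAILED, RC=' RETURN-CODE
           END-IF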
jyoung (Beginner)
Joined: 10 Nov 2005, Posts: 36, Topics: 2, Location: Flint, MI
Posted: Thu Jul 31, 2008 12:41 pm
OK, here is another silly idea...
Define the output file as a sequential file with a DISP of OLD. When the job is done, kick off another job to offload the data to somewhere - anywhere (FTP?). When you start up the next time it will overwrite the file, but you will have the data from the previous run stored somewhere else.
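A hypothetical offload job along those lines, using the batch z/OS FTP client; host, credentials, and dataset names are placeholders, and the main job would have to have the file closed while this runs:
Code:
//OFFLOAD  JOB ...
//FTPSTEP  EXEC PGM=FTP,PARM='(EXIT'
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
target.host.example
myuser
mypass
put 'PROJ.ERROR.FILE' errorfile.bak
quit
/*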
dbzTHEdinosauer (Supermod)
Joined: 20 Oct 2006, Posts: 1411, Topics: 26, Location: Germany
Posted: Fri Aug 01, 2008 1:00 am
jyoung,
that was a silly idea.
The OP never did say what the allocation and DCB parms are. He could be writing a 10,000-byte error record and allocating 1 cylinder.
_________________
Dick Brenholtz
American living in Varel, Germany
THRIVIKRAM (Beginner)
Joined: 03 Oct 2005, Posts: 70, Topics: 34
Posted: Fri Aug 01, 2008 7:55 am
Quote:
Stop trying to solve silly problems with sillier solutions. Allocate enough DASD, or run the job 24/1.
Possibly abbreviate your messages.
I would also start solving the problems that all your error records indicate.

Hi dbzTHEdinosauer,
a) Unfortunately it's not a silly problem for me. I contacted the DASD team even before posting here, but they are not willing to give additional DASD to this project since it was not requested during the initial phase. Initially the error file was an 80-byte (FB) record, but due to design changes at a later stage we increased that to 11881 (VB).
b) Running the job 24/1 is for Operations to decide. Generally, critical jobs here run 24/6 and are down for only 30 minutes or so on Sunday.
c) We have already started working on the error logs. Since the support guys are beeped whenever this job goes down with a space problem, I was looking for ways to avoid that. That's the reason for posting here.
Thanks!!
Hi jyoung,
I am not sure I can open the same file to read while another job is writing records into it. Do you want my job to stop while the FTPing is done... or did I miss something here?
Thanks!!
Bill Dennis (Advanced)
Joined: 03 Dec 2002, Posts: 579, Topics: 1, Location: Iowa, USA
Posted: Fri Aug 01, 2008 8:19 am
thrivikram,
what about going to tape? Can you have a tape drive tied up 24/6?
_________________
Regards,
Bill Dennis
Disclaimer: My comments on this forum are my own and do not represent the opinions or suggestions of any other person or business entity.
jsharon1248 (Intermediate)
Joined: 08 Aug 2007, Posts: 291, Topics: 2, Location: Chicago
Posted: Fri Aug 01, 2008 8:40 am
What is the average record size? Please don't guess - run a HISTOGRM to determine the average. How many records? It seems to me the space is available whether or not it was requested: if you keep running the job, the records are being written somewhere. Also, post the actual z/OS error messages.
If you can manage to update the programs generating the messages, put a threshold on each message type. Write the individual messages until you hit the threshold; after that, bypass the write, and only generate a message the next time you hit the threshold, stating that x messages of that type were bypassed. A sketch of that logic follows.
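A COBOL sketch of that thresholding, with made-up names and a counter table indexed by message type:
Code:
       01  WS-THRESHOLD       PIC 9(5) VALUE 1000.
       01  WS-MSG-TYPE        PIC 9(3).
       01  WS-MSG-COUNTS.
           05  WS-MSG-COUNT   PIC 9(9) OCCURS 100 TIMES.
      * ...
       WRITE-ERROR-MSG.
           ADD 1 TO WS-MSG-COUNT (WS-MSG-TYPE)
           EVALUATE TRUE
              WHEN WS-MSG-COUNT (WS-MSG-TYPE) <= WS-THRESHOLD
                 WRITE ERROR-RECORD
              WHEN FUNCTION MOD (WS-MSG-COUNT (WS-MSG-TYPE)
                    WS-THRESHOLD) = 0
      *          one summary record per threshold's worth of
      *          bypassed messages
                 WRITE SUMMARY-RECORD
           END-EVALUATE.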
One other idea: make this output file a rolling log, similar to a DBMS. Create an RRDS VSAM file with x records. When you get to x, reset the counter to 1 and write over the file from the beginning (sketched below). Not a perfect solution, but it might buy you some time.
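Sketched with a COBOL relative file (names invented). In random access mode a WRITE to an occupied slot raises INVALID KEY, so after the first wrap the logic falls through to REWRITE:
Code:
       SELECT LOG-FILE ASSIGN TO LOGFILE
           ORGANIZATION IS RELATIVE
           ACCESS MODE IS RANDOM
           RELATIVE KEY IS WS-SLOT
           FILE STATUS IS WS-LOG-FS.
      * ...
       01  WS-SLOT         PIC 9(9) COMP VALUE 0.
       01  WS-MAX-SLOTS    PIC 9(9) COMP VALUE 100000.
      * ...
           ADD 1 TO WS-SLOT
           IF WS-SLOT > WS-MAX-SLOTS
              MOVE 1 TO WS-SLOT
           END-IF
           WRITE LOG-RECORD
              INVALID KEY
      *          slot already used on an earlier wrap: overwrite
                 REWRITE LOG-RECORD
           END-WRITE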
However, by addressing the space issue, you're avoiding the root cause. Why is this process generating so many error messages? Solve that, and you don't need to worry about space.
Terry_Heinze (Supermod)
Joined: 31 May 2004, Posts: 391, Topics: 4, Location: Richfield, MN, USA
Posted: Mon Aug 04, 2008 11:19 pm
jsharon1248 wrote:
...However, by addressing the space issue, you're avoiding the root cause. Why is this process generating so many error messages? Solve that, and you don't need to worry about space.

Sadly, another case of treating the symptom, not the cause.
_________________
....Terry
jgr (Beginner)
Joined: 22 Sep 2008, Posts: 3, Topics: 0
Posted: Mon Sep 22, 2008 10:45 pm
Is this still an issue, THRIVIKRAM? If so, it might be resolvable via a data management exit and SVC 99, but I'd need more information...
Dibakar (Advanced)
Joined: 02 Dec 2002, Posts: 700, Topics: 63, Location: USA
Posted: Sun Sep 28, 2008 1:05 pm
Quote:
Since the support guys are beeped whenever this job goes down with a space problem, I was looking for ways to avoid that. That's the reason for posting here.

I don't know what steps you take after the job goes down, but maybe you can add another step to the job that submits the job again when there is an abend; that way you will continue with the next generation. Something like this:
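For illustration, a follow-on step like the sketch below copies the job's own JCL to the internal reader if any prior step abended, so the fresh run's (+1) picks up a new generation. The library and member names are placeholders, and this assumes the job's JCL is saved in a library:
Code:
//CHKABEND IF ABEND THEN
//RESUB    EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROJ.JCLLIB(ERRJOB),DISP=SHR
//SYSUT2   DD SYSOUT=(A,INTRDR)
//         ENDIF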