MarcinD Beginner
Joined: 01 Jun 2008 Posts: 3 Topics: 1 Location: Poland
Posted: Mon Jun 02, 2008 3:26 pm Post subject: Other side of question about SB37
Hi all,
Some of my jobs are abending with SB37. I think I know the reason - I have read a lot and have my own theory (it relates to my environment). Anyway, I am not sure how to detect a B37 with reason code 04 for the case:
"The data set already had 16 extents, but required more space"
There are about 50 jobs in my process and a few thousand on the environment. Some of them can run at the same time and almost all of them need disk space. In my case, out of my 50 jobs in one processing run, 3-4 jobs can abend. I can rerun the abended jobs and usually they end OK, but sometimes I need 3 or 4 tries - it looks random. In the next processing run another 2-3 jobs can abend, and they are usually not the same ones - again it looks random.
I suspect the problem is free space on the volume. My question is:
Are there any traces/logs of the attempts made while allocating the individual extents?
Any ideas?
_________________
--
Marcin
kolusu Site Admin

Joined: 26 Nov 2002 Posts: 12378 Topics: 75 Location: San Jose
MarcinD Beginner
Joined: 01 Jun 2008 Posts: 3 Topics: 1 Location: Poland
Posted: Thu Jun 05, 2008 2:05 am Post subject:
Hi Kolusu, thanks for your reply. The problem is that I have never seen logs for the case "The data set already had 16 extents, but required more space". For now I only have the problem on one environment, and I have no easy access to it for testing (a big and complex organization). Could you provide a sample log for that case of SB37?
_________________
--
Marcin
Nic Clouston Advanced
Joined: 01 Feb 2007 Posts: 1075 Topics: 7 Location: At Home
Posted: Thu Jun 05, 2008 2:47 am Post subject:
How are you expected to fix a problem if you cannot see the problem information? A B37 'log' will probably not help you for a 'standard' B37. You may need the reason code, though, if following the guidelines in the link offered by Kolusu does not resolve your problem. The reason code appears in several places in the messages produced by the job on the output spool, and will generally be near the top of the listing in SDSF.
If 16 extents are being used then you probably just need to follow the guidelines - work out how many records there are, work out the space required, and then work out the allocations needed (see the sketch after this post). NOTE: some people are not aware that if there is not enough space for the primary allocation, the system may use some of the 15 secondary extents to obtain the primary allocation.
_________________
Utility and Program control cards are NOT, repeat NOT, JCL.
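As an illustration of that record-count to SPACE arithmetic, here is a hedged sketch (not from the thread - the record count, data set name, unit, and 3390 geometry are all assumptions, and the JOB statement is omitted):

Code:
//* Illustration only: 1,000,000 records x 100 bytes = ~100 MB.
//* With half-track blocking (BLKSIZE=27900) a 3390 track holds
//* about 558 records, so ~1,792 tracks = ~120 cylinders are needed.
//* Give the primary some headroom and size the secondary so the
//* remaining 15 extents can still absorb growth.
//ALLOC    EXEC PGM=IEFBR14
//OUTFILE  DD DSN=MY.OUTPUT.FILE,DISP=(NEW,CATLG,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(130,10),RLSE),
//            DCB=(RECFM=FB,LRECL=100,BLKSIZE=27900)

RLSE gives back any unused space when the data set is closed, so a slightly generous primary is usually cheaper than a B37.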
expat Intermediate

Joined: 01 Mar 2007 Posts: 475 Topics: 9 Location: Welsh Wales
Posted: Sat Jun 07, 2008 4:32 am Post subject:
If your volume is badly fragmented, this may be the cause of the problem. Each allocation amount that you specify in your JCL can be satisfied by up to 5 extents. So, for a really bad example: the primary allocation is met using 5 extents, the first secondary takes another 5 extents, and then the second secondary takes 5 more - you have used up 15 of your 16 extents for only one primary and two secondary allocations.
Might be worth having a chat with your storage bods to see when (or if) they ever perform volume defrags. Unfortunately, with virtual DASD this is a practice often discarded ... in error.
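One way to see how fragmented the free space on a volume actually is, before that chat, is an IEHLIST LISTVTOC of the volume. A sketch, assuming a work volume with serial WRKTMP (the serial, DD names, and unit are placeholders; check the utility documentation for your release):

Code:
//LISTVTOC EXEC PGM=IEHLIST
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=3390,VOL=SER=WRKTMP,DISP=OLD
//SYSIN    DD *
  LISTVTOC FORMAT,VOL=3390=WRKTMP
/*

The formatted listing includes the volume's free-space information; many small free areas and no large contiguous one is exactly the pattern that turns a single SPACE request into several extents.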
MarcinD Beginner
Joined: 01 Jun 2008 Posts: 3 Topics: 1 Location: Poland
Posted: Sun Jun 15, 2008 12:59 pm Post subject:
Expat - interesting point. The jobs crashed only when they were trying to allocate temporary files. Here, temporary files are created on a separate volume. I am not an administrator, but I imagine that temporary files are deleted when the step has ended.
For now I am testing new allocations... I changed from 150,50 to 49,35 and at the moment all jobs work fine. I do not know why, because I did not change the allocation for the other jobs that had also crashed before, and now they all work fine too :S
_________________
--
Marcin
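For reference, the kind of change described above would look roughly like this in the JCL. A sketch only - the thread does not show the actual statements, so the DD name, temporary DSN, unit, and the assumption that the units are cylinders are all mine:

Code:
//* Before (a large primary is hard to satisfy on a fragmented volume):
//*WORK    DD DSN=&&WORK,DISP=(NEW,DELETE,DELETE),
//*           UNIT=SYSDA,SPACE=(CYL,(150,50))
//* After:
//WORK     DD DSN=&&WORK,DISP=(NEW,DELETE,DELETE),
//            UNIT=SYSDA,SPACE=(CYL,(49,35))

This would also fit expat's fragmentation point: a smaller primary request is far easier to satisfy from a broken-up free-space map, so fewer of the 16 extents are burned before the job even starts writing.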
expat Intermediate

Joined: 01 Mar 2007 Posts: 475 Topics: 9 Location: Welsh Wales
Posted: Sat Jun 21, 2008 5:10 am Post subject:
Even a volume used exclusively for temporary datasets can get screwed up too. If the OS drops and needs to be IPL'd, then all the temp datasets that were in the VTOC at the time the OS dropped/locked will still be there.
Yet another discipline so easily overlooked by the under-educated storage people of today: clean your temporary volumes on a regular basis!!!
At one shop I analysed the temp dataset pool and, horror amongst horrors, over 55% of it was used up by old datasets that were over 90 days old - in one case over 2 years old.
_________________
If it's true that we are here to help others,
then what exactly are the others here for?
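For anyone wanting to act on that advice, one traditional way to clear leftover system-generated (temporary) data sets from a work volume is IEHPROGM's SCRATCH statement with the VTOC and SYS options. A hedged sketch - the volume serial WRKTMP and DD names are placeholders, and the exact syntax should be checked against the DFSMSdfp Utilities manual (or use whatever tooling your storage team prefers) before running anything like it:

Code:
//CLEANTMP EXEC PGM=IEHPROGM
//SYSPRINT DD SYSOUT=*
//VOLDD    DD UNIT=3390,VOL=SER=WRKTMP,DISP=OLD
//SYSIN    DD *
  SCRATCH VTOC,VOL=3390=WRKTMP,SYS,PURGE
/*

The SYS keyword is meant to limit the scratch to data sets with system-generated (temporary) names, but a job like this still deserves careful review before it goes anywhere near a production volume.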