Blocking saves space? Usually, but not always. For example, the "optimal" DCB parameters for 80-byte records on a 3390 are usually thought to be RECFM=FB, LRECL=80, BLKSIZE=27920. Well, if BLKSIZE=27920 is good, isn't BLKSIZE=28000 better? Or maybe BLKSIZE=32720? Well, no. BLKSIZE=27920 permits two physical records per track, for 698 80-byte records per track. BLKSIZE=28000 permits only one physical record per track, for 350 80-byte records per track; the remainder of the track is simply not used.
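To make the arithmetic concrete, here is a minimal Python sketch. It assumes the commonly quoted 3390 limits for blocks without keys: at most two blocks per track at 27,998 bytes or less, and at most one block per track above that, up to the 56,664-byte single-block maximum. The function name `records_per_track` is just for illustration, and the exact track-capacity formula is in IBM's device documentation; this sketch is only valid for the half-track-or-larger block sizes discussed above.

```python
# Records per 3390 track for RECFM=FB data sets, for the block sizes
# discussed above (half track or larger, no keys). The two constants are
# the usual published 3390 figures, not a general capacity formula.

HALF_TRACK_LIMIT = 27998   # largest block size that still allows 2 blocks per track
FULL_TRACK_LIMIT = 56664   # largest single block that fits on one track

def records_per_track(blksize: int, lrecl: int = 80) -> int:
    """Logical records per track; valid only for blksize >= half track."""
    if blksize > FULL_TRACK_LIMIT:
        raise ValueError("block does not fit on a 3390 track")
    blocks_per_track = 2 if blksize <= HALF_TRACK_LIMIT else 1
    return blocks_per_track * (blksize // lrecl)

for blksize in (27920, 28000, 32720):
    print(blksize, records_per_track(blksize))
# 27920 -> 698, 28000 -> 350, 32720 -> 409 records per track
```

Note that even 32720, the largest multiple of 80 below the 32760 BLKSIZE limit, yields only 409 records per track, still well short of the 698 that half-track blocking gives.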
What about a load module data set with RECFM=U, BLKSIZE=32760? It does not look good! Actually, it's OK, sort of, for several reasons.
- The Binder, for reasons I won't document here, rarely writes 32760-byte blocks. Even when it does write a very large block, it never writes two very large blocks in a row: a large block is always followed by at least one, and usually several, very short blocks.
- Just to confuse the issue further, when the Binder is ready to write a very large block, it first determines whether that block will fit in the space remaining on the current output track. If not, it writes the largest block that will actually fit on the track, and the next, shorter, record becomes the first record on the next track. This sounds good, and it is good, until the data set is copied: a record structure tailored to one device's track geometry may not work well on the new device. Then again, a long, short, ..., long, short, ... pattern may not work well on any device! (A rough model of this track-fit behavior is sketched below.)
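Here is a rough Python model of the track-fit behavior just described and of what can happen when the resulting blocks are copied. It is not the Binder's actual logic: it treats a track as a plain byte budget, ignores inter-record gaps and real CKD geometry, and the function names (`write_track_fit`, `copy_as_is`) and track capacities are invented for this illustration only.

```python
# Toy model only: a track is a simple byte budget, with no inter-record
# gaps or real device geometry, and the capacities used below are
# arbitrary round numbers. This is not the Binder's actual code.

def write_track_fit(block_sizes, track_capacity):
    """Write blocks as described above: if a block will not fit in the
    space left on the current track, write the largest piece that does
    fit and start the remainder as a short block on the next track."""
    tracks, space_left = [[]], track_capacity
    for size in block_sizes:
        while size > 0:
            if space_left == 0:            # current track is full
                tracks.append([])
                space_left = track_capacity
            piece = min(size, space_left)  # largest block that fits
            tracks[-1].append(piece)
            space_left -= piece
            size -= piece
    return tracks

def copy_as_is(block_sizes, track_capacity):
    """Copy already-written blocks without changing their sizes: a block
    that no longer fits starts a new track, and the leftover space on the
    previous track is simply unused."""
    tracks, space_left, unused = [[]], track_capacity, 0
    for size in block_sizes:
        if size > space_left:              # does not fit: start a new track
            unused += space_left
            tracks.append([])
            space_left = track_capacity
        tracks[-1].append(size)
        space_left -= size
    return tracks, unused

# Blocks written with track fit pack the original tracks completely...
original = write_track_fit([32000, 8000, 32000, 8000, 32000], 40000)
print(original)                  # [[32000, 8000], [32000, 8000], [32000]]

# ...but copying the same block sizes to tracks of a different size
# leaves space the track-fit logic can no longer reclaim.
flat = [blk for trk in original for blk in trk]
print(copy_as_is(flat, 50000))   # same long/short pattern, 20000 bytes unused
```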