Re: Utility to write huge files instantly???



On Jan 24, 6:52 pm, Bill Todd <billt...@xxxxxxxxxxxxx> wrote:
robertwess...@xxxxxxxxx wrote:
Well, the idea was that he'd be able to allocate space to a file
without having the OS zero all that space.

Yes - and there's no intrinsic reason why SetFileValidData should have
any effect on that:  the normal byte-granularity end-of-data marker
should suffice regardless of where the allocation ends, unless it's
maintained only as a small integer offset within the last file cluster
rather than as a 64-bit integer.


My interest in testing this behavior has limits, and I didn't actually
try it, but my understanding is that SetFileValidData will cause the
file space that's been allocated, but not yet written into or
initialized, to be marked as valid. IOW, bypassing the zeroing. I'd
try it, but am insufficiently motivated to jump through the required
security hoops.
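
For concreteness, the call sequence being discussed would look roughly like
the sketch below (untested; it assumes the process has already been granted
and enabled the SE_MANAGE_VOLUME_NAME privilege, which is the security hoop
in question, and the file name is just a placeholder):

/* Preallocate a big file and mark the whole range as valid so NTFS
 * skips the zero-fill.  Error handling mostly omitted. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("bigfile.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER size;
    size.QuadPart = 10LL * 1024 * 1024 * 1024;    /* 10 GB */

    /* Allocate the space: move the file pointer and set EOF there. */
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    /* Declare the whole range "already initialized" so it won't be zeroed.
     * Whatever stale data happens to be in those clusters becomes readable,
     * which is why the privilege is required in the first place. */
    if (!SetFileValidData(h, size.QuadPart))
        printf("SetFileValidData failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}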


allocate space without zeroing it (but will read it as zeros), so long
as the volume is NTFS.  That makes a certain sense, since there's no
place in FAT to store such information.

A quick search of mike's postings to other newsgroups seems to indicate
that his problem (zeroing the space allocated) occurred at Close time,
not at allocation time.  He says that SetFileValidData (to a small
value) before closing the file did alleviate that, but not whether it
also released the unused space at Close (which would be consistent with,
say, maintaining ValidDataLength only in RAM rather than on disk).



It does appear to zero pages as needed if you write actual data into the
file.  So while the allocation is very quick, seeking to the end of
the file and writing a byte gets the entire file zeroed.

OTOH, you *can* read all the zeros without delay.

Hmmm.  If you can do that *without* using SetFileValidData, then
apparently SetEndOfFile is moving the end-of-data mark rather than just
allocating space - unlike (I think, though I haven't tried it) the case
with using the NtCreateFile approach with an AllocationSize.

And in that case using SetFileValidData to move the end-of-data mark
back to the start of the file would avoid the zeroing on Close that mike
saw (without necessarily deallocating the space).
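
For reference, the NtCreateFile approach being contrasted here would look
something like this untested sketch (the FILE_* constant values and the
\??\ path form are assumptions based on the usual native-API conventions,
and it needs to be linked against ntdll):

#include <windows.h>
#include <winternl.h>
#include <stdio.h>

#pragma comment(lib, "ntdll.lib")

#ifndef FILE_CREATE
#define FILE_CREATE                  0x00000002
#endif
#ifndef FILE_NON_DIRECTORY_FILE
#define FILE_NON_DIRECTORY_FILE      0x00000040
#endif
#ifndef FILE_SYNCHRONOUS_IO_NONALERT
#define FILE_SYNCHRONOUS_IO_NONALERT 0x00000020
#endif

int main(void)
{
    UNICODE_STRING name;
    RtlInitUnicodeString(&name, L"\\??\\C:\\bigfile.bin");  /* NT-style path */

    OBJECT_ATTRIBUTES oa = { sizeof(oa) };   /* remaining fields zeroed */
    oa.ObjectName = &name;

    LARGE_INTEGER alloc;
    alloc.QuadPart = 10LL * 1024 * 1024 * 1024;   /* ask for 10 GB up front */

    IO_STATUS_BLOCK iosb;
    HANDLE h;
    NTSTATUS st = NtCreateFile(&h, FILE_GENERIC_WRITE | SYNCHRONIZE, &oa,
                               &iosb,
                               &alloc,                    /* AllocationSize */
                               FILE_ATTRIBUTE_NORMAL, 0,  /* no sharing */
                               FILE_CREATE,
                               FILE_NON_DIRECTORY_FILE |
                               FILE_SYNCHRONOUS_IO_NONALERT,
                               NULL, 0);
    if (st < 0) { printf("NtCreateFile failed: 0x%lx\n", (unsigned long)st); return 1; }

    /* If the speculation above is right, clusters get reserved here but the
     * end-of-data mark has not moved, so nothing ever needs zeroing. */
    CloseHandle(h);
    return 0;
}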


It's definitely allocating space. The free space on the memory stick
goes down, and you appear to get a nice big contiguous allocation.

I did a little checking, and the mechanism is actually pretty
straightforward. In NTFS, a file is defined by a collection of "attributes"
in an MFT record. File contents are stored in $Data attributes, of
which there can be more than one (in fact, one is needed for each
contiguous run of allocated disk space). Attributes come in
two flavors - resident and non-resident. Resident attributes are
stored completely within the MFT, and are interesting mainly for small
files (so you'd likely be able to store the data from a 100 byte file
completely within the MFT). A non-resident attribute is a pointer to
the run of blocks on the disk where the attribute is actually stored.
A non-resident attribute includes (among other things): Allocated
size, Actual Size, and Initialized Size.
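
As a rough picture of what a non-resident attribute header carries (the
field names here are mine, not official, but the layout follows the commonly
published NTFS on-disk documentation):

#include <stdint.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t type;              /* attribute type, e.g. 0x80 = $Data        */
    uint32_t length;            /* length of this attribute record          */
    uint8_t  non_resident;      /* 1 = non-resident: data lives out in runs */
    uint8_t  name_length;
    uint16_t name_offset;
    uint16_t flags;
    uint16_t attribute_id;
    uint64_t starting_vcn;      /* first virtual cluster covered            */
    uint64_t last_vcn;          /* last virtual cluster covered             */
    uint16_t data_runs_offset;  /* where the run list (the extents) starts  */
    uint16_t compression_unit;
    uint32_t padding;
    uint64_t allocated_size;    /* bytes of clusters reserved on disk       */
    uint64_t real_size;         /* "actual" size, i.e. the logical EOF      */
    uint64_t initialized_size;  /* bytes actually written (or zeroed) so far */
} NONRESIDENT_ATTR_HEADER;
#pragma pack(pop)

The point is just that Allocated, Actual and Initialized sizes are three
separate 64-bit counters kept per $Data attribute.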

Initialized Size gets set to zero when you allocate space with the
SetFilePointer technique, and is increased as needed (by actually
zeroing the space) to cover any uninitialized space earlier in the run
when data gets written into it. My understanding is that in
NTFS, SetFileValidData just bumps the Initialized Size in the affected
$Data attributes as needed (assuming the space is already allocated).
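
To make that concrete, here's an untested sketch of how those three sizes
would move for a 1 GB file under this model (the figures in the comments are
what the model predicts, not measurements, and the last step again assumes
the volume-maintenance privilege):

#include <windows.h>

int main(void)
{
    HANDLE h = CreateFileA("vdl_demo.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    LARGE_INTEGER gb1;
    gb1.QuadPart = 1LL << 30;                 /* 1 GB */

    SetFilePointerEx(h, gb1, NULL, FILE_BEGIN);
    SetEndOfFile(h);
    /* Allocated Size = 1 GB, Actual Size = 1 GB, Initialized Size = 0 */

    LARGE_INTEGER mid;
    mid.QuadPart = 600LL << 20;               /* offset 600 MB */
    BYTE b = 1;
    DWORD n;
    SetFilePointerEx(h, mid, NULL, FILE_BEGIN);
    WriteFile(h, &b, 1, &n, NULL);
    /* NTFS has to zero-fill everything below 600 MB first, so this one-byte
       write stalls; afterwards Initialized Size = 600 MB + 1 byte */

    SetFileValidData(h, gb1.QuadPart);
    /* Initialized Size jumps to 1 GB with no physical writes at all */

    CloseHandle(h);
    return 0;
}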