size (in bytes) of characters after binary streaming operation

the_net_2.0

All,

I have a program that streams Access data from tables into binary files, using this code for I/O:

Code:
Open "c:\file.db" For Binary Access Write As #FileNum

Most of the I/O code will do something like this:

Code:
Dim Data As Integer
Data = 5
Put #FileNum, , Data

My question is: Access stores integer variables in memory as 2-byte signed integers. So how is this going into the binary file? Is it going in as 2 bytes?
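
To make the question concrete: if I did something like this (a hypothetical snippet), would each Put write exactly the in-memory size of the variable, i.e. 2 bytes for the Integer and 4 for the Long?

Code:
Dim i As Integer, l As Long
i = 5
l = 5
Put #FileNum, , i    ' 2 bytes?
Put #FileNum, , l    ' 4 bytes?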

I am using the hex editor UltraEdit to view the binary files after they're created, but I can't tell (mostly because I'm new to using it) how big these values are when they come in. UltraEdit detects file changes automatically, so I can stream data in and refresh the file after every change to watch the values populate the binary file one by one. But the question still remains: how big, in bytes, is each value when it's transferred from Access memory into the file?

The reason I need to know is simply that we have customer specs that tell us how big each piece of binary data is supposed to be when they receive the files and load them into a larger system.

Thanks!
 
If the files are essentially a list of binary sequences of a certain size, then I'd just create a table with BINARY(N), with N being the size of the binary, and insert into it. Recall that Byte() and String data types can be used for binary manipulations.
 

None of that really answered the question, sir.

I don't have the power to change the way we do things, so suggestion 1 is out. With 2, are you saying that casting data with CByte() changes the size, and that the data is therefore transferred to the .db file that way? That's really what I want to know.
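
In other words, would something like this end up writing just a single byte to the file instead of two?

Code:
Dim b As Byte
b = CByte(Data)      ' Data is the 2-byte Integer from before
Put #FileNum, , b    ' does this now write only 1 byte?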
 
If your computer is running on an Intel processor, the file will be written little-endian.

This means that the least significant byte goes into the file first and the most significant byte follows. In the example you gave of 5, the file would contain 05 00.

If the processor were big-endian then the bytes would be swapped, though I can't test that here.

In either case, a 16-bit integer only consumes 16 bits, and therefore 2 bytes.
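
If you'd rather confirm it from code than from the hex editor, Len on a (non-string) numeric variable returns the number of bytes Put will write for it, and LOF returns the current length of the open file. A quick sketch (the file name is just an example):

Code:
Sub CheckSize()
    Dim Data As Integer
    Dim FileNum As Integer

    FileNum = FreeFile
    Open "c:\test.bin" For Binary Access Write As #FileNum

    Data = 5
    Debug.Print Len(Data)     ' 2 - the bytes Put will write for an Integer
    Put #FileNum, , Data
    Debug.Print LOF(FileNum)  ' 2 - the actual file length after the Put

    Close #FileNum
End Sub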

Chris.
 
>>In the example you gave of 5, the file would contain 05 00.<<

Chris,

that's exactly what UltraEdit shows. The reason for this post is simply that we're most likely moving to VB.NET, and of course the memory storage changes for integers there (a VB.NET Integer is 4 bytes, not 2), and maybe for other data types too.

I guess my curiosity out of what you just said would be this:

Would the maximum integer value in Access (32767) transfer into my binary file and read 99 99??

On a side note, our customer spec sheets tell us whether the binaries are to be little-endian or big-endian, so that is not a concern. The code already has that covered and transfers data the correct way. :)
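
For anyone curious, the swap itself only takes a couple of byte operations. A rough sketch of one way to write big-endian 16-bit values from VBA (the function name is mine, just for illustration):

Code:
Function SwapBytes(ByVal v As Integer) As Integer
    ' Swap the low and high bytes of a 16-bit Integer so that a
    ' little-endian Put produces big-endian output.
    Dim Lo As Long, Hi As Long, r As Long
    Lo = v And &HFF&                  ' low byte
    Hi = (v And &HFF00&) \ &H100&     ' high byte (Long math avoids sign trouble)
    r = Lo * &H100& + Hi              ' reassemble with the bytes swapped
    If r > 32767 Then r = r - 65536   ' fold back into the signed Integer range
    SwapBytes = CInt(r)
End Function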
 
>>Would the maximum integer value in Access (32767) transfer into my binary file and read *99 99*?? <<
Why on Earth would it read as 99 99?

For 32767 little-endian it would be:

FF 7F hex
255 127 decimal
11111111 01111111 binary

It just depends on how you view it.

But you should be able to test this for yourself…
Code:
Sub TestIt()
    Dim Data As Integer
    Dim FileNum As Integer
    
    FileNum = FreeFile                ' grab a free file handle
    
    Open "c:\file.db" For Binary Access Write As #FileNum
    
    Data = 32767                      ' &H7FFF, the maximum VBA Integer
    Put #FileNum, , Data              ' writes exactly 2 bytes: FF 7F (little-endian)

    Close #FileNum

End Sub

Now, because the file is opened For Binary, that's what's in the file: binary.
Moving to VB.NET should have absolutely nothing to do with it, unless VB.NET can't write or read binary files. And if VB.NET can't write or read binary files, I would suggest not moving to it.
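
And if you read the two bytes back individually, you can see the order for yourself. Something along these lines:

Code:
Sub ReadItBack()
    Dim Lo As Byte, Hi As Byte
    Dim FileNum As Integer

    FileNum = FreeFile
    Open "c:\file.db" For Binary Access Read As #FileNum

    Get #FileNum, , Lo   ' first byte in the file
    Get #FileNum, , Hi   ' second byte

    Close #FileNum

    Debug.Print Hex$(Lo); " "; Hex$(Hi)   ' FF 7F on a little-endian machine
End Sub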


Chris.
 
Sorry about that! Yes, I meant FF FF. Whoops! Thinking in base 10 instead of base 16 here! :)
 
Well then, why on Earth would it read FFFF?

FFFF has all bits set and therefore would be -1, not 32767, big-endian or little-endian.

Now if you think about that, you can see why True and False are the same on both big-endian and little-endian processors:

00000000 00000000 = False (swap bytes and it’s the same)
11111111 11111111 = True (swap bytes and it’s the same)

Take the NOT of either and it produces the other, big-endian or little-endian.
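
You can see those bit patterns from VBA too. A quick sketch using Hex$ (masking with &HFFFF& so the Integer prints as all 16 bits):

Code:
Sub TrueFalseBits()
    Dim i As Integer
    i = True                               ' True is stored as -1
    Debug.Print Hex$(i And &HFFFF&)        ' FFFF - all bits set
    Debug.Print Hex$((Not i) And &HFFFF&)  ' 0    - all bits clear (False)
End Sub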

So at least Intel, DEC and Motorola agreed on something. ;)
 
