The line:
unsigned int j = *f;
simply assigns the first element of f to the integer j. It is equivalent to:
unsigned int j = f[0];
and since f[0] is 0, it is really just assigning 0 to the integer:
unsigned int j = 0;
You will have to convert the elements of f.
Reinterpreting the array through a pointer to a different type causes undefined behavior: it violates the strict aliasing rules, and the pointer may not even be suitably aligned. The following example shows such usage, and it is always incorrect:
unsigned int j = *( unsigned int* )f;
Undefined behavior may produce any result, even an apparently correct one. Even if such code appears to produce correct results the first time you run it, that is not proof that the program is defined. The program is still undefined, and may produce incorrect results at any time.
There is no such thing as "technically undefined" behavior or code that "generally works"; the program is either defined or it is not. Relying on such claims is dangerous and irresponsible.
Luckily we don't have to rely on such bad code.
All you need to do is choose the representation of the integer that will be stored in f, and then convert it. It appears you want to store it in big-endian order, with 8 bits per element. This doesn't mean the machine itself must be big-endian, only the representation of the integer you're encoding in f. The machine's own integer representation doesn't matter, which makes this method completely portable.
This means the most significant byte will appear first. The most significant byte is f[0], and the least significant byte is f[3].
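For example, take the value 0x01020304 (a value picked purely for illustration). Its big-endian encoding places the most significant byte first:
unsigned char f[4] = { 0x01 , 0x02 , 0x03 , 0x04 }; /* f[0] holds the most significant byte, f[3] the least */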
We will need an integer type capable of storing at least 32 bits, and unsigned long is guaranteed to be at least that wide.
Type char is for storing characters, not integers; its signedness is implementation-defined. An unsigned integer type like unsigned char should be used for the byte array.
Then all that remains is to convert the big-endian representation stored in f:
unsigned char encoded[4] = { 0 , 0 , 0 , 1 };
unsigned long value = 0;
/* Shift each byte into place, most significant byte first. */
value = value | ( ( ( unsigned long )encoded[0] & 0xFF ) << 24 );
value = value | ( ( ( unsigned long )encoded[1] & 0xFF ) << 16 );
value = value | ( ( ( unsigned long )encoded[2] & 0xFF ) << 8 );
value = value | ( ( ( unsigned long )encoded[3] & 0xFF ) << 0 );
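After these four statements, value holds 1, matching the encoded bytes { 0, 0, 0, 1 }.
If you ever need to go the other direction, the same idea works in reverse: shift the most significant byte down first and mask off everything but the low 8 bits. Here is a minimal sketch (the variable names are mine, not from the question, and it assumes the value fits in 32 bits):
unsigned long value = 1;
unsigned char encoded[4];
encoded[0] = ( unsigned char )( ( value >> 24 ) & 0xFF ); /* most significant byte */
encoded[1] = ( unsigned char )( ( value >> 16 ) & 0xFF );
encoded[2] = ( unsigned char )( ( value >> 8 ) & 0xFF );
encoded[3] = ( unsigned char )( value & 0xFF ); /* least significant byte */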