Discussion:
problem with -ffloat-store
principiante
2004-11-23 19:47:22 UTC
Permalink
I have posted this msg on gnu.gcc, but I have seen that similar
questions are answered here, so I post it again. Sorry for that.

Hi,

I am studying the following classical example to show the pitfalls
existing with extended-precision systems (like the x86 family):

#include <stdio.h>

int main(void) {

float a,b,c;

a = 1;
b = 3;

c = a/b;

if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}

return 0;
}

When compiled with gcc (no options), the program outputs "Not Equal";
this is because in the "if" test the expression a/b is computed with
extra precision and the comparison is made in the processor registers.

So far so good. Now, from the man page:

-ffloat-store
Do not store floating point variables in registers, and inhibit
other options that might change whether a floating point value is
taken from a register or memory.

Unfortunately, -ffloat-store has no effect on the result. I expected
"Equal", but I still got "Not Equal".

What am I doing wrong?

# gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --with-system-zlib --enable-__cxa_atexit
--host=i386-redhat-linux
Thread model: posix
gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

Thanks,

8-P

ps: just for the sake of completeness, when compiled with "gcc -O9" the
program above outputs "Equal".
Michael Mair
2004-11-23 20:20:22 UTC
Permalink
Post by principiante
I have posted this msg on gnu.gcc, but I have seen that similar
questions are answered here, so I post it again. Sorry for that.
Hi,
I am studying the following classical example to show the pitfalls
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}
When compiled with gcc (no options), the programs outputs "Not Equal";
this is because in the "if" test the expression a/b is computed with
extra precision and the comparison is made in the processor registers.
-ffloat-store
Do not store floating point variables in registers, and inhibit
other options that might change whether a floating point value is
taken from a register or memory.
Unfortunately, -ffloat-store has no effect on the result. I expected
"Equal", but I still got "Not Equal".
What am I doing wrong?
Not reading a good C book... :-)
a/b is evaluated in at least double precision.
If you cast the result, (float)(a/b), there will not be any
problems even _without_ -ffloat-store.
The _interesting_ case is when using doubles. Then gcc ignores
a (double) cast because it usually is held that the excess
precision does not hurt and getting rid of it costs too much time.
This is clearly against the requirements of either C standard.
And there, -ffloat-store and other architecture-specific options
enforce conforming behaviour.
BTW: Storing values in a variable and forcing the variable through
*((volatile double *)&variable) gives you _true_ doubles even if the
variable itself is fraught with excess precision.


Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
principiante
2004-11-23 20:34:54 UTC
Permalink
Post by Michael Mair
Post by principiante
I have posted this msg on gnu.gcc, but I have seen that similar
questions are answered here, so I post it again. Sorry for that.
Hi,
I am studying the following classical example to show the pitfalls
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}
When compiled with gcc (no options), the programs outputs "Not Equal";
this is because in the "if" test the expression a/b is computed with
extra precision and the comparison is made in the processor registers.
-ffloat-store
Do not store floating point variables in registers, and inhibit
other options that might change whether a floating point value is
taken from a register or memory.
Unfortunately, -ffloat-store has no effect on the result. I expected
"Equal", but I still got "Not Equal".
What am I doing wrong?
Not reading a good C book... :-)
a/b is evaluated in at least double precision.
If you cast the result, (float)(a/b), there will not be any
problems even _without_ -ffloat-store.
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == (float)a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}

returns "Not Equal" with or without -ffloat-store.
Post by Michael Mair
The _interesting_ case is when using doubles. Then gcc ignores
a (double) cast because it usually is held that the excess
precision does not hurt and getting rid of it costs too much time.
This is clearly against the requirements of either C standard.
And there, -ffloat-store and other architecture-specific options
enforce conforming behaviour.
main() {
double a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}

returns "Not Equal" with or without -ffloat-store (even with the
(double) cast).

Instead,

main() {
long double a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}

returns "Equal". So it seems that in the other cases a/b in the if is
computed with extra precision, regardless of the presence of -ffloat-store.

???????

Thanks, Ale
Michael Mair
2004-11-24 00:04:38 UTC
Permalink
[snip]
Post by principiante
Post by Michael Mair
Post by principiante
I am studying the following classical example to show the pitfalls
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == a/b) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}
When compiled with gcc (no options), the programs outputs "Not Equal";
this is because in the "if" test the expression a/b is computed with
extra precision and the comparison is made in the processor registers.
-ffloat-store
Do not store floating point variables in registers, and inhibit
other options that might change whether a floating point value is
taken from a register or memory.
Unfortunately, -ffloat-store has no effect on the result. I expected
"Equal", but I still got "Not Equal".
What am I doing wrong?
Not reading a good C book... :-)
a/b is evaluated in at least double precision.
If you cast the result, (float)(a/b), there will not be any
problems even _without_ -ffloat-store.
Have to correct that: Depending on optimisation(s).
Post by principiante
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == (float)a/b) {
c == (float)(a/b), if any.
Mind the operator precedence. What do you think I put the
parentheses in for?
Post by principiante
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}
returns "Not Equal" with or without -ffloat-store.
Post by Michael Mair
The _interesting_ case is when using doubles. Then gcc ignores
a (double) cast because it usually is held that the excess
precision does not hurt and getting rid of it costs too much time.
This is clearly against the requirements of either C standard.
And there, -ffloat-store and other architecture-specific options
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by principiante
Post by Michael Mair
enforce conforming behaviour.
[snip: program in double]
Post by principiante
returns "Not Equal" with or without -ffloat-store (even with the
(double) cast).
Instead,
[snip: program in long double]
Post by principiante
returns "Equal". So it seems that in the other cases a/b in the if is
computed with extra precision, regardless the presence of -ffloat-store.
Mind the marked portion. I do not know what architecture you are on
but if it is a x86 with SSE(2), using -ffloat-store -mfpmath=sse -msse2
will probably do away with your problems. See the gcc manual for
details.
For me, depending on the optimisations, floats work or do not, whereas
doubles work nearly never without explicit options/enforcing an
evaluation in the right type. Mind, however, that

#include <stdio.h>
#include <float.h>

#define ENFORCE_FLOAT(exp, tmp) ((tmp)=(exp),*(volatile float *)&(tmp))

int main (void)
{
float a,b,c,d;
a = 1.0F;
b = 3.0F;
c = a/b;
if ( c == ENFORCE_FLOAT(a/b,d) ) { // <---------
printf("Equal\n");
} else {
printf("Not Equal\n");
}

return 0;
}

will always work, whereas

if ( c == (float)(d=a/b) ) { // <---------

may need the compiler options, float or double notwithstanding.


Nonetheless, your original program and your "corrections" were both
not apt to expose the excess precision weaknesses as there were
still C issues.


BTW: If you have inline functions (either due to using GNU extensions
or using the C99 mode), rather use them instead of the macro:

inline float EnforceFloat (float expression)
{
float tmp = expression;
return *((volatile float *)&tmp);
}

This can be optimised very well and rids you of the need for a
temporary value.


Finally: You can set the precision of floating point evaluations
to 64 instead of 80 bits if necessary.


Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
principiante
2004-11-24 07:26:58 UTC
Permalink
(gnam)
Post by Michael Mair
Post by principiante
Post by Michael Mair
Not reading a good C book... :-)
a/b is evaluated in at least double precision.
If you cast the result, (float)(a/b), there will not be any
problems even _without_ -ffloat-store.
Have to correct that: Depending on optimisation(s).
Post by principiante
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == (float)a/b) {
c == (float)(a/b), if any.
Mind the operator precedence. What do you think I put the
parentheses in for?
Post by principiante
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}
main() {
float a,b,c;
a = 1;
b = 3;
c = a/b;
if (c == (float)(a/b)) {
printf("Equal\n");
} else {
printf("Not Equal\n");
}
}

still returns "Not Equal" (regardless of -ffloat-store).

Later I will reply about the other cases.

Ciao Ale

ps: my architecture is

# gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
--infodir=/usr/share/info --enable-shared --enable-threads=posix
--disable-checking --with-system-zlib --enable-__cxa_atexit
--host=i386-redhat-linux
Thread model: posix
gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)
principiante
2004-11-24 12:18:03 UTC
Permalink
Michael Mair wrote:
(gnam)
Post by Michael Mair
Mind the marked portion. I do not know what architecture you are on
but if it is a x86 with SSE(2), using -ffloat-store -mfpmath=sse -msse2
will probably do away with your problems. See the gcc manual for
details.
In fact it works, thanks.
Post by Michael Mair
For me, depending on the optimisations, floats work or do not, whereas
doubles work nearly never without explicit options/enforcing an
evaluation in the right type. Mind, however, that
#include <stdio.h>
#include <float.h>
#define ENFORCE_FLOAT(exp, tmp) ((tmp)=(exp),*(volatile float *)&(tmp))
int main (void)
{
float a,b,c,d;
a = 1.0F;
b = 3.0F;
c = a/b;
if ( c == ENFORCE_FLOAT(a/b,d) ) { // <---------
printf("Equal\n");
} else {
printf("Not Equal\n");
}
return 0;
}
will always work, whereas
if ( c == (float)(d=a/b) ) { // <---------
may need the compiler options, float or double notwithstanding.
Nonetheless, your original program and your "corrections" were both
not apt to expose the excess precision weaknesses as there were
still C issues.
Maybe there is a misunderstanding here. I think that the excess
precision is perfectly fine; the weakness lies in the interaction
between C and the excess-precision floating point engine.

Apparently, there is no way to ensure portability; if run on
a processor without excess precision, my example with the
double type would give the expected result.

See for instance

http://cch.loria.fr/documentation/IEEE754/ACM/addendum.html

which is an addendum to the well-known paper What Every Computer
Scientist Should Know About Floating-Point Arithmetic, by David
Goldberg.

To ensure portability, it seems that the only solution is to force
the compiler to store each result of a single computation in
the right precision memory slot.

Thanks for your attention,

Ale
Michael Mair
2004-11-24 13:21:07 UTC
Permalink
[snip: Demonstration of floating point excess precision "problem",
gcc options to get rid of it]
Post by principiante
Post by Michael Mair
Nonetheless, your original program and your "corrections" were both
not apt to expose the excess precision weaknesses as there were
still C issues.
Maybe there is a misunderstanding here. I think that the excess
precision is perfectly fine, the weakness stands in the interaction
between C and the excess-precision floating point engine.
Well, the problem is not exactly a C problem as the C semantics
work fine; the problem is that gcc breaks the C semantics on
purpose to cope with a broken architecture.
C89 does not tell you anything at all about floats and doubles
apart from <FLT/DBL/LDBL>_<EPSILON/MAX/MIN> (and the derived
stuff) and we only demand 6/10/10 digits precision and exponents
ranging from 10^-37 to 10^37.
This is completely acceptable. Apart from that, the trick with
the volatile-pointer evaluation works portably.
In C99, you can find out the rounding mode, the evaluation mode
and -- if available (with gcc never) -- can be sure that IEC 60559
(IEEE 754) support is available if __STDC_IEC_559__ is defined,
have access to the floating point environment and so on.
Post by principiante
Apparently, there is no way to ensure portability; if run on
a processor without excess precision, my example with the
double type would give the expected result.
Yes. Which is why you enforce this behaviour by either making
your whole environment slow when compiling or enforcing the
correct precision with a cast (or the working replacement)
where it is necessary.
Portability is mostly not possible across different architectures
if you do not use a language enforcing the use of IEEE floats
and doubles.
Post by principiante
See for instance
http://cch.loria.fr/documentation/IEEE754/ACM/addendum.html
which is an Addendum (by David Goldberg) of the well-known
paper What Every Computer Scientist Should Know About Floating-Point
Arithmetic, by Kahan.
I rather like the version on the Sun server where they have
another addendum suggesting to take up the C99 idea of defining
integer types like int8_t, int16_t, ..., int_least8_t, ... int_fast8_t,
... and so on and use it on floating point variables.
Post by principiante
To ensure portability, it seems that the only solution is to force
the compiler to store each result of a single computation in
the right precision memory slot.
Once again: Portability is usually not possible. Even if you have
the same base and the same mantissa/exponent bits, the architecture
may round differently for representation or operations.
Apart from that, C demands that all operations with floats are
evaluated in at least double precision which can give you problems with
IEEE 754 conformance.
Usually, all people pretend that the involved floating point numbers
were IEEE 754 conforming and that there are no differences in
rounding.

BTW: The respective parts of the GSL documentation also are good to
read to get started with floating point arithmetic problems and
possible solutions.


-Michael
--
E-Mail: Mine is a gmx dot de address.
principiante
2004-11-24 13:48:38 UTC
Permalink
Post by Michael Mair
[snip: Demonstration of floating point excess precision "problem",
gcc options to get rid of it]
Post by principiante
Post by Michael Mair
Nonetheless, your original program and your "corrections" were both
not apt to expose the excess precision weaknesses as there were
still C issues.
Maybe there is a misunderstanding here. I think that the excess
precision is perfectly fine, the weakness stands in the interaction
between C and the excess-precision floating point engine.
Well, the problem is not exactly a C problem as the C semantics
work fine; the problem is that gcc breaks the C semantics on
purpose to cope with a broken architecture.
C89 does not tell you anything at all about floats and doubles
apart from <FLT/DBL/LDBL>_<EPSILON/MAX/MIN> (and the derived
stuff) and we only demand 6/10/10 digits precision and exponents
ranging from 10^-37 to 10^37.
This is completely acceptable. Apart from that, the trick with
the volatile-pointer evaluation works portably.
In C99, you can find out the rounding mode, the evaluation mode
and -- if available (with gcc never) -- can be sure that IEC 60559
(IEEE 754) support is available if __STDC_IEC_559__ is defined,
have access to the floating point environment and so on.
Post by principiante
Apparently, there is no way to ensure portability; if run on
a processor without excess precision, my example with the
double type would give the expected result.
Yes. Which is why you enforce this behaviour by either making
your whole environment slow when compiling or enforcing the
correct precision with a cast (or the working replacement)
where it is necessary.
In fact the cast doesn't work. This was quite a surprise for me.
"if (c == a/b)" and "if (c == (float)(a/b))" give exactly
the same code.
Post by Michael Mair
Portability is mostly not possible across different architectures
if you do not use a language enforcing the use of IEEE floats
and doubles.
Post by principiante
See for instance
http://cch.loria.fr/documentation/IEEE754/ACM/addendum.html
which is an Addendum (by David Goldberg) of the well-known
paper What Every Computer Scientist Should Know About Floating-Point
Arithmetic, by Kahan.
I rather like the version on the Sun server where they have
another addendum suggesting to take up the C99 idea of defining
integer types like int8_t, int16_t, ..., int_least8_t, ... int_fast8_t,
... and so on and use it on floating point variables.
It's the same addendum, indeed.
Post by Michael Mair
Post by principiante
To ensure portability, it seems that the only solution is to force
the compiler to store each result of a single computation in
the right precision memory slot.
Once again: Portability is usually not possible. Even if you have
the same base and the same mantissa/exponent bits, the architecture
may round differently for representation or operations.
Apart from that, C demands that all operations with floats are
evaluated in at least double precision which can give you problems with
IEEE 754 conformance.
I thought it was not anymore. On page 198 of the K&R, Second Edition,
it's written that "There are two changes here. First, arithmetic on
float operands may be done in single precision, rather than double;
the first edition specified that all floating arithmetic was double
precision".
Post by Michael Mair
Usually, all people pretend that the involved floating point numbers
were IEEE 754 conforming and that there are no differences in
rounding.
BTW: The respective parts of the GSL documentation also are good to
read to get started with floating point arithmetic problems and
possible solutions.
I have seen that there is a new Floating Point standard from IEEE
that is being discussed. Are they also addressing these kinds of
problems?

http://grouper.ieee.org/groups/754/revision.html

Thanks for the very interesting discussion,

ciao

Ale
Michael Mair
2004-11-24 14:42:21 UTC
Permalink
Post by principiante
Post by Michael Mair
[snip: Demonstration of floating point excess precision "problem",
gcc options to get rid of it]
Post by principiante
Post by Michael Mair
Nonetheless, your original program and your "corrections" were both
not apt to expose the excess precision weaknesses as there were
still C issues.
Maybe there is a misunderstanding here. I think that the excess
precision is perfectly fine, the weakness stands in the interaction
between C and the excess-precision floating point engine.
Well, the problem is not exactly a C problem as the C semantics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by principiante
Post by Michael Mair
work fine; the problem is that gcc breaks the C semantics on
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by principiante
Post by Michael Mair
purpose to cope with a broken architecture.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Post by principiante
Post by Michael Mair
C89 does not tell you anything at all about floats and doubles
apart from <FLT/DBL/LDBL>_<EPSILON/MAX/MIN> (and the derived
stuff) and we only demand 6/10/10 digits precision and exponents
ranging from 10^-37 to 10^37.
This is completely acceptable. Apart from that, the trick with
the volatile-pointer evaluation works portably.
In C99, you can find out the rounding mode, the evaluation mode
and -- if available (with gcc never) -- can be sure that IEC 60559
(IEEE 754) support is available if __STDC_IEC_559__ is defined,
have access to the floating point environment and so on.
Post by principiante
Apparently, there is no way to ensure portability; if run on
a processor without excess precision, my example with the
double type would give the expected result.
Yes. Which is why you enforce this behaviour by either making
your whole environment slow when compiling or enforcing the
correct precision with a cast (or the working replacement)
where it is necessary.
In fact the cast doesn't work. This was quite a surprise for me.
"if (c == a/b)" and "if (c == (float)(a/b))" give exactly
the same code.
Yes, this is what I was referring to in the "underlined" part
above. Only some version of the *((volatile float *)&variable)
will work with gcc on all platforms because volatile enforces
an evaluation of the _memory_ where 'variable' is stored which
will enforce truncation/rounding to float.
Rather ugly.
Post by principiante
Post by Michael Mair
Portability is mostly not possible across different architectures
if you do not use a language enforcing the use of IEEE floats
and doubles.
Post by principiante
See for instance
http://cch.loria.fr/documentation/IEEE754/ACM/addendum.html
which is an Addendum (by David Goldberg) of the well-known
paper What Every Computer Scientist Should Know About Floating-Point
Arithmetic, by Kahan.
I rather like the version on the Sun server where they have
another addendum suggesting to take up the C99 idea of defining
integer types like int8_t, int16_t, ..., int_least8_t, ... int_fast8_t,
... and so on and use it on floating point variables.
It's the same addendum, indeed.
You are right -- this completely escaped me as the "good parts"
(compared with http://docs.sun.com/source/806-3568/ncg_goldberg.html
[just search for 'C99']) are missing...
Post by principiante
Post by Michael Mair
Post by principiante
To ensure portability, it seems that the only solution is to force
the compiler to store each result of a single computation in
the right precision memory slot.
Once again: Portability is usually not possible. Even if you have
the same base and the same mantissa/exponent bits, the architecture
may round differently for representation or operations.
Apart from that, C demands that all operations with floats are
evaluated in at least double precision which can give you problems with
IEEE 754 conformance.
I thought it was not anymore. On page 198 of the K&R, Second Edition,
it's written that "There are two changes here. First, arithmetic on
float operands may be done in single precision, rather than double;
the first edition specified that all floating arithmetic was double
precision".
Weeeell, this is what you can read out of it. But in order to do
anything with it, you have to be able to _find_out_ and _change_ the
evaluation method.
Most implementations evaluate either in double or in long double
(you can find this out by checking FLT_EVAL_METHOD in <float.h>
if you go for C99:
[from my gcc's <float.h>]
-1 indeterminate
0 evaluate all operations and constants just to the range and
precision of the type
1 evaluate operations and constants of type float and double
to the range and precision of the double type, evaluate
long double operations and constants to the range and
precision of the long double type
2 evaluate all operations and constants to the range and
precision of the long double type
)
And there are still some situations where you are effectively
working with doubles. I remember having "fun" with the arguments
to a variadic function some years ago :-/
Post by principiante
Post by Michael Mair
Usually, all people pretend that the involved floating point numbers
were IEEE 754 conforming and that there are no differences in
rounding.
BTW: The respective parts of the GSL documentation also are good to
read to get started with floating point arithmetic problems and
possible solutions.
I have seen that there is a new Floating Point standard from IEEE
that is being discussed. Are they also addressing these kinds of
problems?
http://grouper.ieee.org/groups/754/revision.html
I have not had a close look at it yet but AFAIK they only provide
the standard -- it is up to languages or the people making the
hardware to provide support and demand/provide conforming
"implementations". Usually, the programming languages miss out on
this as they have to stay compatible. This is why I find having
a "stdfloat.h" with types flt32_t, flt64_t, flt80_t, flt128_t, ...
and flt32_IEC559_t, and so on such a charming idea.
As there is nearly no real world demand, it probably will never
happen in the PC area -- fast evaluation and give or take relative
errors in the magnitude of sqrt(DBL_EPSILON) at the end of the
computation are prefered (especially if your theoretical model
is coarse enough to easily outdo this error by several orders of
magnitude which is the case in many applications)...
Post by principiante
Thanks for the very interesting discussion,
You're welcome -- apart from that, I learned something too :-)
Before, I was not aware that gcc also gives you trouble with
floats, as (float) typecasts worked just fine for me in the
situations I have considered up to now...


Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.
Bernd Strieder
2004-11-24 15:34:24 UTC
Permalink
Post by principiante
ps: just for the sake of completeness, when compiled with "gcc -O9"
the program above outputs "Equal".
With -O1 and above all calculations are optimized away. The compiled
code will be equivalent to

puts("Equal");

Bernd Strieder
Alessandro Russo
2004-11-24 15:38:24 UTC
Permalink
Post by Bernd Strieder
Post by principiante
ps: just for the sake of completeness, when compiled with "gcc -O9"
the program above outputs "Equal".
With -O1 and above all calculations are optimized away. The compiled
code will be equivalent to
puts("Equal\n");
Bernd Strieder
To fool the optimizer, add

double z;
z = 0;

and instead of

c = a/b;

write

c = (a+sin(z))/(b+cos(z));

The numerical values are the same, but now even with
optimization the result will be "Not Equal".

Ale
Bernd Strieder
2004-11-24 17:29:02 UTC
Permalink
Post by Alessandro Russo
Post by Bernd Strieder
Post by principiante
ps: just for the sake of completeness, when compiled with "gcc -O9"
the program above outputs "Equal".
With -O1 and above all calculations are optimized away. The compiled
code will be equivalent to
puts("Equal\n");
Bernd Strieder
To fool the optimizer, add
double z;
z = 0;
and instead of
c = a/b;
write
c = (a+sin(z))/(b+cos(z));
This is not so easy, because then we have doubles: sin and cos
return doubles. A cast will be necessary to get back to float.

I bet there are compilers out there that could optimize sin(0) and
cos(0) away, and they would clearly be allowed to do so. A few months
ago there was a discussion about recognizing when sin and cos are
calculated with the same argument to optimize this to a fsincos
assembler command doing both at the same time. If that is possible, then
recognizing sin(0) and cos(0) is possible as well. The only reason to
not include this case in the optimizer is that it is rubbish that will
not occur in reasonable code, but will make the compiler slower.
Post by Alessandro Russo
The numerical values are the same, but now even with
optimization the result will be "Not Equal".
What if the result of sin(0) is about zero, but not exactly... Then you
have a different calculation. I think it requires a bit more work to
create an example that shows the point without optimization having a
chance to influence it, and without any other doubts.

The point is that the compiler decides within constraints given by the
platform, by options, and by the code when to truncate a double to
float, and doing so produces surprising results. There are no means of
controlling this except some hacks in the code or some compiler
options. The rules are mainly determined by the desire to produce fast
code by using the double precision hardware even for single precision.
Another compiler release may change the rules and some cases are
different again.

AFAIK there is a new warning about == used on floating point. In fact ==
on floating point is the root of all problems here.

Bernd Strieder
Post by Alessandro Russo
Ale
principiante
2004-11-25 15:30:05 UTC
Permalink
I try to summarize the discussion.

I started with the question of why -ffloat-store in gcc doesn't work as
I expected. After a closer inspection of the man page of gcc, I found
that...

-ffloat-store
(cut)Use -ffloat-store for such programs, after
modifying them to store all pertinent intermediate computations
into variables.

Hence it seems that I should FIRST store all partial results into
variables, and THEN use -ffloat-store. I thought that storing
all partial results into variables was enough to avoid
extra-precision features; but maybe -ffloat-store prevents the
compiler from "optimizing" floating point operations.
I really don't care very much about this anymore.

After a very fine discussion, a question for me remains open. While
we all agree that in the code

double a,b,c;
a=1.0;
b=3.0;
c=a/b;

the test c == a/b fails because of the extra precision, it seems that
c == (double)(a/b) should instead be true. But for gcc (no options) on
x86 it fails.

Is this a gcc bug? Or since a/b is in some sense already a double,
the cast (double) should have no effect?

Thanks,

8-P
Maurizio Loreti
2004-11-25 16:33:31 UTC
Permalink
[SNIP]
...
double a,b,c;
a=1.0;
b=3.0;
c=a/b;
the test c == a/b fails for the extra-precision, it seems that instead
c == (double)(a/b) should be true. But for gcc (no option) on x86 it
fails.
Is this a gcc bug? Or since a/b is in some sense already a double,
the cast (double) should have no effect?
***@mlinux 11 $ cat foo.c
#include <stdio.h>
int main() { double a=1.0, b=3.0, c=a/b;
puts( c == a/b ? "YUCK!" : "DAMMIT");
return 0;
}
***@mlinux 12 $ gcc -O -o foo foo.c && ./foo
YUCK!
***@mlinux 13 $ gcc --version
gcc (GCC) 3.4.3
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

***@mlinux 14 $
--
Maurizio Loreti http://www.pd.infn.it/~loreti/mlo.html
Dept. of Physics, Univ. of Padova, Italy ROT13: ***@cq.vasa.vg
principiante
2004-11-25 16:56:03 UTC
Permalink
Post by Maurizio Loreti
[SNIP]
...
double a,b,c;
a=1.0;
b=3.0;
c=a/b;
the test c == a/b fails for the extra-precision, it seems that instead
c == (double)(a/b) should be true. But for gcc (no option) on x86 it
fails.
Is this a gcc bug? Or since a/b is in some sense already a double,
the cast (double) should have no effect?
#include <stdio.h>
int main() { double a=1.0, b=3.0, c=a/b;
puts( c == a/b ? "YUCK!" : "DAMMIT");
return 0;
}
YUCK!
gcc (GCC) 3.4.3
Copyright (C) 2004 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# gcc -o foo foo.c && ./foo
DAMMIT

Remove -O and it will be DAMMIT.

However, optimization has no role here. If you try the following:

#include <stdio.h>
#include <math.h>
int main() { double a=1.0, b=3.0, z=0.0, c=(a+sin(z))/(b+z);
puts( c == (a+sin(z))/(b+z) ? "YUCK!" : "DAMMIT");
return 0;
}

the answer will be DAMMIT regardless of optimization.

A.
Michael Mair
2004-11-25 17:25:56 UTC
Permalink
Post by principiante
After a very fine discussion, a question for me remains open. While
we all agree that in the code
double a,b,c;
a=1.0;
b=3.0;
c=a/b;
the test c == a/b fails for the extra-precision, it seems that instead
c == (double)(a/b) should be true. But for gcc (no option) on x86 it
fails.
Is this a gcc bug? Or since a/b is in some sense already a double,
the cast (double) should have no effect?
It is a gcc bug. If gcc conformed to either C89 or C99, then
it would not do this.
However, as the difference in speed between this buggy behaviour and
crippling the floating point arithmetic of the architecture to
reach conforming behaviour is tremendous, most people accept this bug
as a feature.

This would be fairer if there were a _simple_, _obvious_ switch
to ensure conforming behaviour and gcc would print a warning when
using, say, -std=c89 -pedantic that you do not reach full conformance
if you do not also use -CrippleFPToCStandard or some such.
At least, this should be mentioned along with the -std=... option
in the documentation.


Cheers
Michael
--
E-Mail: Mine is a gmx dot de address.
Alessandro Russo
2004-11-26 08:14:58 UTC
Permalink
Post by Michael Mair
Post by principiante
After a very fine discussion, a question for me remains open. While
we all agree that in the code
double a,b,c;
a=1.0;
b=3.0;
c=a/b;
the test c == a/b fails for the extra-precision, it seems that instead
c == (double)(a/b) should be true. But for gcc (no option) on x86 it
fails.
Is this a gcc bug? Or since a/b is in some sense already a double,
the cast (double) should have no effect?
It is a gcc bug.
After a long discussion on it.comp.lang.c about a related issue (but
the discussions have converged both to the same topic) this link
showed up:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=14708

which seems to confirm that it's really a gcc bug.

Thanks to everybody for attention.

A. Russo

--
russo at matapp dot unimib dot it
