Discussion:
16 byte long double with only 10 byte precision?
Pavel Pokorny
2005-11-24 09:00:23 UTC
Dear gcc friends,

can you, please, help me
to use a 16 byte long double precision (35 decimal digits)?
It looks like a 10 byte (18 digits) precision
on my AMD Opteron hp xw9300 workstation
although sizeof reports 16 bytes!

#include <stdio.h>

int main(void)
{
    int i;
    long double x, dx = 0.1, x0 = 2;

    for (i = 1; i < 20; i++) {
        dx = dx / 10;
        x = x0 + dx;
        x = x - x0;
        (void) printf("i=%d x=%LG\n", i, x);
    }
    (void) printf(" sizeof(x) = %d \n", (int)sizeof(x));
    return 0;
}

i=1 x=0.01
i=2 x=0.001
i=3 x=0.0001
i=4 x=1E-05
i=5 x=1E-06
i=6 x=1E-07
i=7 x=1E-08
i=8 x=1E-09
i=9 x=1E-10
i=10 x=1E-11
i=11 x=1E-12
i=12 x=1E-13
i=13 x=1E-14
i=14 x=1.00007E-15
i=15 x=9.99634E-17
i=16 x=9.97466E-18
i=17 x=1.0842E-18
i=18 x=0
i=19 x=0
sizeof(x) = 16

gcc (GCC) 3.4.4 20050721 (Red Hat 3.4.4-2)
Linux 2.6.9-22.EL #1 Mon Sep 19 17:49:49 EDT 2005 x86_64 x86_64 x86_64

Thanks for any advice.
--
Pavel Pokorny
Math Dept, Prague Institute of Chemical Technology
http://www.vscht.cz/mat/Pavel.Pokorny
Larry I Smith
2005-11-24 14:20:31 UTC
Post by Pavel Pokorny
can you, please, help me
to use a 16 byte long double precision (35 decimal digits)?
It looks like a 10 byte (18 digits) precision
although sizeof reports 16 bytes!
[program and output snipped]
If the precision is not specified in the printf format string,
it defaults to 6 significant digits for %G; see 'man -S3 printf'
for details.
For example, to get a precision of 30 use "%.30LG", e.g.:

printf ("i=%d x=%.30LG\n",i,x);

Regards,
Larry
Michael Mair
2005-11-24 22:31:33 UTC
Post by Larry I Smith
Post by Pavel Pokorny
can you, please, help me
to use a 16 byte long double precision (35 decimal digits)?
[program and output snipped]
If the precision is not specified in the printf format string,
then it defaults to 6; see 'man -S3 printf' for details.
printf ("i=%d x=%.30LG\n",i,x);
Apart from that: <float.h> defines the numerical limits required
by the C Standard, so you can query the number of mantissa bits,
the effective number of decimal digits, etc.

Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Marco Manfredini
2005-11-25 12:07:16 UTC
Post by Pavel Pokorny
Dear gcc friends,
can you, please, help me
to use a 16 byte long double precision (35 decimal digits)?
It looks like a 10 byte (18 digits) precision
on my AMD Opteron hp xw9300 workstation
although sizeof reports 16 bytes!
On x86 platforms, 'long double' maps to the internal extended floating
point format of the numerical coprocessor, which is 80 bits long. There
is no hardware support for 128-bit floating points.
However, accessing an 80-bit value in memory on a non-16-byte boundary
is very costly, and therefore 'long double' has been padded to 16 bytes
to give it proper alignment.
Michael Mair
2005-11-25 18:00:58 UTC
Post by Marco Manfredini
Post by Pavel Pokorny
It looks like a 10 byte (18 digits) precision
although sizeof reports 16 bytes!
[snip]
On x86 platforms, 'long double' maps to the internal extended floating
point format of the numerical coprocessor, which is 80 bits long. There
is no hardware support for 128-bit floating points.
However, accessing an 80-bit value in memory on a non-16 byte boundary
is very costly and therefore the 'long doubles' have been made 16 byte
long to give them proper aligning.
I can't tell for your platform/OS/gcc combination, but in my case
I find 4-byte alignment (a 12-byte size):

***@omexochitl ~/test/C
$ cat ld.c
#include <stdio.h>
#include <limits.h>

int main (void)
{
    unsigned long bytes = sizeof (long double);

    printf("%lu bytes, %lu bits\n",
           bytes,
           (unsigned long)(CHAR_BIT * bytes));

    return 0;
}

***@omexochitl ~/test/C
$ gcc -ansi -pedantic -W -Wall -O ld.c -o ld

***@omexochitl ~/test/C
$ ./ld
12 bytes, 96 bits


Cheers
Michael
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Marco Manfredini
2005-11-29 12:19:24 UTC
Post by Michael Mair
Post by Marco Manfredini
Post by Pavel Pokorny
It looks like a 10 byte (18 digits) precision
on my AMD Opteron hp xw9300 workstation
although sizeof reports 16 bytes!
On x86 platforms, 'long double' maps to the internal extended
s/x86/x86_64/g
Post by Michael Mair
I can't tell for your platform/OS/gcc combination but in my case,
$ ./ld
12 bytes, 96 bits
see:
$ info --index-search="m128bit-long-double" gcc

for details.
Michael Mair
2005-11-30 18:08:09 UTC
Post by Marco Manfredini
[snip]
$ info --index-search="m128bit-long-double" gcc
for details.
Thanks :-)
--
E-Mail: Mine is an /at/ gmx /dot/ de address.
Pavel Pokorny
2005-11-28 12:36:16 UTC
Post by Marco Manfredini
Post by Pavel Pokorny
can you, please, help me
to use a 16 byte long double precision (35 decimal digits)?
[snip]
On x86 platforms, 'long double' maps to the internal extended floating
point format of the numerical coprocessor, which is 80 bits long. There
is no hardware support for 128-bit floating points.
However, accessing an 80-bit value in memory on a non-16 byte boundary
is very costly and therefore the 'long doubles' have been made 16 byte
long to give them proper aligning.
I have seen software support for 16-byte long double
arithmetic (+ - * /) on at least HP-UX and IRIX. Is there
something similar for Linux on a 64-bit CPU, given that the
values are already stored in 16 bytes anyway?

Thanks for any help
--
Pavel Pokorny
Math Dept, Prague Institute of Chemical Technology
http://www.vscht.cz/mat/Pavel.Pokorny
a***@alex.org.uk
2005-11-27 16:49:49 UTC
Have you looked at the following text from the manpage (at least
for 4.0.2)? Particularly the paragraph starting "Notice".

Alex


-m96bit-long-double
-m128bit-long-double

These switches control the size of the "long double" type. The i386
application binary interface specifies the size to be 96 bits, so
-m96bit-long-double is the default in 32-bit mode.

Modern architectures (Pentium and newer) would prefer "long double" to
be aligned to an 8 or 16 byte boundary. In arrays or structures
conforming to the ABI, this would not be possible. So specifying a
-m128bit-long-double will align "long double" to a 16 byte boundary by
padding the "long double" with an additional 32 bit zero.

In the x86-64 compiler, -m128bit-long-double is the default choice as
its ABI specifies that "long double" is to be aligned on 16 byte
boundary.

Notice that neither of these options enable any extra precision over
the x87 standard of 80 bits for a "long double".

Warning: if you override the default value for your target ABI,
structures and arrays containing "long double" variables will change
size, and the calling convention for functions taking "long double"
arguments will be modified. Hence they will not be binary compatible
with arrays or structures in code compiled without that switch.