80-bit long double trouble


Posted by firespot71

Hi,

Compiling a simple application with /Qlong-double (IA32, 12.1, integrated in MSVC on Win7 64-bit) and the two code lines below gives me a linker error for std::numeric_limits<> when linking against the multi-threaded debug DLL (/MDd). Everything is fine if I link statically (/MTd). I suppose this is not intentional?

typedef long double real_type;
std::cout << std::numeric_limits<real_type>::epsilon();

error LNK2019: unresolved external symbol "__declspec(dllimport) public: static UNKNOWN __cdecl std::numeric_limits::epsilon(void)" (__imp_?epsilon@?$numeric_limits@_T@std@@SA_TXZ)

Moreover, when using operator >> to read values into a long double variable, it reads rubbish or crashes.

real_type x;
std::cin >> x;
std::cout << x;

Entering '2.2' as input gives '-5.1488e-247' as output, which is quite different. I am not linking to any other file. Whether Intel links to a wrong MSVC lib or not, I don't know, but I suppose Intel should get linking to the correct lib right, as it has all the relevant info handy. Otherwise, what runtime libs do I need to specify?
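
For completeness, the whole failing test is essentially just those snippets wrapped in main() ( nothing else included beyond <iostream> and <limits> ):

#include <iostream>
#include <limits>

typedef long double real_type;

int main()
{
    // unresolved external with /Qlong-double and /MDd, links fine with /MTd:
    std::cout << std::numeric_limits<real_type>::epsilon() << std::endl;

    // reads rubbish or crashes when built with /Qlong-double:
    real_type x;
    std::cin >> x;
    std::cout << x << std::endl;

    return 0;
}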

And finally: If linking to boost libraries, I strongly suspect that the boost libraries must be built with the /Qlong-double option enabled to ensure binary compatibility - is that correct?

Any help appreciated !
Thanks.

Posted by Georg Zitzlsberger (Intel)

Hello,

AFAIK 80-bit long double is not really supported by the Microsoft libraries - only very limited support is available:
http://msdn.microsoft.com/de-de/library/9cx8xs15.aspx

Please also refer to this thread here:
http://software.intel.com/en-us/forums/showthread.php?t=105429

Best regards,

Georg Zitzlsberger

Posted by Sergey Kostrov
Quoting Georg Zitzlsberger (Intel) AFAIK 80 bit long double support is not really supported in Microsoft Libraries...

Georg,

The problem is not related to the 80-bit precision of the 'long double' type. User 'firespot71' could not link his test case, and
that points to some problem with the STL. I'll follow up with more technical details.

Best regards,
Sergey

Posted by Sergey Kostrov
Quoting firespot71 Any help appreciated !

In order to understand what is wrong you need to do independent verifications with CRT functions, like printf and scanf.

Here are a couple of test cases:

>> Test-Case #1 <<
...
int iMaxValue = std::numeric_limits< int >::max();
float fMaxValue = std::numeric_limits< float >::max();
double dMaxValue = std::numeric_limits< double >::max();
long double ldMaxValue = std::numeric_limits< long double >::max();
...

>> Test-Case #2 <<
...
unsigned int uiControlWordx87 = 0UL;

//uiControlWordx87 = _control87( _PC_24, _MCW_PC );
//uiControlWordx87 = _control87( _PC_53, _MCW_PC );
//uiControlWordx87 = _control87( _PC_64, _MCW_PC );
uiControlWordx87 = _control87( _CW_DEFAULT, _MCW_PC );

printf( "Epsilon for float : %.16f\n", numeric_limits< float >::epsilon() );
printf( "Epsilon for double : %.32f\n", numeric_limits< double >::epsilon() );
printf( "Epsilon for long double : %.32f\n", numeric_limits< long double >::epsilon() );
...

>> Test-Case #3 ( Your modified test ) <<
...
typedef long double real_type;

std::cout << "Test 1 - Epsilon for 'long double': " << numeric_limits< real_type >::epsilon() << endl;
printf( "Test 2 - Epsilon for 'long double': %.21f\n", numeric_limits< real_type >::epsilon() );

real_type x;

std::cout << "Enter a floating-point value: ";
std::cin >> x;

std::cout << x << endl;
printf( "Test 3 - Value for 'long double': %.21f\n", x );
...

Posted by Sergey Kostrov
Quoting Sergey Kostrov
...
>> Test-Case #3 ( Your modified test ) <<
...
typedef long double real_type;

std::cout << "Test 1 - Epsilon for 'long double': " << numeric_limits< real_type >::epsilon() << endl;
printf( "Test 2 - Epsilon for 'long double': %.21f\n", numeric_limits< real_type >::epsilon() );

real_type x;

std::cout << "Enter a floating-point value: ";
std::cin >> x;

std::cout << x << endl;
printf( "Test 3 - Value for 'long double': %.21f\n", x );
...

Here is output with Microsoft C++ compiler ( Visual Studio 2005 ):
...
Test 1 - Epsilon for 'long double': 2.22045e-016
Test 2 - Epsilon for 'long double': 0.000000000000000222045
Enter a floating-point value: 1.234567890
1.23457
Test 3 - Value for 'long double': 1.234567889999999900000
...

I don't see any problems.

Best regards,
Sergey

Posted by firespot71

Not working here.

Epsilon for long double: -1.#QNAN000000000000000000000000000
Test 1 - Epsilon for 'long double': -0
Test 2 - Epsilon for 'long double': 0.000000000000000000000
Enter a floating-point value: 2.2
-5.1488e-247
Test 3 - Value for 'long double': 0.000000000000000000000

This applies to debug mode (a new console project in MSVC9, options left at their defaults, using C++ Compiler XE 12.1.5.344 (IA-32)); the command line is:
/c /Od /D "WIN32" /D "_DEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /EHsc /RTC1 /MTd /GS /fp:fast /Fo"Debug/" /Fd"Debug/vc90.pdb" /W3 /nologo /ZI /Qlong-double

If I compile in release mode the command line is
/c /O2 /Oi /Qipo /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /EHsc /MD /GS /Gy /fp:fast /Fo"Release/" /Fd"Release/vc90.pdb" /W3 /nologo /Zi /Qlong-double
and the results get even weirder: the first printf of numeric_limits works, Test 1 just outputs "-0", Test 2 produces a ridiculously large output of rubbish digits spanning over 4 console lines, then cin and cout do work and output the correct number, and Test 3 again outputs ridiculously large rubbish of over four lines.

So ... ???

Posted by iliyapolak

Maybe 'printf' is not able to properly format and display long double values because of the limited support of 80-bit precision in the MSVCRT library. As stated in the MSDN article, long double values are mapped to 64-bit double arguments, and somehow this mapping is probably not performed by 'printf'.
As always, the best way to understand what is going on under the hood would be reverse-engineering the library's printf and scanf functions, coupled with dynamic analysis under a debugger.

Posted by Tim Prince

It's a bit strange that you would set /fp:fast when trying to reconcile Microsoft's treatment of long double against extended precision. As iliyapolak said, there is no support for long double in Microsoft printf or cout, so you will need to store all values in a std double to send to printf. You must change the contents of numeric_limits if you want to work this way; I don't remember you showing us what you did there.
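
As a sketch of that workaround ( print_ld is just a hypothetical helper, not anything from the Microsoft or Intel libraries ), narrow to double before handing the value to the runtime:

#include <cstdio>

// the MSVC runtime only formats 64-bit floating-point arguments,
// so drop the extra x87 precision explicitly before calling into it
void print_ld( long double value )
{
    double narrowed = static_cast<double>( value );
    std::printf( "%.17g\n", narrowed ); // 17 significant digits round-trip a double
}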

Posted by firespot71

The /fp:fast was just left at its default for testing purposes (this was merely a quickly set up dummy app to illustrate the problem!). Changing it to precise modes does not affect things.

It is unclear what you mean by changing the contents of numeric_limits. That's a std component and I don't intend to change anything there at all (I presume the implementation provides it to me, as it must if conforming; indeed, IIRC the standard says I am not allowed to modify anything in namespace std).

Good, let's tackle this from a different, practical perspective:

Why does the debug DLL not provide the template specialization for long double, but the static does?

I am using only std C++. Why does the Intel compiler simply link to an MSVC runtime library that, as you suggest, seems to be binary incompatible? There are various alternatives (the best being simply that Intel provides correct libs themselves, or provides a wrapper that converts long double to double prior to invoking the MSVC libs, or issues a compile-time error if incompatibilities are detected, or simply disallows the /Qlong-double option on Windows at all). Frankly, just messing things up at runtime is not a particularly good solution.

How am I supposed to know which parts of std C++ I may use and which not? That printf relies on some precompiled libs makes sense and is not difficult to guess. That std::numeric_limits seems to rely on some lib (see the debug DLL issue) is much harder to guess, as it could easily be implemented entirely in a plain header file. So using std functionality seems risky whether it works or not. But how would I know whether ordinary maths ops like exp or log, or, taking it to the extreme, even a plain + or *, would work correctly? After all, these could also link to an incompatible lib?

In summary, is there any somewhat reliable use for long double on Windows at all?

thanks!

Posted by Georg Zitzlsberger (Intel)

Hello,

we're re-using the system libraries from the different platforms for increased compatibility. It's extremely hard (practically almost impossible) to provide our own implementation with the very same semantics; even deviations from the standard & bugs would need to be "emulated". That's tedious and expensive... and the benefit?

The downside is, however, that we depend on 3rd party implementations that might cause some head-scratching in rare cases. Apparently you found one of those.

Only Microsoft can answer the question why static libraries work here but DLLs don't. I guess they implemented it halfway for some kind of internal testing... or realized that there were some glitches with DLLs and stopped any further implementation. Anyway, it's not there and won't ever be.

Bottom line is that even Microsoft guarantees the use of "long double" with a limited set of functions only (see my link above).

Why is it still there... the option?
Well, if you're writing a numerical library and really need 80 bit precision (internally) you still can do it. The option "/Qlong-double" is for the very few who implement such libraries.
We've lots of options where you can "shoot yourself in the foot" if you don't know exactly what's going on behind the scenes. We provide them for maximum flexibility. They're not meant for day-to-day use.
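
A rough sketch of what such a library might do ( hypothetical code, not taken from any Intel library ): keep the 80-bit values strictly inside the translation units built with "/Qlong-double" and pass only plain double across the boundary to anything built against the Microsoft runtime:

// kernel.cpp - compiled with /Qlong-double; nothing 80-bit escapes this file
double sum_extended( const double* data, int n )
{
    long double acc = 0.0L; // 80-bit accumulator, internal use only
    for ( int i = 0; i < n; ++i )
        acc += data[i];
    return static_cast<double>( acc ); // narrow before crossing the ABI boundary
}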

Btw.: In our latest documentation "/Qlong-double" is not prominently documented (anymore). I only found two examples where we still use it for good reason.

I hope I clarified your concerns.

Best regards,

Georg Zitzlsberger

Posted by Sergey Kostrov
Quoting firespot71 ...
the command line is:
/c /Od /D "WIN32" /D "_DEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /EHsc /RTC1 /MTd /GS /fp:fast /Fo"Debug/" /Fd"Debug/vc90.pdb" /W3 /nologo /ZI /Qlong-double
...

Regarding the linking error: please try to compile with the '_MBCS' / 'MBCS' macros instead of the '_UNICODE' / 'UNICODE' macros. I had lots of
similar problems with STL and new C++ operators when I was trying to compile with 'UNICODE' defined.

Best regards,
Sergey

Posted by firespot71

Hi,

The _MBCS / MBCS does not solve the linker problem, unfortunately.

OK Georg, I understand you now: the functions stated in your link are the _only_ ones that I can expect to safely use with 80-bit long double out of everything that is provided in the whole of Standard C++ (I assume I may, of course, also use built-in operators such as +, -, *, /, <, >= etc.); for anything else all bets are off. Hm, quite restrictive.

cheers

Posted by Sergey Kostrov

Here is another set of results from MinGW and Borland C++ compilers:

Application - MgwTestApp - WIN32_MGW
Tests: Start
> Test1017 Start <
Sub-Test 48
Test 1 - Epsilon for 'long double': 1.0842e-019
Test 2 - Epsilon for 'long double': 0.000000000000000000000
Enter a floating-point value: 1.234567890
1.23457
Test 3 - Value for 'long double': -0.000000000000000000000
> Test1017 End <
Tests: Completed

Application - BccTestApp - WIN32_BCC
Tests: Start
> Test1017 Start <
Sub-Test 48
Test 1 - Epsilon for 'long double': 1.0842e-19
Test 2 - Epsilon for 'long double': -0.000000000000000000000
Enter a floating-point value: 1.234567890
1.23457
Test 3 - Value for 'long double': -0.000000000000000000000
> Test1017 End <
Tests: Completed

As you can see, in both cases the C++ operator >> returned a correctly rounded value '1.23457'.

Sorry guys, but I don't understand the point of all these fuzzy explanations from Georg. Three C++ compilers,
that is Microsoft, MinGW and Borland, passed the test and only the Intel C++ compiler failed.

Best regards,
Sergey

Posted by Georg Zitzlsberger (Intel)

Hello Sergey,

did you check the size of "long double"? It's still 64 bit for the Microsoft Visual Studio* compiler (for the others I assume that's also true). AFAIK you cannot change it to 80 bit unless you're using some old 16 bit versions...?

From the Microsoft Visual Studio* documentation:
Type long double is a floating type that is equal to type double.

So, what you're testing is 64 bit FP (type "double").
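
A quick way to verify this ( just a sketch ): with the Microsoft compiler, or with the Intel compiler without "/Qlong-double", it should report 8 bytes and 53 mantissa bits:

#include <cstdio>
#include <limits>

int main()
{
    std::printf( "sizeof(long double) = %u\n", static_cast<unsigned>( sizeof( long double ) ) );
    std::printf( "mantissa bits = %d\n", std::numeric_limits<long double>::digits ); // 53 = double, 64 = x87 extended
    return 0;
}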

Best regards,

Georg Zitzlsberger

Posted by Sergey Kostrov
Hi Georg,

Quoting Georg Zitzlsberger (Intel) Hello Sergey,

did you check the size of "long double"?...

Yes, I did, and I'll provide you with a report for 4 different C++ compilers. However, the problem is not related to
the double-precision data type 'long double'. There is a linker error described in the 1st post of the thread and for some
reason you're ignoring this. Am I wrong? Also, 'firespot71' had a problem with the C++ operator >> and you're ignoring this and
insisting that the problem is related to 'long double'.

Georg, did you try to reproduce it? If yes, did you try to step into the >> C++ operator in order to understand why some wrong value is returned?
Unfortunately, I can't explain what is wrong with that really simple test case.

Guys, you're trying to blame Microsoft without a complete investigation on your side. Please take a look at it; I'd appreciate your feedback.

I'll follow up some time later because we had a power outage and everybody is busy recovering some lost pieces of data.

Best regards,
Sergey

Posted by Georg Zitzlsberger (Intel)

Hello Sergey,

it works well if you omit the option "/Qlong-double". Then the "long double" data type will be 64 bit (double precision), the same as for current Microsoft Visual Studio* compilers (and the others). That's the standard on the Windows* platform.

The option "/Qlong-double" is only available for the Intel C++ Compiler and not (well) documented. It extends the size of type "long double" to 80 bit (extended double precision) and hence conflicts with the ABI of existing libraries (some exceptions, though). That's why there are linker errors and your FP variables are not printed correctly.
The use of this option is neither required for using "long double" types, nor should it be used in normal applications. It's something like the options for changing the calling convention - they for sure can break things but can also be useful in very rare cases.

Hence, don't use "/Qlong-double" unless you have very good reasons.

Best regards,

Georg Zitzlsberger

Posted by Tim Prince

In particular, as you indicated your intention to use the unmodified Microsoft STL headers which support only 64-bit (53-bit precision) long double, you should heed Georg's advice.

Posted by iliyapolak

IIRC cin and cout operators are linked against MSVCRT or MSVCP library which does not support long double precision.
So this could be a reason for linker generated errors.

Posted by Sergey Kostrov
Hi Iliya,

Quoting iliyapolak IIRC cin and cout operators are linked against MSVCRT or MSVCP library which does not support long double precision. So this could be a reason for linker generated errors.

I verified it; please take a look at Post #4.

Here is output with Microsoft C++ compiler ( Visual Studio 2005 ):
...
Test 1 - Epsilon for 'long double': 2.22045e-016
Test 2 - Epsilon for 'long double': 0.000000000000000222045
Enter a floating-point value: 1.234567890
1.23457
Test 3 - Value for 'long double': 1.234567889999999900000
...

Best regards,
Sergey

Posted by Sergey Kostrov
Hi Georg,

Quoting Georg Zitzlsberger (Intel) ...
it works well if you omit the option "/Qlong-double". Then data types of "long double" will be 64 bit (double precision), same as for current Microsoft Visual Studio* compilers (and the others). That's the standard on the Windows* platform.

[SergeyK] Thank you for the explanations. It would be nice to investigate why it creates that problem. Aren't Intel Software Engineers interested in that?
I simply wanted to say that if there is some problem with the Intel C++ compiler when the "/Qlong-double" option is used, and
it is not fixed or disabled completely, that "bad piece of code" looks like a "time bomb".

Best regards,
Sergey

Posted by Sergey Kostrov
Here are a couple more things...

Quoting firespot71 ...
typedef long double real_type;
std::cout << std::numeric_limits<real_type>::epsilon();

error LNK2019: unresolved external symbol "__declspec(dllimport) public: static UNKNOWN __cdecl std::numeric_limits<UNKNOWN>::epsilon(void)" (__imp_?epsilon@?$numeric_limits@_T@std@@SA_TXZ)

I've looked at the 'limits' header file ( VS 2005 / \VC\Include folder ) and this is how 'numeric_limits' is declared:

...
// TEMPLATE CLASS numeric_limits
template<class _Ty>
	class numeric_limits
		: public _Num_base
	{	// numeric limits for arbitrary type _Ty (say little or nothing)
public:
	...
	static _Ty __CRTDECL epsilon() _THROW0()
		{	// return smallest effective increment from 1.0
		return (_Ty(0));
		}
	...
	};
...

My question is: Why does the Intel C++ compiler use an 'UNKNOWN' data type when 'real_type' ( 'long double' ) is explicitly declared?

Also, take a look inside the 'limits' header file and you will see that template classes for the following data types are declared:

...
// CLASS numeric_limits<_Bool>
// CLASS numeric_limits<_LONGLONG>
// CLASS numeric_limits<_ULONGLONG>
// ... plus specializations for the remaining built-in types ( char, short, int, long,
// their unsigned variants, wchar_t, float, double and long double )
...

including the 'long double' data type. Of course, there will be a linker error because there is no declaration for a template class 'numeric_limits'. Isn't that true?

Best regards,
Sergey

Posted by iliyapolak

Here is output with Microsoft C++ compiler ( Visual Studio 2005 ):

I'm aware that every compiler will be able to operate on long double values as far as they are supported by the CPU (in our case the x87 FPU with its 80-bit register file).
In order to fully understand how C++ cin and cout are implemented in the Windows runtime libraries some kind of investigation must be performed. At least two subsystem DLLs will be present and loaded into the application's address space: KERNEL32.DLL and USER32.DLL; the first of them is responsible for standard I/O and is also imported by the runtime library.
By looking at your test case I can see that 'cin', 'cout' and 'printf' are all able to work with long double values, so could it be some problem related to the Intel compiler?

Posted by Sergey Kostrov
Quoting iliyapolak ...By looking at your test case I can see that 'cin', 'cout' and 'printf' are all able to work with long double values, so could it be some problem related to the Intel compiler?

It was confirmed by Georg some time ago. Just for the sake of investigation, these are the sizes of 'long double' for different C++ compilers:

MSC/Intel C++ compilers - long double - 8 bytes

MinGW C++ compiler      - long double - 12 bytes

Borland C++ compiler    - long double - 10 bytes

Turbo C++ compiler      - long double - 10 bytes

Posted by firespot71

If long double has sizeof(8) then as far as I know for both MSVC / Intel it is sort of equivalent to double. Sort of equivalent here means that they are different types in type matching (e.g. in template argument deductions; for example you could not invoke std::max(A, B) if A is double and B is long double, as the types mismatch) but binary implementation is identical and therefore should yield identical execution.

I suppose you have declared the types as long double but, for Intel, compiled without the /Qlong-double option; unless you set this option, long double maps to the double type by default, as outlined above. Therefore your test cases pass smoothly.

For Intel you can force a larger binary representation (80-bit, like your Borland and Turbo) by setting /Qlong-double. Then the types are different with respect to both type matching and binary representation (sizeof should be 10), the whole trouble starts, and I strongly suspect that your test cases would not pass any more.

For MSVC there is, AFAIK, no possibility of forcing long double to anything other than the double implementation. So you never run into compatibility problems, but you also never get more than 64-bit (memory) precision. Your tests should always work.
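
A small illustration of the type-matching point ( sketch only ):

#include <algorithm>

double      a = 1.0;
long double b = 2.0L;

// std::max(a, b);                                        // error: T deduced as both double and long double
long double c = std::max<long double>(a, b);              // force the template argument, or
long double d = std::max(static_cast<long double>(a), b); // cast one operand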

Posted by Sergey Kostrov
Quoting firespot71 ...So you never run into compatibility problems, but also never get more than 64-bit (memory) precision...

It can't be more than 53-bit precision for the double-precision data types ( double or long double ).

Do you still have that linking problem?

Posted by iliyapolak

MSC/Intel C++ compilers - long double - 8 bytes

So the MS compilers are simply truncating the precision of a declared long double to the 53-bit precision of double.

Posted by firespot71
Quoting Sergey Kostrov
It can't be more than 53-bit precision for the double-precision data types ( double or long double ).

Do you still have that linking problem?

Precision should be implementation-dependent and I don't think an upper boundary is specified; although I don't know what Intel uses for 80-bit long double types, I'd strongly guess the mantissa takes more bits than for double. Anyway I was not expressing myself properly, I wanted to refer to the total number of bits comprising the data type (sign + mantissa + exponent).

Linker errors go away if I don't set the /Qlong-double option (as do all other errors); whether that is because the object files do contain a proper definition for 64-bit long double, or because at some stage of compilation / linking double and long double are treated as identical types if both share the same representation, or for some other reason, I don't know. But as the DLLs don't link while the static libs do, I don't think the second item of my list applies.

cheers

Posted by Tim Prince

I think we've beat this to death, but the documentation of /Qlong-double states (briefly) that the option breaks compatibility with Microsoft headers and libraries, on which ICL depends. This should be self-evident if you consider that Microsoft doesn't support 64-bit precision mode. You are over-riding Microsoft's required initialization to 53-bit precision mode, and use of SSE registers in /arch:SSE2 modes (including all X64 usage). If you choose to make the distinction between double and long double which Microsoft doesn't support, you must avoid mixing long double with Microsoft headers and functions compiled by MSVC (including libraries).
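
For reference, that precision-control field can be inspected and changed through _control87, as in Sergey's Test-Case #2 ( a sketch only; Microsoft initializes it to 53-bit precision and its libraries expect it to stay there ):

#include <float.h>   // _control87, _PC_64, _MCW_PC, _CW_DEFAULT ( Microsoft CRT )
#include <stdio.h>

int main( void )
{
    unsigned int cw = _control87( 0, 0 );        // read the current x87 control word without changing it
    printf( "x87 control word: 0x%08x\n", cw );

    _control87( _PC_64, _MCW_PC );               // request a 64-bit significand ( extended precision )
    // ... extended-precision work here ...
    _control87( _CW_DEFAULT, _MCW_PC );          // restore only the precision field to its default ( 53-bit )
    return 0;
}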

Posted by Sergey Kostrov
Quoting iliyapolak

MSC/Intel C++ compilers - long double - 8 bytes

So the MS compilers are simply truncating the precision of a declared long double to the 53-bit precision of double.

Yes, that is correct and I was very surprised to see that. There is no reason to use the 'long double' type in an application or library
since only 8 bytes are allocated for this data type.

For example, the OpenGL designers / developers didn't declare a 'long double' type at all:

>> GL.h <<
...
typedef float GLfloat;
typedef float GLclampf;
typedef double GLdouble;
typedef double GLclampd;
...

Posted by firespot71
Quoting Sergey Kostrov

Yes, that is correct and I was very surprised to see that. There is no reason to use the 'long double' type in an application or library since only 8 bytes are allocated for this data type.

For example, the OpenGL designers / developers didn't declare a 'long double' type at all:

Well, I'd say it still makes sense in general, subject to practical constraints of course. For example one could do it because i) present code might be compiled with a compiler generally (fully) supporting > 8 bytes for this type (e.g. the GCC family, or AFAIK also Intel on Linux), or ii) anticipating possible changes in future versions of the MSVC / Intel-Win defaults, or iii) as Georg has pointed out, restricting long double use on Intel-Win with /Qlong-double specified to those functions supporting it. Indeed I have tried out the latter by doing precisely that and applying it in purely numerical sections where extra precision might matter. Conclusion: it does work without problems (except for needing to link statically, of course), yet in my case the performance penalty did not justify the virtually non-existent extra precision (how far that penalty is due to more complex numerical routines being invoked or the many casts between double and long double, I don't know). Other applications might draw different conclusions, of course.

Note that technically MSVC's and Intel's default of 64-bit long doubles is still compliant with the C++ Standard, as the long double requirements are fulfilled. Still, I wish both would offer a fully compliant larger type and thus provide greater choice, but that's a different story.

Does OpenGL support long double on, say, GCC platforms? Otherwise, I suspect this might simply be due to standard graphics hardware not supporting floating-point ops for > 64 bits (this is for sure not my field of expertise, but are there any 'ordinary' consumer graphics cards out there which do support standard > 64-bit calculations?)

Posted by Sergey Kostrov
Quoting firespot71 ...Does OpenGL support long double on say GCC platforms?..

No. OpenGL is a highly portable library and all type definitions are the same. For example, if some TypeA is supported
on PlatformA it is also supported on all the rest of the platforms. If some TypeX is not supported on PlatformA it is also not supported
on all the rest of the platforms.

Posted by iliyapolak

For example, the OpenGL designers / developers didn't declare a 'long double' type at all:

Rendering APIs like OpenGL and DirectX do not need high-precision long double primitives.
Display hardware is not capable of operating on more than 14 bits per channel in RGBA vectors.
So you do not need 63-bit long double precision per channel to accurately describe more life-like colour or brightness fields.

Posted by Sergey Kostrov
Hi Iliya,

Quoting iliyapolak

For example, the OpenGL designers / developers didn't declare a 'long double' type at all:

Rendering APIs like OpenGL and DirectX do not need high-precision long double primitives...

Some time in 2007 I detected that the 'long double' data type is not declared in OpenGL and I didn't pay attention to it... What a great subject we have now! :)

Best regards,
Sergey

Posted by iliyapolak

Some time in 2007 I detected that the 'long double' data type is not declared in OpenGL

@Sergey
Are you fluent in OpenGL programming?
Finally I have received F. Luna's book on DirectX 11 programming, and this book coupled with Matt Pharr's "Physically Based Rendering" will give me a lot of knowledge about computer graphics from both the practical and theoretical points of view.

Posted by Sergey Kostrov

Sorry. This is a test.
Best regards,
Sergey

Posted by iliyapolak

>>>Some time in 2007 I detected that the 'long double' data type is not declared in OpenGL.
I think that DirectX has a 128-bit vector composed of 32-bit scalars representing colour components, used by High Dynamic Range rendering.

Posted by Sergey Kostrov

This is a test of posting a small test case:

#include

void main( void )
{
printf("Hello New IDZ website...\n");
}

I simply wanted to see how the new edit control re-formats code as soon as it is posted. By the way, what happened to the old source code editor? I really liked it...

Best regards,
Sergey

Posted by Sergey Kostrov

What I can see is that it deleted 'stdio.h' between the 'arrow-left' and 'arrow-right' characters. Also, the 5 space characters before 'printf' were deleted...

Posted by iliyapolak

@Sergey!
Many of our posts simply disappeared when the forum was redesigned. Can you see this issue while looking at your posts?

Posted by Sergey Kostrov

>>...Many of our posts simply disappeared when the forum was redesigned. Can you see this issue while looking at your posts?
>>
I can't see any private posts either.

Posted by Sergey Kostrov

This is a test of posting a small jpg-image and a small txt-file with a test-case. Unfortunately, upload of source files with extensions h, c and cpp is no longer allowed.
...
A question to IDZ website developers: Why do we need a so "important" functionality like "Drag to re-order"?

Attachments:

testimage.jpg (1.18 KB)
testapp.txt (91 bytes)
Posted by iliyapolak

>>...I can't see any private posts either.

There is no such option as a "private" post.
@Sergey
Two or three weeks ago you proposed a summary of my "Optimization of sine Taylor expansion" thread. Would it be possible for you
to do such a thing?

Posted by Sergey Kostrov

>>...Two or three weeks ago you proposed a summary of my "Optimization of sine taylor expansion" thread. Will it be possible
>>for you to do such a thing.
.
Yes, and I will put that task on my schedule. Unfortunately, during the last couple of weeks I have been really busy with different things.

Posted by Sergey Kostrov

>>..."Optimization of sine taylor expansion" thread...

Hi Iliya,
Let's continue the discussion in your "Optimization of sine Taylor expansion" thread. To be honest, it doesn't look good.
Best regards,
Sergey

Posted by iliyapolak

>>>...Let's continue discussion in your "Optimization of sine Taylor expansion" thread.

Ok.
