Intel® C++ Compiler

internal compiler error 0_1279

I got an internal compiler error 0_1279 using an evaluation version of the Intel Compiler
for Linux, version 9.1.038 for EM64T.
I reduced it to a small code sample. The following code triggers the internal
error.

extern "C" {
unsigned long _lrotl( unsigned long, unsigned int );
}

unsigned int f()
{
unsigned int a = 1;
unsigned int b = 2;
a = _lrotl( a, b );
return a;
}
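For what it's worth, open-coding the rotate avoids the intrinsic entirely; this is only a sketch (whether it sidesteps this particular ICE is untested):

/* hypothetical replacement for the _lrotl call above */
static inline unsigned long rotl_ul( unsigned long x, unsigned int n )
{
    const unsigned int bits = sizeof(unsigned long) * 8;
    n &= bits - 1;                          /* keep the shift count in range */
    if ( n == 0 )
        return x;                           /* a shift by 'bits' would be undefined */
    return ( x << n ) | ( x >> (bits - n) );
}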

batch build problem

Hi,

I have a Visual Studio 2003 solution which builds correctly with the Intel C++ compiler.

Building the solution from the command line (batch build) with the /IntelSpecific "Microsoft" option results in a linker error:

Linking...(Intel C++ Environment)
(0): internal error: backend signals

xilink: error: problem during multi-file optimization compilation (code 4)
xilink: error: problem during multi-file optimization compilation (code 4)

Why is the Intel C++ linker used? Is the /IntelSpecific "Microsoft" option not sufficient?
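The "multi-file optimization" wording suggests the objects were compiled with /Qipo, and IPO objects contain intermediate code that only xilink can consume, which would explain why the Intel linker runs regardless of the option. A sketch of a test worth trying (assuming /Qipo is indeed on; option spellings per the 9.x docs):

icl /O2 /Qipo- /c main.cpp
link /OUT:app.exe main.obj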

Intel compiler 9.1 EM64T stopping warning 1684

I have a (safe) cast from a pointer type to uintptr_t in a header that results in many distracting warnings with /Wp64 "on".

uintptr_t hash() const { return reinterpret_cast<uintptr_t>(pi_ptr_) ^ ((reinterpret_cast<uintptr_t>(pi_ptr_)) >> 3); }

warning #1684: conversion from pointer to same-sized integral type (potential portability....

/Wp64 highlights other (non-safe) issues, so I just want to disable this one warning, but leave /Wp64 on.

#pragma warning (disable : 1684) does not stop the warning
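A command-line suppression may work where the pragma does not (a sketch; option spellings per the icc/icl 9.1 docs, file name illustrative):

icl /Wp64 /Qwd1684 /c hash.cpp     (Windows)
icc -Wp64 -wd1684 -c hash.cpp      (Linux)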

ia64 floating point optimization bug (?)

Hi,

Consider the following short program. No matter what
I do (except -O0), I get 0.001 twice. It should print 0.001 and 0 if the arithmetic is done in IEEE double precision.

I've tried the flags
-IPF-fltacc -mp -IPF-flt-eval-method0 -mieee-fp, which are
supposed to maintain floating-point precision.

This is icc-9.0 (20051201) on Itanium 2, where the problem
was encountered while trying to compile the ARPREC package. (It works fine on other platforms.)

#include <stdio.h>

/* C = 2^52 */
#define C 4503599627370496.0
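The rest of the program did not survive the post; here is a minimal sketch of the kind of test ARPREC relies on (my reconstruction: the classic (a + C) - C rounding idiom, where strict IEEE double arithmetic prints 0.001 and 0, but an optimizer that folds (a + C) - C back to a prints 0.001 twice):

#include <stdio.h>

/* C = 2^52: adding C to a small double forces rounding to an integer */
#define C 4503599627370496.0

int main(void)
{
    double a = 0.001;
    double b = (a + C) - C;   /* 0 in strict IEEE double: a + C rounds to exactly 2^52 */
    printf("%.3f\n", a);      /* expect 0.001 */
    printf("%.3f\n", b);      /* expect 0.000; prints 0.001 if the compiler folds the expression */
    return 0;
}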

Compiler Warning with Intel c++ 64 bit compiler on Linux environment

I am using the Intel C++ 64-bit compiler in a Linux environment with the following command options.

/apps/Intel/c++/intel_cce_80/bin/icc -g -mp-fpic -Dlnx -Dcat_lnx -cxxlib-icc -DDEBUG -I../../dyn_lib/include/custom -c -o DrawData.o DrawData.c

I am getting the following warning message with the 64-bit compiler.

a) "icc: Command line warning: ignoring option '-c'; no argument required".

Compiling OCTAVE

I'm trying to compile OCTAVE on SuSE Linux AMD64 using icc 9.1 and ifort 9.1 (both 64-bit). After 7 minutes of successful compilation, I get the following error:

(0): internal error: backend signals

compilation aborted for file-io.cc (code 4)

make[2]: *** [file-io.o] Error 4

approaching gcc behaviour with icc

Hi,
I wonder whether there is any standard set of options I could pass to
icc in order to approximate the behaviour (i.e., the set of optimizations
applied) of a specific version of gcc at a specific optimization
level.
I don't expect such issues to be formally documented in the icc
manuals, but it would be interesting to know whether any programmer out
there has experimented in this direction (i.e., 'simulating' gcc
with icc) and has come to any interesting conclusions...
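For what it's worth, a heavily hedged starting point (there is no true equivalence; icc at -O2 may auto-vectorize and relax floating-point where gcc -O2 does not, so most of the work is dialing icc down; option spellings per the icc 9.x docs, file name illustrative):

icc -O1 -c foo.c        # -O1: no auto-vectorization, closer in scope to gcc -O2
icc -O2 -mp -c foo.c    # -mp: maintain IEEE floating-point semantics, as gcc does by default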

Name lookup problems for icc 9.1 and boost/dynamic_bitset

In my project, I make heavy use of the boost libraries, especially boost/dynamic_bitset. All worked fine with icc 9.0 (and several gcc versions, including 4.0), but when I switched to icc 9.1, I get error messages in the boost library code:
icpc bitset.cc
/usr/include/boost/dynamic_bitset/dynamic_bitset.hpp(1435): error: name followed by "::" must be a class or namespace name
const ios_base::iostate ok = ios_base::goodbit;
^
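One local workaround that has worked for similar icc 9.1 lookup errors (an assumption, not an official boost fix) is to fully qualify the name in the header:

// as reported:
//     const ios_base::iostate ok = ios_base::goodbit;
// patched:
const std::ios_base::iostate ok = std::ios_base::goodbit;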

Precision of debug vs optimized build

We are trying to get our debug and optimized builds to give bit-for-bit identical answers for our regression testing (this is scientific, numerically intensive software). We get different answers depending on whether we use optimization or not. Curiously, the answers are "bad" when optimization is off.

My understanding is that with no optimization, intermediate results are computed using 64-bit registers instead of 80-bit. Is there a way to get around this, so that we have one build with -g and no optimization (for debugging) that gives the exact same results as an optimized build?
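A sketch of the usual approach on icc 9.x (the flag exists in 9.1; whether it makes the two builds bit-identical on real code is something to verify): force consistent floating-point evaluation in both builds, rather than trying to make -O0 imitate the optimizer (file name illustrative):

icc -g -O0 -fltconsistency -c sim.c    # debug build
icc -O2 -fltconsistency -c sim.c       # optimized build, same FP evaluation rules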

Problem using multiple threads with the Linux C++ compiler 9.1 and Eclipse

Greetings to all,

I am currently working on a video tracking application and trying to
implement a fast adaptive background estimation algorithm for my
program using IPP 5.0. My system is a Dual Xeon @ 2.8 GHz with 2 GB of
RAM running SuSE 9.3. Let me relate my troubles...
