Generated code does not compile

Hello,

I am testing the simplest possible example in Flow Graph Designer, the same as in this video:

https://www.youtube.com/watch?v=dSY0dtvA6w4

It generates this code:

int main( int argc, char *argv[] ) {

    graph MainThreads0;

    source_node< int > beginFrame(MainThreads0, beginFrame_body(), false );
    function_node< int, int > GraphicThread(MainThreads0, 1, GraphicThread_body() );
    function_node< int, int > EngineFrame(MainThreads0, 1, EngineFrame_body() );
    join_node< flow::tuple< int, int >, queueing > join(MainThreads0);
    function_node< int, int > EndFrame(MainThreads0, 1, EndFrame_body() );
    make_edge( beginFrame, GraphicThread);
    make_edge( beginFrame, EngineFrame);
    make_edge( GraphicThread, input_port< 0 >( join ));
    make_edge( EngineFrame, input_port< 1 >( join ));
    make_edge( join, EndFrame);
    beginFrame.activate();
    MainThreads0.wait_for_all();
    return 0;
}

And with the Intel C++ Compiler 2013 SP1 I get this error:

1>main.cpp(190): error : no instance of function template "tbb::flow::interface7::make_edge" matches the argument list
1>              argument types are: (tbb::flow::interface7::join_node<std::tuple<int, int>, tbb::flow::interface7::internal::graph_policy_namespace::queueing>, tbb::flow::interface7::function_node<int, int, tbb::flow::interface7::internal::graph_policy_namespace::queueing, tbb::cache_aligned_allocator<int>>)
1>    	make_edge( join, EndFrame );
1>    	^

Was the code generation broken by a new version of TBB?

Thanks

Hi,

From your code snippet, I can see that the input type of the EndFrame node does not match the output type of the join.  The join generates a tuple< int, int > but EndFrame expects an int at its input.   The make_edge fails because these types do not match.   Are the types that you specified for these nodes in Flow Graph Designer correct?  If you run a rule-check in Flow Graph Designer it should complain about this mismatch.
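
To make the mismatch concrete, here are the two relevant declarations from your snippet, with my comments added:

// The join is declared to emit a tuple< int, int >...
join_node< flow::tuple< int, int >, queueing > join(MainThreads0);
// ...but EndFrame is declared with an input type of plain int,
// so make_edge( join, EndFrame ) has no matching overload.
function_node< int, int > EndFrame(MainThreads0, 1, EndFrame_body() );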

Mike

It is easier to blame the tool :) But the sample "feature_detection.graphml" fails the check too.

I did find an exhaustive list that describes all the nodes, etc.

I want the "BeginFrame" node to

"fork" two tasks: "GraphicThread" and "EngineThread".

Then I would like to join those two tasks and, "after" that, execute the "EndFrame" task.

And all of this as an infinite loop: [BeginFrame -> [GraphicThread and EngineThread] -> EndFrame].

And, in parallel to all of them, a "LoadingThread".

How can I model this idea?

Thanks

It does look like your code is *almost* correct.

You have a source_node, beginFrame, which generates an “int” output. You have two successors to that node, GraphicThread and EngineFrame. Both of these are function_node objects that each receive an “int” and output an “int”. The outputs of these two nodes are joined together by the join_node, “join”.

Here’s where the error is. The first template argument to the join_node, “tuple< int, int >”, says that the output of the join_node will be of type “tuple< int, int >”, and it also implies that the join has two input ports, each receiving one of the “int”s. The output of the join is connected to EndFrame, which is a function_node. However, EndFrame is created as a function_node< int, int >. The first template argument specifies that the input type is “int” and the second specifies that the output type is “int”. So there is a mismatch: the join generates an output of type “tuple< int, int >”, but EndFrame expects inputs of type “int”.

You can fix this in Flow Graph Designer by right-clicking on the EndFrame node and selecting “Edit Port Properties”. You can then modify Input port 0 to accept inputs of type “tuple< int, int >”.
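
Equivalently, if you want to patch the generated code by hand, a minimal corrected sketch is below. The *_body functor definitions are not shown in your post, so the ones here are hypothetical stand-ins; the essential changes are the input type of EndFrame and the argument type of its body.

#include "tbb/flow_graph.h"

using namespace tbb::flow;

// Hypothetical stand-ins for the generated *_body functors (not shown in the original post).
struct beginFrame_body {
    int frame;
    beginFrame_body() : frame(0) {}
    bool operator()( int &out ) {
        if ( frame >= 3 ) return false;   // emit a few frames, then stop
        out = frame++;
        return true;
    }
};

struct GraphicThread_body {
    int operator()( int in ) const { return in; }
};

struct EngineFrame_body {
    int operator()( int in ) const { return in; }
};

// EndFrame's body now receives the tuple produced by the join_node.
struct EndFrame_body {
    int operator()( const tuple< int, int > & ) const { return 0; }
};

int main( int argc, char *argv[] ) {

    graph MainThreads0;

    source_node< int > beginFrame(MainThreads0, beginFrame_body(), false );
    function_node< int, int > GraphicThread(MainThreads0, 1, GraphicThread_body() );
    function_node< int, int > EngineFrame(MainThreads0, 1, EngineFrame_body() );
    join_node< tuple< int, int >, queueing > join(MainThreads0);
    // Input type changed from int to tuple< int, int > so it matches the join's output.
    function_node< tuple< int, int >, int > EndFrame(MainThreads0, 1, EndFrame_body() );
    make_edge( beginFrame, GraphicThread);
    make_edge( beginFrame, EngineFrame);
    make_edge( GraphicThread, input_port< 0 >( join ));
    make_edge( EngineFrame, input_port< 1 >( join ));
    make_edge( join, EndFrame);
    beginFrame.activate();
    MainThreads0.wait_for_all();
    return 0;
}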

You are correct that the feature_detection.graphml included in the distribution has the same issue. That example file was provided to show the output of runtime trace collection, not to provide a sample that can be used to generate C++ code. Sorry about the confusion; the sample should be fixed so that it not only provides traces but is also validated for generating C++ code. (As an aside, it is not possible to collect complete type information during runtime trace collection, so a graphml file generated from runtime tracing is not always sufficient to regenerate the C++ code. The feature_detection.graphml file in the distribution was created by runtime tracing and is therefore incomplete.)

You have also asked a second question about running a LoadingThread in parallel with this whole graph. Do you want the LoadingThread to be a node that executes once for each frame, or do you want a thread that continually executes in parallel with the rest of the graph?
