Why are Intel DevCloud inference and load time numbers inconsistent?



Hi All, 

I'm doing an Intel Edge IoT course on Udacity. The output from Intel's DevCloud is inconsistent most of the time, for example the inference time and the model load time vary between runs.
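One way to make run-to-run variation visible is to time the same step several times and report the spread instead of a single number. The sketch below is a generic timing harness, not DevCloud-specific; `workload` is a hypothetical stand-in for the model load-and-infer step.

```python
import statistics
import time

def time_runs(workload, n_runs=5):
    """Time a callable over several runs; return (mean, stdev) in seconds.

    On a shared cluster a single measurement can be misleading, so the
    standard deviation across repeats gives a fairer picture of the noise.
    """
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        workload()  # placeholder for loading the model / running inference
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)

# Dummy CPU-bound workload standing in for inference.
mean_s, stdev_s = time_runs(lambda: sum(i * i for i in range(100_000)))
print(f"mean {mean_s:.4f}s, stdev {stdev_s:.4f}s")
```

If the standard deviation is a large fraction of the mean, the variation is likely coming from the shared environment rather than the model itself.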

It would be great if Intel could provide a backend script that shows core utilization, or tell us how to monitor backend activity ourselves.

We don't know what happens in the back end once we submit our model through the job queue. It's not developer-friendly, to be honest. A UI for analyzing the hardware configuration would be very helpful. I'd be glad to add more suggestions after completing this tutorial.
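Until such a UI exists, a rough workaround is to sample the system load from inside the submitted job itself. This sketch uses the Unix-only `os.getloadavg()` as a crude proxy for core utilization; it won't break down usage per job, but it at least shows whether other workloads are competing for the node while yours runs.

```python
import os
import time

def sample_load(interval_s=0.2, samples=3):
    """Sample the 1-minute load average a few times (Unix only).

    A load average well above the node's core count while your job runs
    suggests contention from other jobs on the shared machine.
    """
    readings = []
    for _ in range(samples):
        one_min, _, _ = os.getloadavg()
        readings.append(one_min)
        time.sleep(interval_s)
    return readings

print("cores:", os.cpu_count())
print("1-min load samples:", sample_load())
```

Dropping a few lines like this into the job script, before and after inference, gives a coarse picture of backend activity without any special access.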

 

Also, is it possible to set this up in a local environment and access the DevCloud from there?

 

Thank You,

Jegathesan S


Hi,

Thank you for your post. Would you be able to provide more details?

1. Which tutorial are you trying to run?

2. How many runs did you execute?

3. Can you share the results of a couple of runs, along with the device chosen (e.g. CPU/GPU/VPU)?
