Q&A from Intel IPP Webcast on 11/6


Hi all,


Yesterday I joined one of our technical webcast series on multithreading tools and techniques, and talked about how Intel IPP implements and enhances multithreading, from the low-level primitives up to the high-level multimedia sample code, especially for digital media applications. It was a great experience to share our product with developers who are interested in learning about Intel Software Tools. During the live broadcast we received numerous questions about Intel IPP and other tools. Below is a selected list of questions and answers from the webcast, which we thought may be useful to other developers as a reference.


Q: Could you please pass along the URL of this program so we can register for more webinars?
A: <>;

Q: Does IPP support 3D interpolation and smoothing?
A: No, we only have 3D resize. But you can go to the Intel IPP support website at https://premier.intel.com to submit a feature request, and the IPP engineering team will consider it for a future release.


Q: What degree of parallelism can ICC extract when compiling a sequential code base?
A: With ICC you can get 1) SSE(2/3)-level parallelism, and 2) for some loops, the compiler can automatically multithread the loop. In addition, you can express parallelism with OpenMP pragmas, which ICC supports. For more information, please contact the Intel Compiler support website and our compiler support engineers will assist you.



Q: If an app is written with IPP and/or TBB, does it take advantage of AMD chips as well as Intel?
A: It performs best on Intel architecture, but it also supports non-Intel processors such as AMD. If the processor is compatible with the Intel SSE* instructions, IPP will dispatch the appropriate libraries and achieve reasonable performance on non-Intel processors.



Q: So, basically, IPP is a multi-threaded library for digital processing workloads, optimized for Intel architectures?
A: Yes, optimized for Intel architectures.


Q: How does IPP benefit from thread-level parallelism (bottom of slide 9)? There is no explanation of it.
A: Many IPP functions are threaded internally, so code that simply calls IPP functions benefits from that threading. In addition, all IPP functions are thread safe, so they can also be used inside the user's own threaded code.
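
Here is a minimal sketch in C of the two points above, assuming the threaded IPP libraries are linked; ippsAdd_32f, the buffer length, and the block count are illustrative choices, not anything shown in the webcast:

    /* One call on the whole buffer relies on IPP's internal threading;
     * the OpenMP loop then shows the same function called safely from the
     * caller's own threads, since IPP functions are re-entrant. */
    #include <ipp.h>

    #define LEN     (1 << 20)
    #define NBLOCKS 4

    int main(void)
    {
        Ipp32f *a = ippsMalloc_32f(LEN), *b = ippsMalloc_32f(LEN), *c = ippsMalloc_32f(LEN);
        ippsSet_32f(1.0f, a, LEN);
        ippsSet_32f(2.0f, b, LEN);

        ippsAdd_32f(a, b, c, LEN);               /* internal threading (if enabled) */

        #pragma omp parallel for                 /* caller-managed threading */
        for (int i = 0; i < NBLOCKS; ++i) {
            int chunk = LEN / NBLOCKS;
            ippsAdd_32f(a + i * chunk, b + i * chunk, c + i * chunk, chunk);
        }

        ippsFree(a); ippsFree(b); ippsFree(c);
        return 0;
    }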



Q: Can IPP work with Intel IXP 28xx network processors?
A: Intel IPP supports Intel IXP4xx Processors with Intel XScale technology.



Q: Does IPP include a Bayer demosaicing routine?
A: I believe the answer is no. But please feel free to submit a feature request to our IPP support website; we collect all feature requests and consider them in product definition meetings based on market demand.



Q: I read somewhere that if we attend 3 or more of these seminars live we get the TBB book that was mentioned. Is there something we have to do, or will you just send it?
A: Everyone attending 3 or more live webcasts will get an email at the conclusion of the series pointing to a web page where they can enter a shipping address.



Q: After linking with ippi.lib, when a new processor becomes available, will it be automatically supported once updated IPP DLLs become available, or does this require a recompile/relink of my application?
A: You may need to recompile and relink your application.


Q: What is kernel mode?
A: IPP can be used in driver applications, which run in kernel mode.



Q: How about AMD64 processor support for IPP?
A: Yes, we support non-Intel processors and also achieve reasonable performance on them.



Q: How about AMD64 processor support for IPP?
A: Please see the system requirements document on the Intel IPP website.


Q: Are the libraries, meaning DLLs and libs, useful for Linux also?
A: Yes, the Linux version also contains shared and static libraries.


Q: Is it possible to run an application linked with IPP on, e.g., AMD processors (for compatibility)?
A: Yes, it runs and is supported on AMD processors.



Q: Can we use this on smart devices? Pocket PC?
A: Yes, depending on the OS and processor. Please refer to the system requirements on the Intel IPP web site.




Q: Will IPP work on AMD processors?
A: Yes, please refer to the system requirements document.


Q: What is the IPP support web address to submit a request for a Bayer demosaicing routine?
A: Submit your issue at https://premier.intel.com under any of the Intel IPP products in Premier.



Q: What happens when an application built with IPP is run on, for example, an AMD or VIA processor? Thanks.
A: It runs; Intel IPP works on Intel-compatible processors.


Q: Can IPP work in .NET under the Visual Studio IDE?
A: Yes. You can develop .NET applications that use the IPP libraries in the MSVS IDE.


Q: So you don't have to set the number of threads manually?
A: Right. On Intel platforms, the default number of threads is equal to the number of processors. To set it manually, we have functions; please refer to this web page: http://www.intel.com/support/performancetools/libraries/ipp/sb/cs-010662.htm
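
To make the threading-control functions concrete, here is a small sketch assuming the ippGetNumThreads/ippSetNumThreads calls from the ippcore support functions; the thread count of 2 is an arbitrary example:

    #include <stdio.h>
    #include <ipp.h>

    int main(void)
    {
        int nthr = 0;

        ippGetNumThreads(&nthr);      /* default: one thread per processor */
        printf("default IPP threads: %d\n", nthr);

        ippSetNumThreads(2);          /* override the default */
        ippGetNumThreads(&nthr);
        printf("IPP threads now: %d\n", nthr);
        return 0;
    }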



Q: If you call IPP from 2 different threads, will IPP create double the number of threads, or is the number of threads set to one number?
A: The default number of threads for the IPP dynamic libraries is equal to the number of processors in the system. IPP also includes threading control functions; users can call them to set the number of threads they want to use.



Q: Under .NET, is IPP using the managed memory model?
A: No, you have to use the IPP memory management routines.
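
As a rough illustration of those routines (the buffer length is an arbitrary example), data comes from ippsMalloc_* and must be released with ippsFree rather than the CRT free():

    #include <ipp.h>

    int main(void)
    {
        int     len = 1024;
        Ipp32f *buf = ippsMalloc_32f(len);   /* SIMD-aligned allocation */
        if (buf == NULL) return -1;

        ippsZero_32f(buf, len);              /* use the buffer with IPP calls */

        ippsFree(buf);                       /* matching IPP deallocation */
        return 0;
    }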


Q: Why would I want to explicitly set the number of threads differently from the default in my application? That seems to be removing the abstraction layer.
A: Users can simply rely on the default number of threads. But some users need more flexibility and want to control the thread count from their own code; for that they use the IPP threading control functions.



Q: Do IPP functions internally use OpenMP or similar frameworks?
A: Yes, IPP functions use OpenMP internally.




Q: For portability I use industry-standard BLAS (Basic Linear Algebra Subroutines) calls. Does IPP include functions using the standard BLAS calling sequences?
A: We recommend checking out the Intel Math Kernel Library for complete BLAS function support: http://www.intel.com/software/products/mkl. Also consider registering for the upcoming MKL talk in early December.


Q: For portability I use industry-standard BLAS (Basic Linear Algebra Subroutines) calls. Does IPP include functions using the standard BLAS calling sequences?
A: You can use another performance library, Intel MKL, which supports BLAS, LAPACK, etc.


Q: What will happen when a program compiled with IPP is run on an AMD processor?
A: It works fine and achieves reasonable performance.



Q: Is dispatching determined at a capability level (SSE level) or at a processor-type level? I.e., will it make use of SSE instructions on AMD processors, for example?
A: Yes, it will take advantage of Intel SSE instructions.


Q: Is it planned to create IPP functions specialized for communications too?
A: We have signal processing, data compression, and cryptography libraries. Please check the Intel IPP manuals for more details.



Q: Visual Studio has memory-managed C++, but not IPP. Now I know how to marshal to IPP from a C# application; it's non-memory-managed C++.
A: Yes, many of our customers use the C/C++ IPP libraries from C#. We also have some samples on how to call IPP from C#.



Q: What does it mean that IPP threads are self-contained?
A: Some IPP functions are threaded internally. Please find the complete list of threaded APIs at http://support.intel.com/support/performancetools/libraries/ipp/sb/CS-026584.htm. For the latest version, 5.3, you can find the threaded API list in the doc directory.



Q: Does Intel IPP have any functions to process strings or XML?
A: IPP has highly optimized string processing functions. Intel also recently released an XML processing library, which is not part of IPP. Please see our XML product page at http://www3.intel.com/cd/software/products/asmo-na/eng/335035.htm



Q: What is the price of IPP?
A: Please visit the website for pricing: http://www3.intel.com/cd/software/products/asmo-na/eng/238658.htm



Q: How can I test Intel IPP?
A: You can get an evaluation copy via the Intel IPP web site (http://www.intel.com/software/products/ipp).



Q: Do you support JPEG lossless as required by DICOM?
A: Yes, we support it. We have sample code; see the jpegview sample at http://www3.intel.com/cd/software/products/asmo-na/eng/220046.htm. This is a new feature in Intel IPP v5.3.



Q: Does IPP work on AMD processors?
A: Yes, IPP works on Intel-compatible processors.


Q: What is the thread model in IPP? Is there overhead for thread creation, or are the threads kept in a thread pool and assigned different jobs?
A: Intel IPP uses OpenMP to implement threading. The OpenMP* runtime maintains the thread pool.


Q: If I compile on one Intel platform (2x CPU), can I run it on another Intel platform (4x CPU)?
A: Yes, it will work fine, and it will take advantage of the quad-core system.



Q: I see that you support Linux. Is the IPP code footprint small enough to use in an embedded Intel system?
A: Yes, a small footprint is possible. Please refer to this link for the different linkage models and choose the one with the smallest footprint: http://www.intel.com/support/performancetools/libraries/ipp/sb/CS-021491.htm


Q: Are the Integrated Performance Primitives only about performance, or about quality as well?
A: Both, and even more!



Q: Does IPP interact well with TBB? For instance, what if a tbb::parallel_for loop calls IPP functions? Will the IPP functions generate the default number of threads for their own library, even if the parallel_for loop has already split the work among the hardware threads?
A: Not at this time. Our engineering team is investigating this with TBB, and we look forward to a possible sample in a future IPP release demonstrating Intel IPP usage with TBB. If you have any further concerns, please contact us via Intel Premier.



Q: But with what typical performance penalty, especially on multi-core?
A: Can you clarify the question? If you mean threading on multi-core, the typical performance penalties are thread scheduling overhead, false sharing, etc.


Q: Do the processor ID functions also identify non-Intel CPUs?
A: Yes, please check the function ippGetCpuType among the support functions listed in ippsMan.pdf for details. As long as a non-Intel CPU has compatible SSE support, the IPP CPU ID function will recognize it too.
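
A short sketch around that call; ippCpuUnknown is one possible return value, and the full IppCpuType enumeration is defined in the IPP headers of your installed version:

    #include <stdio.h>
    #include <ipp.h>

    int main(void)
    {
        IppCpuType cpu = ippGetCpuType();

        if (cpu == ippCpuUnknown)
            printf("CPU not recognized; generic code will be dispatched\n");
        else
            printf("CPU type code: %d\n", (int)cpu);
        return 0;
    }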


Q: Are the IPP data compression functions internally threaded to take advantage of Intel multi-core processors? If so, how large is the improvement in performance on quad-core?
A: Yes. Please check the latest version, 5.3, for further testing and benchmarking; we enhanced the data compression functions and their performance in this version.


Thanks again for your interest.

Regards,
Ying Song

