Recent posts
https://software.intel.com/en-us/recent/73397
NumericTable from multinomial_naive_bayes training result has no data dict
https://software.intel.com/en-us/forums/topic/560598
<p>On Linux, with version 2016.0.069 (the most recent beta version) it looks like it is possible to get a NumericTable from a daal::algorithms::classifier::prediction::Result. However, if I try to do this with the multinomial_naive_bayes example, adding the following lines to the end of the trainModel() function:</p>
<pre class="brush:cpp;"> /* Retrieve algorithm results */
trainingResult = algorithm.getResult();
/* Additional lines below */
SharedPtr<NumericTable> t = trainingResult->get(classifier::training::model);
std::cout << t->getNumberOfRows() << std::endl;
std::cout << t->getNumberOfColumns() << std::endl;
</pre><p>then getNumberOfRows() returns 0, and getNumberOfColumns() segfaults:</p>
<pre class="brush:plain;">Program received signal SIGSEGV, Segmentation fault.
0x000000000041f116 in daal::data_management::Dictionary<daal::data_management::NumericTableFeature>::getNumberOfFeatures (this=0x0)
at /opt/intel/compilers_and_libraries_2016.0.069/linux/daal/include/data_management/data/data_dictionary.h:344
344 return _nfeat;
(gdb) bt
#0 0x000000000041f116 in daal::data_management::Dictionary<daal::data_management::NumericTableFeature>::getNumberOfFeatures (this=0x0)
at /opt/intel/compilers_and_libraries_2016.0.069/linux/daal/include/data_management/data/data_dictionary.h:344
#1 0x000000000041647e in daal::data_management::NumericTable::getNumberOfColumns (
this=0x67a740)
at /opt/intel/compilers_and_libraries_2016.0.069/linux/daal/include/data_management/data/numeric_table.h:506
#2 0x00000000004142d9 in trainModel () at naive_bayes2.cpp:107
</pre><p>
where it appears that the NumericTable's _ddict is null.</p>
<p>Should it always be possible to call getNumberOfRows() and getNumberOfColumns() on a NumericTable? If it isn't the case, then should these methods be throwing an exception (or setting an error status) instead of segfaulting when called?</p>
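As an aside, the guard the question asks for can be illustrated outside DAAL. Below is a minimal sketch (hypothetical `Table`/`Dictionary` types, not the DAAL classes) of an accessor that validates its dictionary pointer and throws instead of dereferencing null:

```cpp
#include <cstddef>
#include <memory>
#include <stdexcept>
#include <utility>

// Hypothetical stand-ins for a table and its feature dictionary.
struct Dictionary { std::size_t nfeat = 0; };

class Table {
public:
    explicit Table(std::shared_ptr<Dictionary> d = nullptr) : _ddict(std::move(d)) {}

    // Throws instead of segfaulting when the dictionary was never set.
    std::size_t getNumberOfColumns() const {
        if (!_ddict) throw std::runtime_error("Table has no data dictionary");
        return _ddict->nfeat;
    }

private:
    std::shared_ptr<Dictionary> _ddict;
};
```

With a guard like this, the failing call in the backtrace would surface as a catchable error rather than a SIGSEGV inside getNumberOfFeatures.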
<p>Many thanks in advance.</p>
Fri, 26 Jun 15 04:30:52 -0700 | Graham M. | 560598

Column sorting
https://software.intel.com/en-us/forums/topic/560525
<p>Given a matrix A of size MxN, I would like to sort it column-wise. In MATLAB this is solved easily and quickly by sort(A), and it can even sort row-wise with sort(A,2).</p>
<p>In Fortran I can't find an equivalent, so thus far I iteratively sort each column using dlsart2, but this is definitely much slower than MATLAB. I am sure someone must have run into this issue, and I hope you can help me speed this up, perhaps along these lines:</p>
<p>1. Is there any column-sorting subroutine that works faster than iterating dlsart2, which costs MN log M assuming each per-column sort is M log M?</p>
<p>2. Is dlsart2 faster than other sorting functions? Has anyone compared it with dpsort.f90 of SLATEC or the sorting functions from ORDERPACK? I found that dlsart2 is two times faster than qsortd.f90.</p>
<p>3. Is there anything faster than dlsart2 that can beat MATLAB's sort function?</p>
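For reference, here is the baseline being discussed, sketched in plain C++ (not MKL/LAPACK): with column-major storage each column is contiguous, so column-wise sorting is one independent std::sort per column:

```cpp
#include <algorithm>
#include <vector>

// Sorts each column of a column-major m x n matrix in ascending order,
// mimicking MATLAB's sort(A). Element (i, j) lives at a[i + j*m].
void sort_columns(std::vector<double>& a, int m, int n) {
    for (int j = 0; j < n; ++j)
        std::sort(a.begin() + static_cast<long>(j) * m,
                  a.begin() + static_cast<long>(j + 1) * m);
}
```

Since the per-column sorts are independent, the loop over j parallelizes trivially (e.g. with an OpenMP directive), which is likely a large part of MATLAB's speed advantage on multicore machines.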
<p>John </p>
Wed, 24 Jun 15 12:49:36 -0700 | john h. | 560525

SVM Accuracy
https://software.intel.com/en-us/forums/topic/558876
<p>I am working with the DAAL SVM. There is a procedure to get the predicted labels for a test dataset:</p>
<p>PredictionResult predictionResult = algorithm.compute();</p>
<p>NumericTable predictionResults = predictionResult.get(PredictionResultId.prediction);</p>
<p>predictionResults is an array of predicted labels.</p>
<p>Is there a way to get the prediction accuracy and the probabilities of the predicted labels (similar to LibSVM or logistic regression)?</p>
<p>The role of PredictionResultId.prediction is not evident to me. Do we need to extend the PredictionResultId class to get the accuracy and probabilities?</p>
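Accuracy, at least, needs no extension of PredictionResultId: once the predicted labels are read out of the NumericTable, it is a direct comparison with the ground truth. A sketch in plain C++ (not a DAAL API; probability estimates would additionally need a calibration step such as Platt scaling, which LibSVM performs internally):

```cpp
#include <cstddef>
#include <vector>

// Fraction of predicted labels that match the ground-truth labels.
double accuracy(const std::vector<int>& predicted, const std::vector<int>& truth) {
    std::size_t hits = 0;
    for (std::size_t i = 0; i < predicted.size(); ++i)
        hits += (predicted[i] == truth[i]);
    return predicted.empty() ? 0.0 : static_cast<double>(hits) / predicted.size();
}
```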
<p>-Hemanth</p>
Tue, 26 May 15 14:43:18 -0700 | Hemanth D. (Intel) | 558876

Question about VDRNGUNIFORM
https://software.intel.com/en-us/forums/topic/557753
<p>Hi, there,</p>
<p>I am trying to generate 100 3-dimensional quasi-random vectors in the (2,3)^3 space by following the instruction on page 32 of "Intel Math Kernel Library Vector Statistical Library Notes." The illustrative example it provides looks like the following:</p>
<p>include <stdio.h> <br />
include “mkl.h” <br />
float mat[100][3]; /* buffer for quasi-random numbers */ <br />
VSLStreamStatePtr stream; <br />
/* Initializing */ <br />
vslNewStream( &stream, VSL_BRNG_SOBOL, 3 ); <br />
/* Generating */ <br />
vsRngUniform( VSL_METHOD_SUNIFORM_STD, stream, 100*3, (float*)mat, 2.0f, 3.0f ); <br />
/* Deleting the streams */ <br />
vslDeleteStream( &stream );</p>
<p>My question is: to my knowledge, the random number generator produces a sequence of numbers and stores them into the matrix in <strong>column-major order</strong>. Why, then, do we declare the buffer for the 100 3-dimensional vectors as mat[100][3] and not the other way around, mat[3][100]?</p>
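A small illustration of the layout question (plain C++, independent of VSL): C arrays are row-major, so a generator that writes 100*3 values sequentially fills mat[0][0], mat[0][1], mat[0][2], mat[1][0], ... and each 3-dimensional vector lands contiguously in one row, which is why mat[100][3] is the natural shape. A hypothetical fill function stands in for the generator here:

```cpp
#include <cstddef>

// Fill a [rows][dims] buffer the way a sequential generator would:
// value k lands at mat[k / dims][k % dims], so row i holds vector i.
void fill_sequential(float* mat, std::size_t rows, std::size_t dims) {
    for (std::size_t k = 0; k < rows * dims; ++k)
        mat[k] = static_cast<float>(k);
}
```

With mat[3][100] the same sequential write would scatter each 3-dimensional vector across three widely separated memory locations.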
<p>Thanks,</p>
<p>R.</p>
Mon, 11 May 15 15:25:41 -0700 | li r. | 557753

How do I use this library?
https://software.intel.com/en-us/forums/topic/544653
<p>How do I use this DAAL library?</p>
Fri, 27 Mar 15 04:07:27 -0700 | Vipin Kumar E K (Intel) | 544653

Data fit dfdInterpolateEx1D function warning code
https://software.intel.com/en-us/forums/topic/543144
<p>Hello everybody, I am having some problems with the dfdInterpolateEx1D function of the MKL Data Fitting module. I call the function, together with all the related initialization/setting/cleanup functions, inside a loop, and the results are sometimes correct and sometimes wrong. In particular, the wrong results are always equal to 0, and in those cases the error/warning status equals 10. As described in the manual, positive statuses indicate warnings, but I cannot find status 10 in the mkl_df_defines.h header file, which according to the manual contains all the defined error/warning statuses. What can I do to understand what is wrong with my code?</p>
<p>The library version is INTEL_MKL_VERSION = 110101; I am using the 32-bit MKL interface and compiling/linking my code with g++ 4.6.3 on an Ubuntu machine.</p>
<p>Many thanks in advance for your help</p>
<p>Christopher</p>
Fri, 13 Mar 15 04:56:23 -0700 | Christopher Bignamini | 543144

What is Intel Data Analytics Acceleration Library about? Who should use it?
https://software.intel.com/en-us/forums/topic/542103
Tue, 24 Feb 15 20:44:30 -0800 | Gennady Fedorov (Intel) | 542103

Usage of Outlier Detection
https://software.intel.com/en-us/forums/topic/535497
<p>I am considering whether to use the outlier detection routine in the Vector Statistical Library (VSL).</p>
<p>Suppose I have an <strong>unknown</strong> monotonic nonlinear function f(x). I somehow evaluated it at 1000 different x points. Plotting these data, I found an almost monotonic nonlinear curve with one or two dozen outliers. So can I make use of the MKL routine? <a href="https://software.intel.com/en-us/node/497940">https://software.intel.com/en-us/node/497940</a></p>
<p>The routine deals with n observations on p variables, which sounds like it is meant for multivariate data, say a set of draws from a multivariate Gaussian distribution. If so, my case looks different. I'm not sure about this; please kindly advise.</p>
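For what it's worth, the VSL routine is indeed aimed at multivariate data (n observations on p variables). For a single noisy curve, a simpler non-MKL alternative is to flag points whose residual from a smoothed or fitted estimate is large by a robust rule; a sketch using the median absolute deviation (the residual inputs and the 3.5 cutoff are assumptions here, not part of any MKL API):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Flags residuals lying more than `cut` robust standard deviations
// from the median, using the median absolute deviation (MAD).
std::vector<bool> mad_outliers(const std::vector<double>& r, double cut = 3.5) {
    auto median = [](std::vector<double> v) {
        std::nth_element(v.begin(), v.begin() + v.size() / 2, v.end());
        return v[v.size() / 2];
    };
    double med = median(r);
    std::vector<double> dev(r.size());
    for (std::size_t i = 0; i < r.size(); ++i) dev[i] = std::fabs(r[i] - med);
    double mad = median(dev) / 0.6745;  // rescale to a normal-theory sigma
    std::vector<bool> out(r.size());
    for (std::size_t i = 0; i < r.size(); ++i)
        out[i] = mad > 0 && std::fabs(r[i] - med) / mad > cut;
    return out;
}
```

The residuals could come from any monotone fit (e.g. an isotonic regression or a moving median of the 1000 samples); points flagged here would then be excluded before the final fit.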
<p>Thanks in advance!</p>
Mon, 17 Nov 14 00:14:33 -0800 | kingking r. | 535497

Gibbs sampling solution?
https://software.intel.com/en-us/forums/topic/533703
<p>Hi</p>
<p>I'm attempting to write a restricted Boltzmann machine using Gibbs sampling for a deep-learning neural net. I had a look in MKL and didn't find a specific routine, so I searched the internet and found a C/Java/Python/R/Scala implementation: <a href="http://www.r-bloggers.com/mcmc-and-faster-gibbs-sampling-using-rcpp/" rel="nofollow">http://www.r-bloggers.com/mcmc-and-faster-gibbs-sampling-using-rcpp/</a></p>
<p>I created my own implementation using ifort and MKL, based on the C code I found there and on the referenced pages. I'm not a mathematician, but I did physics at university 30 years ago and have written neural nets before, so I can follow a formula and get the rough gist of Gibbs sampling; still, I'm treating it as a black-box solution.</p>
<p>Two questions:</p>
<p>1. Is there a ready-made MKL solution?</p>
<p>2. The C code from the web runs in just under 8 seconds on my computer, while the Fortran version using the gamma and Gaussian distributions takes 55 seconds, which is slower than Python. I assume this is because the other programs use distributions returning scalars rather than a vector of size 1 like mine, and there is no statement about the correctness of the C/Java/Python etc. implementations. Indeed, when I changed the return vector size in Fortran to a large value and proportionally reduced the loop count, the MKL implementation came in under 2 seconds, so I'm obviously not making a like-for-like comparison. BUT my simplistic understanding of Gibbs sampling is that x and y need to be cross-related across the two distributions, and I can't see how to do this with a vector of size > 1 to take advantage of the MKL implementation. Any ideas? (I'm using a Mersenne Twister for a direct comparison; I can cut the time in half with a simpler generator.)</p>
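For a like-for-like scalar baseline, the sampler from the linked post can be sketched with the C++ standard library's distributions (not MKL); the kernels match the comments in the Fortran code below, with zero displacement and conditional mean 1/(x+1):

```cpp
#include <cmath>
#include <random>
#include <utility>

// One chain of the classic bivariate Gibbs sampler:
//   x | y ~ Gamma(shape = 3, scale = 1 / (y*y + 4))
//   y | x ~ Normal(mean = 1 / (x + 1), sd = 1 / sqrt(2*x + 2))
std::pair<double, double> gibbs(int iters, unsigned seed) {
    std::mt19937 gen(seed);
    double x = 0.0, y = 0.0;
    for (int i = 0; i < iters; ++i) {
        x = std::gamma_distribution<double>(3.0, 1.0 / (y * y + 4.0))(gen);
        y = std::normal_distribution<double>(1.0 / (x + 1.0),
                                             1.0 / std::sqrt(2.0 * x + 2.0))(gen);
    }
    return {x, y};
}
```

On the vectorization question: because each draw of x depends on the latest y and vice versa, a single chain cannot be vectorized directly. One common approach is to run many independent chains in parallel, generating one vector of gamma and then Gaussian variates per sweep, one element per chain, which is where VSL's vector draws pay off.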
<p>thanks</p>
<p>Steve</p>
<p>include 'mkl_vsl.f90'<br />
PROGRAM Gibbs</p>
<p> USE IFPORT<br />
USE MKL_VSL_TYPE<br />
USE MKL_VSL<br />
IMPLICIT NONE<br />
REAL(8) START_CLOCK, STOP_CLOCK<br />
INTEGER status,n,i,j, M, thin<br />
REAL(8), DIMENSION(1) :: x,y<br />
TYPE (VSL_STREAM_STATE) :: stream, stream2<br />
REAL(8) alpha, a</p>
<p>
!VSL_RNG_METHOD_GAMMA_GNORM_ACCURATE<br />
!VSL_RNG_METHOD_GAMMA_GNORM<br />
!VSL_RNG_METHOD_EXPONENTIAL_ICDF_ACCURATE</p>
<p>START_CLOCK = DCLOCK()</p>
<p>n=1<br />
alpha = 3.0d0<br />
a = 0.0d0 ! gamma displacement / gaussian mean; zero matches the kernels below<br />
x(1) = 0.0d0<br />
y(1) = 0.0d0<br />
M=50000<br />
thin=1000</p>
<p>status = vslnewstream( stream, VSL_BRNG_SFMT19937, 1777 )<br />
status = vslnewstream( stream2, VSL_BRNG_SFMT19937, 1877 )</p>
<p>! f(x|y) = (x^2)*exp(-x*(4+y*y)) ## a Gamma density kernel<br />
! f(y|x) = exp(-0.5*2*(x+1)*(y^2 - 2*y/(x+1)) ## a Gaussian kernel</p>
<p>
do j=1,M<br />
do i=1,thin<br />
status = vdrnggamma( VSL_RNG_METHOD_GAMMA_GNORM, stream, n, x, alpha, a, 1.0d0/(4.0d0 + y(1)**2) )<br />
status = vdrnggaussian( VSL_RNG_METHOD_GAUSSIAN_ICDF, stream2, n, y, a, 1.0d0/sqrt(2.0d0*x(1) + 2.0d0) )<br />
y(1) = 1.0d0/(x(1) + 1.0d0) + y(1)<br />
enddo<br />
enddo</p>
<p>print*, "X" , x<br />
print*, "Y" , y<br />
STOP_CLOCK = DCLOCK()<br />
print *, 'Gibbs Sampler took:', STOP_CLOCK - START_CLOCK, 'seconds.'</p>
<p>end PROGRAM Gibbs</p>
Fri, 17 Oct 14 09:18:36 -0700 | steve o. | 533703

MKL random number stream equivalent to Matlab default RandStream
https://software.intel.com/en-us/forums/topic/531689
<p>Is it possible to generate random numbers with MKL that are equivalent to Matlab random numbers? </p>
<p>I use the following MATLAB and Fortran codes, but the results are different.</p>
<pre class="brush:bash;">Matlab
stream=RandStream('mt19937ar','Seed',0);
RandStream.setGlobalStream(stream);
reset(stream,0)
fid = fopen('~/Desktop/iseed.bin', 'w');
fwrite(fid, stream.State,'int32');
fclose(fid);
rand(10,1)
% 0.8147
% 0.9058
% 0.1270
% 0.9134
% 0.6324
% 0.0975
% 0.2785
% 0.5469
% 0.9575
% 0.9649</pre><p>Fortran</p>
<pre class="brush:fortran;">include 'mkl_vsl.f90'
program main
USE MKL_VSL_TYPE
USE MKL_VSL
implicit none
integer :: params(625)
real(kind=8) r(10) ! buffer for random numbers
TYPE (VSL_STREAM_STATE) :: stream
integer(kind=4) :: errcode
integer(kind=4) :: i,j
integer :: brng,method,seed,n
open(1, file='~/Desktop/iseed.bin', form='binary')
read(1) params
close(1)
n = 10
brng=VSL_BRNG_MT19937
method=VSL_RNG_METHOD_UNIFORM_STD
seed=0
errcode=vslnewstreamex(stream, brng, 625, params)
! alternative option without matlab stream state
! errcode=vslnewstream( stream, brng, seed )
errcode=vdrnguniform(method, stream, n, r, 0.0d0, 1.0d0)
write(*,'(f18.16)') r(1:10)
errcode=vsldeletestream( stream )
end
! result
0.1327075199224055
0.3464016632642597
0.7798899088520557
0.4143710811622441
0.4759427784010768
0.4244252541102469
0.0815817557740957
0.9338225021492690
0.5113811327610165
0.5184877812862396</pre>

Fri, 19 Sep 14 08:49:57 -0700 | zuch | 531689
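Following up on the mismatch above: the sequences would only match if both sides used the same state initialization and the same mapping from 32-bit MT19937 outputs to doubles. MATLAB's mt19937ar reportedly uses the reference genrand_res53 conversion (two 32-bit draws per double); whether vdRngUniform uses the same mapping is an assumption to test, not a confirmed diagnosis. A sketch of genrand_res53 on top of std::mt19937, for comparing against both outputs:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Reference MT19937 "genrand_res53" conversion: two 32-bit draws
// per double, giving 53 random bits uniformly in [0, 1).
std::vector<double> mt_res53(std::uint32_t seed, int n) {
    std::mt19937 gen(seed);
    std::vector<double> out;
    out.reserve(n);
    for (int i = 0; i < n; ++i) {
        std::uint32_t a = gen() >> 5, b = gen() >> 6;              // 27 + 26 bits
        out.push_back((a * 67108864.0 + b) / 9007199254740992.0);  // 2^26, 2^53
    }
    return out;
}
```

If neither this sequence nor vdRngUniform's output matches MATLAB's, the remaining difference is likely in how the 624-word state is initialized from the seed or imported from stream.State.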