Support Vector Machine Classifier
Support Vector Machine (SVM) is among the most popular classification
algorithms. It belongs to the family of generalized linear
classifiers. Because SVM covers binary classification
problems only, in the multi-class case SVM must be used in
conjunction with multi-class classifier methods.
SVM is a binary classifier. For a multi-class case, use the
Multi-Class Classifier framework of the library.
Details
Given $n$ feature vectors $x_1 = (x_{11}, \ldots, x_{1p}), \ldots, x_n = (x_{n1}, \ldots, x_{np})$
of size $p$ and a vector of class labels $y = (y_1, \ldots, y_n)$, where
$y_i \in \{-1, 1\}$ describes the class to which the feature vector $x_i$
belongs, the problem is to build a two-class Support Vector Machine
(SVM) classifier.
Training Stage
oneDAL provides two methods to train the SVM model: the Boser method [Boser92] and the Thunder method [Wen2018].
The SVM model is trained to solve the following quadratic optimization problem:

$$\min_{\alpha} \; \frac{1}{2} \alpha^T Q \alpha - e^T \alpha$$

subject to $y^T \alpha = 0$, $0 \leq \alpha_i \leq C$, $i = 1, \ldots, n$,

where $e$ is the vector of ones, $C$ is the upper bound of the
coordinates of the vector $\alpha$, $Q$ is a symmetric matrix of size
$n \times n$ with $Q_{ij} = y_i y_j K(x_i, x_j)$, and $K(x, y)$
is a kernel function.
The working subset of $\alpha$ updated on each iteration of the algorithm is
selected according to the Working Set Selection (WSS) 3 scheme [Fan05].
The scheme can be optimized using one or both of the following techniques:

- Cache: the implementation can allocate a predefined amount of memory to store intermediate results of the kernel computation.
- Shrinking: the implementation can try to decrease the amount of kernel-related computations (see [Joachims99]).
The solution of the problem defines the separating hyperplane and the
corresponding decision function:

$$D(x) = \sum_{k} y_k \alpha_k K(x_k, x) + b,$$

where only those $x_k$ that correspond to non-zero $\alpha_k$ appear in
the sum, and $b$ is a bias. Each non-zero $\alpha_k$ is called a
classification coefficient, and the corresponding $x_k$ is called a
support vector.
Prediction Stage
Given the SVM classifier and $r$ feature vectors $x_1, \ldots, x_r$,
the problem is to calculate the signed value of the decision function
$D(x_i)$, $i = 1, \ldots, r$. The sign of the value defines the class
of the feature vector, and the absolute value of the function is a
multiple of the distance between the feature vector and the separating
hyperplane.
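To make the decision function concrete, here is a small illustrative sketch, not part of the oneDAL API, that evaluates $D(x)$ for a linear kernel given support vectors and classification coefficients $y_k \alpha_k$:

```cpp
#include <cstddef>
#include <vector>

// Illustrative only: evaluates the decision function
//   D(x) = sum_k y_k * alpha_k * K(x_k, x) + b
// for a linear kernel K(u, v) = <u, v>. All names here are
// hypothetical and do not belong to the oneDAL API.
double decisionFunction(const std::vector<std::vector<double> >& supportVectors,
                        const std::vector<double>& coefficients, // y_k * alpha_k
                        double bias,
                        const std::vector<double>& x) {
    double d = bias;
    for (std::size_t k = 0; k < supportVectors.size(); ++k) {
        double dot = 0.0; // linear kernel: inner product <x_k, x>
        for (std::size_t j = 0; j < x.size(); ++j) {
            dot += supportVectors[k][j] * x[j];
        }
        d += coefficients[k] * dot;
    }
    return d; // sign(d) gives the predicted class in {-1, 1}
}
```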
Usage of Training Alternative
To build a Support Vector Machine (SVM) Classifier model using methods of the Model Builder class of SVM Classifier,
complete the following steps:
- Create an SVM Classifier model builder using a constructor with the required number of support vectors and features.
- In any sequence:
- Use the setSupportVectors, setClassificationCoefficients, and setSupportIndices methods to add pre-calculated support vectors, classification coefficients, and support indices (optional), respectively, to the model. For each method, specify random access iterators to the first and the last element of the corresponding set of values [ISO/IEC 14882:2011 § 24.2.7].
- Use setBias to add a bias term to the model.
- Use the getModel method to get the trained SVM Classifier model.
- Use the getStatus method to check the status of the model building process. If the DAAL_NOTHROW_EXCEPTIONS macro is defined, the status report contains the list of errors that describe the problems the API encountered (in case of an API runtime failure).
If you use the setBias, setSupportVectors,
setClassificationCoefficients, or setSupportIndices methods after
calling the getModel method, the initial model is automatically
updated with the new set of parameters.

Examples
C++ (CPU)
Java*
There is no support for Java on GPU.
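The following is a minimal illustrative sketch of the Model Builder steps above, not one of the library's shipped examples. It assumes an svm::ModelBuilder class exposing the methods listed in the steps; exact namespaces, template parameters, and the constructor argument order may differ between library versions:

```cpp
#include <cstddef>
#include <vector>
#include "daal.h"

using namespace daal::algorithms;

void buildModelSketch() {
    const std::size_t nSupportVectors = 2, nFeatures = 2;

    // Pre-calculated values, e.g. imported from another framework.
    // Support vectors are stored row by row; values are made up.
    std::vector<float> supportVectors = { 1.0f, 1.0f, -1.0f, -1.0f };
    std::vector<float> coefficients   = { 0.5f, -0.5f }; // y_k * alpha_k
    std::vector<std::size_t> supportIndices = { 0, 3 };  // optional

    // Constructor with the required number of support vectors and
    // features, per the steps above (argument order assumed).
    svm::ModelBuilder<float> builder(nSupportVectors, nFeatures);

    // In any sequence: pass random access iterators to each value range.
    builder.setSupportVectors(supportVectors.begin(), supportVectors.end());
    builder.setClassificationCoefficients(coefficients.begin(), coefficients.end());
    builder.setSupportIndices(supportIndices.begin(), supportIndices.end());
    builder.setBias(0.0f);

    auto model = builder.getModel();
    // builder.getStatus() reports build errors when the
    // DAAL_NOTHROW_EXCEPTIONS macro is defined.
    (void)model;
}
```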
Batch Processing
SVM classifier follows the general workflow described in
Classification Usage Model.
Training
For a description of the input and output, refer to Usage Model:
Training and Prediction.
At the training stage, SVM classifier has the following parameters:
Parameter | Default Value | Description |
---|---|---|
algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double. |
nClasses | 2 | The number of classes. |
C | 1.0 | The upper bound in conditions of the quadratic optimization problem. |
accuracyThreshold | 0.001 | The training accuracy. |
tau | 1.0e-6 | Tau parameter of the WSS scheme. |
maxIterations | 1000000 | Maximal number of iterations for the algorithm. |
cacheSize | 8000000 | The size of cache in bytes for storing values of the kernel matrix. A non-zero value enables use of a cache optimization technique. |
doShrinking | true | A flag that enables use of a shrinking optimization technique. This parameter is only supported for the defaultDense method. |
kernel | Pointer to an object of the KernelIface class | The kernel function. By default, the algorithm uses a linear kernel. |
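As an illustration of how these parameters are typically set, here is a hedged sketch using the classic DAAL C++ batch interface. The data arrays are made up, and newer library versions may expose the parameters through algorithm.parameter() rather than the public parameter member shown here:

```cpp
#include <cstddef>
#include "daal.h"

using namespace daal::algorithms;
using namespace daal::data_management;

int main() {
    const std::size_t nRows = 4, nFeatures = 2;

    // Illustrative training data (row major) and labels in {-1, 1}.
    float data[]   = { 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, 1.0f };
    float labels[] = { -1.0f, -1.0f, 1.0f, 1.0f };

    NumericTablePtr trainData(new HomogenNumericTable<float>(data, nFeatures, nRows));
    NumericTablePtr trainLabels(new HomogenNumericTable<float>(labels, 1, nRows));

    svm::training::Batch<> algorithm;
    algorithm.parameter.C = 1.0;                   // upper bound of the QP conditions
    algorithm.parameter.accuracyThreshold = 0.001; // training accuracy
    algorithm.parameter.cacheSize = nRows * nRows * sizeof(float); // kernel cache
    // kernel defaults to a linear kernel, so it is not set explicitly here.

    algorithm.input.set(classifier::training::data, trainData);
    algorithm.input.set(classifier::training::labels, trainLabels);
    algorithm.compute();

    auto trainingResult = algorithm.getResult();
    return 0;
}
```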
Prediction
For a description of the input and output, refer to Usage Model:
Training and Prediction.
At the prediction stage, SVM classifier has the following parameters:
Parameter | Default Value | Description |
---|---|---|
algorithmFPType | float | The floating-point type that the algorithm uses for intermediate computations. Can be float or double. |
method | defaultDense | Performance-oriented computation method, the only prediction method supported by the algorithm. |
nClasses | 2 | The number of classes. |
kernel | Pointer to an object of the KernelIface class | The kernel function. By default, the algorithm uses a linear kernel. |
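A matching prediction sketch under the same assumptions, continuing from the training example above (testData is a NumericTablePtr built the same way as trainData):

```cpp
// Continuing the training sketch above (same assumptions apply).
svm::prediction::Batch<> predictor;
predictor.input.set(classifier::prediction::data, testData);
predictor.input.set(classifier::prediction::model,
                    trainingResult->get(classifier::training::model));
predictor.compute();

// Signed decision-function values; the sign gives the predicted class.
NumericTablePtr predictions =
    predictor.getResult()->get(classifier::prediction::prediction);
```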
Examples
oneAPI DPC++
Batch Processing:
oneAPI C++
Java*
There is no support for Java on GPU.
Batch Processing:
Python* with DPC++ support
Batch Processing:
Python*
Batch Processing:
Performance Considerations
For the best performance of the SVM classifier, use homogeneous
numeric tables if your input data set is homogeneous or SOA numeric
tables otherwise.
Performance of the SVM algorithm greatly depends on the cache size
cacheSize. A larger cache size typically results in better
performance. For the best SVM algorithm performance, use a cacheSize
equal to $n^2 \cdot \text{sizeof(algorithmFPType)}$ bytes. However, avoid
setting the cache size to a value larger than the number of bytes
required to store $n^2$ data elements, because the algorithm
does not fully utilize the cache in this case.
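For example, following this guideline (same assumed parameter access style as in the sketches above):

```cpp
// Enough cache to hold the full n x n kernel matrix, and no more:
algorithm.parameter.cacheSize = nRows * nRows * sizeof(float); // algorithmFPType = float
```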
Optimization Notice

Intel’s compilers may or may not optimize to the same degree for
non-Intel microprocessors for optimizations that are not unique to
Intel microprocessors. These optimizations include SSE2, SSE3, and
SSSE3 instruction sets and other optimizations. Intel does not
guarantee the availability, functionality, or effectiveness of any
optimization on microprocessors not manufactured by Intel.
Microprocessor-dependent optimizations in this product are intended
for use with Intel microprocessors. Certain optimizations not
specific to Intel microarchitecture are reserved for Intel
microprocessors. Please refer to the applicable product User and
Reference Guides for more information regarding the specific
instruction sets covered by this notice.

Notice revision #20110804