The documentation is still being put together after the project adopted a new, unified learning API that is now shared by all machine learning models. Most of it was updated just yesterday, but a few parts might still need attention.
To answer your original question, you can find an example of Polynomial SV regression below. Let's say we have 2-dimensional input vectors and would like to learn a mapping from those vectors to a single scalar value.
// Declare a very simple regression problem
// with only 2 input variables (x and y):
double[][] inputs =
{
    new[] { 3.0, 1.0 },
    new[] { 7.0, 1.0 },
    new[] { 3.0, 1.0 },
    new[] { 3.0, 2.0 },
    new[] { 6.0, 1.0 },
};

double[] outputs =
{
    65.3,
    94.9,
    65.3,
    66.4,
    87.5,
};
For the sake of this example, we will set the machine's Complexity parameter to a very high value, forcing the learning algorithm to find hard-margin solutions that would otherwise not generalize very well. When training on real-world problems, leave the UseKernelEstimation and UseComplexityHeuristic properties set to true, or perform a grid search to find optimal values for those parameters:
// Create a LibSVM-based support vector regression algorithm
var teacher = new FanChenLinSupportVectorRegression<Polynomial>()
{
    Tolerance = 1e-5,
    // UseKernelEstimation = true,
    // UseComplexityHeuristic = true,
    Complexity = 10000,
    Kernel = new Polynomial(degree: 1) // you can change the degree
};
Now that we have created the learning algorithm, we can use it to train an SVM model from the data:
// Use the algorithm to learn the machine
var svm = teacher.Learn(inputs, outputs);
And finally, we can query the machine for its answers to the given set of inputs and check how close the predicted values are to the expected ground truth:
// Get machine's predictions for inputs
double[] prediction = svm.Score(inputs);
// Compute the error in the prediction (should be 0.0)
double error = new SquareLoss(outputs).Loss(prediction);
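Once trained, the same machine can also be queried for points it has never seen. As a sketch, the Score method can be called with a single input vector as well (the sample point below is made up purely for illustration):

```csharp
// Predict the output for a new, previously unseen input vector.
// The point { 5.0, 1.0 } is a hypothetical sample chosen only
// to illustrate the single-sample call:
double y = svm.Score(new double[] { 5.0, 1.0 });
```

Since the kernel above has degree 1, the learned model is effectively linear in the inputs, so predictions for nearby points should vary smoothly; increasing the degree lets the machine fit more curved relationships at the cost of a higher overfitting risk.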