Least-Squares Support Vector Machine-Based Cancer Prediction System

Keywords

Cancer Prediction, Multiple Feature Classification, Least-Squares Support Vector Machine, Traditional Combination.

Authors

  • Sivakani R, Department of Artificial Intelligence and Data Science, Dhaanish Ahmed College of Engineering, Chennai, Tamil Nadu, India
  • Rajasekaran G, Department of Computer Science Engineering, Dhaanish Ahmed College of Engineering, Chennai, Tamil Nadu, India
  • P. Velavan, Department of Computer Science Engineering, Dhaanish Ahmed College of Engineering, Chennai, Tamil Nadu, India
  • B. Vaidianathan, Department of Electronics & Communication Engineering, Dhaanish Ahmed College of Engineering, Chennai, Tamil Nadu, India
May 6, 2024

Abstract

Support vector machines (SVMs) are supervised learning models that analyze data for classification and regression using their associated learning algorithms. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other. SVMs can handle datasets with unequal class frequencies: many implementations allow the slack penalty to be set to different values for the positive and negative classes (asymptotically equivalent to changing the class frequencies). Techniques for dealing with imbalanced datasets either improve the classification algorithm itself or balance the classes in the training data (data preparation) before the data are given to the learning algorithm; the second approach is preferable because it generalizes better. In this research, we propose a strategy for efficiently classifying data with many features using a support vector machine. Instead of training all of the base classifiers required for a decision combination in advance, as typical combination methods do, the proposed method trains the classifiers and combines their decisions all at once. Because the proposed method solves a single optimization problem, rather than the multiple optimization problems that existing methods require, training complexity can be reduced. In addition, when combining base classifiers, the proposed method takes into account both their performance on the training data and their ability to generalize, whereas conventional combination methods consider only training-set performance. The experimental results validate the effectiveness of our strategy.
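The sketch below is not the authors' implementation; it is a minimal illustration of the two ideas named in the abstract: a least-squares SVM trained by solving a single linear system, with per-class penalty weights so that an under-represented class (for example, the cancer-positive class) is not overwhelmed by the majority class. The hyperparameters gamma (regularization) and sigma (RBF kernel width), and the function names, are assumptions made for the example.

```python
# Minimal sketch (an assumption, not the paper's code) of a weighted
# least-squares SVM binary classifier for imbalanced data.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0, class_weight=None):
    """Train an LS-SVM by solving one linear (KKT) system.

    y must contain labels in {-1, +1}. class_weight maps each label to a
    penalty multiplier, e.g. {+1: 5.0, -1: 1.0} to up-weight a rare class.
    """
    n = len(y)
    w = np.ones(n)
    if class_weight is not None:
        w = np.array([class_weight[int(label)] for label in y], dtype=float)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    # KKT system: [[0, y^T], [y, Omega + diag(1/(gamma*w))]] [b; alpha] = [0; 1]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.diag(1.0 / (gamma * w))
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_new, sigma=1.0):
    # Decision rule: sign( sum_i alpha_i * y_i * K(x_i, x) + b )
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)
```

With hypothetical arrays X (features) and y (labels in {-1, +1}), a call such as lssvm_train(X, y, class_weight={+1: 5.0, -1: 1.0}) penalizes errors on the positive class more heavily, which is one standard way to realize the per-class slack penalties mentioned above; training reduces to a single linear solve rather than an iterative quadratic program.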
