Volume 24 Issue 5
-
Amal Al-Shahrani;Amjad Alghamdi;Areej Alqurashi;Raghad Alzahrani;Nuha Imam 1
Individuals with visual impairments face numerous challenges in their daily lives, with navigating streets and public spaces being particularly daunting. The inability to identify safe crossing locations and assess the feasibility of crossing significantly restricts their mobility and independence. Globally, an estimated 285 million people live with visual impairment, 39 million of whom are categorized as blind and 246 million as visually impaired, according to the World Health Organization. In Saudi Arabia alone, there are approximately 159,000 blind individuals, according to unofficial statistics. The profound impact of visual impairments on daily activities underscores the urgent need for solutions that improve mobility and enhance safety. This study addresses this pressing issue by leveraging computer vision and deep learning techniques to enhance object detection capabilities. Two models were trained: one focused on street-crossing obstacles, and the other on general object search. The first model was trained on a dataset comprising 5283 images of road obstacles and traffic signals, annotated to create a labeled dataset, and was then trained using the YOLOv8 and YOLOv5 models, with YOLOv5 achieving a satisfactory accuracy of 84%. The second model was trained on the COCO dataset using YOLOv5, yielding an impressive accuracy of 94%. By improving object detection capabilities through advanced technology, this research seeks to empower individuals with visual impairments, enhancing their mobility, independence, and overall quality of life. -
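As an illustration of the training-and-inference workflow described above, here is a minimal sketch using the Ultralytics YOLO API; the dataset file "obstacles.yaml" and the image name are placeholders, not artifacts of the study.

from ultralytics import YOLO  # pip install ultralytics

# Start from a pretrained checkpoint and fine-tune on the annotated obstacle dataset.
model = YOLO("yolov8n.pt")
model.train(data="obstacles.yaml", epochs=100, imgsz=640)   # placeholder dataset config
metrics = model.val()                                       # precision/recall/mAP on the val split

# Detect obstacles and traffic signals in a street image.
for result in model.predict("crosswalk.jpg", conf=0.5):
    print(result.boxes.cls, result.boxes.xyxy)              # class ids and bounding boxes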
Amal Alshahrani;Sumayyah Albarakati;Reyouf Wasil;Hanan Farouquee;Maryam Alobthani;Someah Al-Qarni 11
While artificial neural networks are adept at identifying patterns, they can struggle to distinguish between actual correlations and false associations between extracted facial features and criminal behavior within the training data. These associations may not indicate causal connections. Socioeconomic factors, ethnicity, or even chance occurrences in the data can influence both facial features and criminal activity. Consequently, the artificial neural network might identify linked features without understanding the underlying cause. This raises concerns about incorrect linkages and potential misclassification of individuals based on features unrelated to criminal tendencies. To address this challenge, we propose a novel region-based training approach for artificial neural networks focused on criminal propensity detection. Instead of relying solely on overall facial recognition, the network systematically analyzes each facial feature in isolation. This fine-grained approach enables the network to identify which specific features hold the strongest correlations with criminal activity within the training data. By focusing on these key features, the network can be optimized for more accurate and reliable criminal propensity prediction. This study examines the effectiveness of various algorithms for criminal propensity classification. We evaluate the YOLO versions YOLOv5 and YOLOv8 alongside VGG-16. Our findings indicate that YOLO achieved the highest accuracy (0.93) in classifying criminal and non-criminal facial features. While these results are promising, we acknowledge the need for further research on bias and misclassification in criminal justice applications. -
This study analyzes the global literature on the black open-access phenomenon from 2011 to 2023. A bibliometric analysis was conducted using the Scopus database. The search strategy employed advanced queries with multiple synonymous terms to ensure exhaustive retrieval of relevant documents. The VOSviewer software was employed to visualize the co-occurrence networks. The search identified 90 papers published during the study period. An evolving scholarly landscape was revealed, with heightened attention from 2016 onwards, peaking in 2017, 2021, and 2023. Articles constitute 83.3% of the total published documents. Singh and Srichandan are prolific authors, with 11.2% of the total publications. The United States contributes 18.9% of the papers, followed by India and Spain. Information Development and Scientometrics are pivotal journals in scholarly discussions in this area, contributing 4.4% of publications. Co-occurrence network visualization revealed "Sci-Hub" and "open access" as the most used keywords in the global literature. The findings underscore the need for additional research to discover innovative business models that safeguard intellectual property rights while meeting researchers' evolving needs. The importance of this paper comes from being the first bibliometric study analyzing the international literature on this phenomenon, which provides a basis for future research efforts and policymaking.
-
In a previous study, we carried out an architecture remodularization based on classes and packages using Formal Concept Analysis (FCA) [13] [14] [30]. We obtained two possible remodularized architectures and explored the issue of redistributing the classes of a package to other packages. We used an approach based on a directed graph to determine the packages that receive the redistributed classes, and we evaluated the quality of a remodularized software architecture with metrics [31] [28] [29]. In this paper, we address the efficiency of the directed graph in the remodularization of software architectures compared with the Formal Concept Analysis (FCA) method. The formal FCA method is not as widely adopted as the labeled directed graph; for this reason, our directed graph approach is more effective owing to its simplicity and popularity.
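A minimal sketch of the directed-graph idea, under the simplifying assumption that each class to be redistributed is scored by how many references it holds toward each candidate package; the class names and counts below are purely hypothetical, not taken from the paper.

# Weighted edges of a labeled directed graph: class -> package, weight = reference count.
references = {
    "OrderValidator": {"billing": 5, "shipping": 1},   # hypothetical example data
    "LabelPrinter":   {"shipping": 4, "billing": 0},
}

def redistribute(refs):
    """Assign each class to the package it references most heavily."""
    return {cls: max(targets, key=targets.get) for cls, targets in refs.items()}

print(redistribute(references))   # {'OrderValidator': 'billing', 'LabelPrinter': 'shipping'}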
-
Cyber-Physical Systems (CPS) are complex, interconnected systems that combine physical components with computational elements and networking capabilities. They bridge the gap between the physical world and the digital world, enabling the monitoring and control of physical processes through embedded computing systems and networked communication. These systems introduce several security challenges which, if not addressed, can lead to vulnerabilities that may result in substantial losses. Therefore, it is crucial to thoroughly examine and address the security concerns associated with CPS to guarantee the safe and reliable operation of these systems. Existing security requirements engineering methods have been considered for these concerns, but they fail to produce the required results because they were originally developed for software systems rather than CPS and are now outdated for CPS. In this paper, a Security Requirements Engineering Methodology for CPS (CPS-SREM) is proposed. A comparison of state-of-the-art methods (UMLSec, CLASP, SQUARE, SREP) with the proposed method demonstrates that the proposed method performs better than existing SRE methods and enables experts to uncover a broader spectrum of security requirements specific to CPS. The proposed method is also validated using a case study of a healthcare system, and the results are promising. The proposed model provides substantial advantages to both practitioners and researchers, assisting them in identifying the security requirements for CPS in Industry 4.0.
-
Amal Alshahrani;Jenan Mustafa;Manar Almatrafi;Layan Albaqami;Raneem Aljabri;Shahad Almuntashri 53
Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to adapt to society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease through medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification. CNNs can recognize patterns and objects in images, which makes them ideally suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object recognition algorithm, and the Visual Geometry Group network (VGG16), a deep convolutional neural network primarily used for image classification. Unlike previous research that relied on a plain CNN, we compare our results across these modern models. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLOv5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLOv8, used for classification, reached 84% overall accuracy at 100 epochs. YOLOv9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% accuracy in training after 25 epochs but only 78% accuracy in testing. Hence, the best model overall is YOLOv9, with the highest overall accuracy of 86.1%. -
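A minimal transfer-learning sketch of a VGG16 classifier in Keras; the directory layout, image size, and the assumption of four dementia classes are illustrative rather than the paper's exact configuration.

import tensorflow as tf

# Placeholder directory of MRI slices organised as one sub-folder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri/train", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                                   # reuse frozen convolutional features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),      # assumed four severity classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=25)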
In recent times, cyber attackers can use Artificial Intelligence (AI) to boost the sophistication and scope of their attacks. On the defense side, AI is used to enhance defense plans and to boost the robustness, flexibility, and efficiency of defense systems, which means adapting to environmental changes to reduce impacts. With the continued development of information and communication technologies, various exploits threaten cyber security, and these exploits are changing rapidly. Cyber criminals use new, sophisticated tactics to increase their attack speed and scale. Consequently, there is a need for more flexible, adaptable, and robust cyber defense systems that can identify a wide range of threats in real time. In recent years, the adoption of AI approaches has increased and now plays a vital role in the detection and prevention of cyber threats. In this paper, an Ensemble Deep Restricted Boltzmann Machine (EDRBM) is developed for the classification of cybersecurity threats in a large-scale network environment. The EDRBM acts as a classification model that identifies malicious flowsets in the large-scale network. Simulations are conducted to test the efficacy of the proposed EDRBM under various malware attacks. The simulation results show that the proposed method achieves a higher classification rate for malicious flowsets than other methods.
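The paper's EDRBM architecture is not specified in the abstract, but the general pattern of an ensemble of stacked Restricted Boltzmann Machines feeding a classifier can be sketched with scikit-learn as below; the synthetic features and labels merely stand in for real flowset data.

import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((600, 20))                 # placeholder flowset features
y = rng.integers(0, 2, 600)               # 1 = malicious flowset, 0 = benign

def rbm_member(seed):
    """One ensemble member: scaled inputs -> two stacked RBM layers -> logistic classifier."""
    return Pipeline([
        ("scale", MinMaxScaler()),
        ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=seed)),
        ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=seed)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

ensemble = VotingClassifier([(f"rbm_{s}", rbm_member(s)) for s in range(5)], voting="hard")
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))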
-
Latifah Khalid Alabdulwahhab;Shaik Shakeel Ahamad 73
The COVID-19 pandemic outbreak increased the use of the Internet of Medical Things (IoMT), but existing IoMT solutions are not free from attacks. This paper proposes a secure and resilient framework for IoMT that computes risk using Risk Impact Parameters (RIP); risk is also calculated based on the threat events in the IoMT. A UICC (Universal Integrated Circuit Card) and a TPM (Trusted Platform Module) are used to ensure security in the IoMT. The PILAR Risk Management Tool is used to perform qualitative and quantitative risk analysis; it is designed to support the risk management process over long periods, providing incremental analysis as the safeguards improve. -
The information retrieval domain deals with the retrieval of unstructured data such as text documents. Searching documents is a main component of a modern information retrieval system. Locality Sensitive Hashing (LSH) is one of the most popular methods for searching documents in a high-dimensional space. The main benefit of LSH is its theoretical guarantee of query accuracy in a multi-dimensional space. LSH can be further enhanced by refining its steps. In this paper, a new Dynamic Locality Sensitive Hashing (DLSH) algorithm is proposed as an improved version of the LSH algorithm. It relies on hierarchical selection of the LSH parameters (number of bands, number of shingles, and number of permutation lists) based on the similarity achieved by the algorithm, in order to optimize search accuracy. The technique was applied to several tampered file structures, and its performance was evaluated. In some circumstances, the matching accuracy of DLSH exceeds 95% when the optimal values are selected for the number of bands, the number of shingles, and the number of permutation lists. This result makes the DLSH algorithm suitable for many critical applications that depend on accurate searching, such as forensics technology.
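The underlying LSH machinery that DLSH tunes — shingling, MinHash signatures, and banding — can be sketched as below; the hierarchical parameter-selection loop that constitutes DLSH itself is not reproduced, and the parameter values shown are arbitrary defaults.

import random

def shingle(text, k=5):
    """Set of k-character shingles of a document."""
    return {text[i:i + k] for i in range(max(len(text) - k + 1, 1))}

def minhash(shingles, num_perm=100, seed=0):
    """MinHash signature using num_perm random hash permutations."""
    rng = random.Random(seed)
    masks = [rng.getrandbits(64) for _ in range(num_perm)]
    return [min(hash(s) ^ m for s in shingles) for m in masks]

def band_keys(signature, bands=20):
    """Split the signature into bands; equal band keys mark candidate pairs."""
    rows = len(signature) // bands
    return {(b, tuple(signature[b * rows:(b + 1) * rows])) for b in range(bands)}

def candidate_match(doc_a, doc_b, k=5, num_perm=100, bands=20):
    sig_a = minhash(shingle(doc_a, k), num_perm)
    sig_b = minhash(shingle(doc_b, k), num_perm)
    return bool(band_keys(sig_a, bands) & band_keys(sig_b, bands))

print(candidate_match("the quick brown fox", "the quick brown fax"))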
-
Olha Byriuk;Tetiana Stechenko;Nataliya Andronik;Oksana Matsnieva;Larysa Shevtsova 89
The main purpose of the study is to determine the main elements of the use of digital technologies for learning a foreign language in educational institutions. The era of digital technologies marks a transition from the traditional format of working with information to a digital one, and digital technologies have spread rapidly and pervasively. In recent years, all spheres of human life have been affected by digital technologies. The educational sector therefore faces a difficult task: to move to a new level of education in which digital technologies are actively used, allowing learners to work conveniently and quickly in the information field for more effective learning and development. The study is limited in that the practical activities involved in using digital technologies in the system of foreign-language preparation were not taken into account. -
In distributed computing, searchable encryption over auditable data is an active research field. However, most existing systems for encrypted search and auditing over outsourced cloud data ignore personalized search goals. Cloud storage access control is essential for the security of stored information, where data protection is enforced only on the encrypted content. This is less secure because an intruder may attempt to extract the encrypted records or information. To resolve this issue, we implement Cipher Block Chaining (CBC), in which each plaintext block is XORed with the previously produced ciphertext block. We propose a novel heterogeneous framework to address the single-point performance bottleneck and provide a more efficient access control scheme with an auditing mechanism. In our scheme, a Central Authority (CA) is introduced to generate secret keys for legitimacy-verified users. Unlike other multi-authority access control schemes, each authority in our scheme manages the entire attribute set independently. Keywords: Cloud storage, Access control, Auditing, CBC.
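A minimal AES-CBC sketch with Python's cryptography package, showing the XOR chaining of each plaintext block with the previous ciphertext block that the scheme relies on; key distribution by the Central Authority and the auditing layer are out of scope here.

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                       # AES-256 key (in the scheme, issued per verified user)

def cbc_encrypt(plaintext: bytes) -> bytes:
    iv = os.urandom(16)                    # fresh IV chained into the first block
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + enc.update(padded) + enc.finalize()

def cbc_decrypt(blob: bytes) -> bytes:
    iv, body = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    padded = dec.update(body) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

assert cbc_decrypt(cbc_encrypt(b"outsourced record")) == b"outsourced record"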
-
In a rapidly developing world, the idea of conventional development needs to evolve [1]. When applied to the modern workplace, traditional methods do as much harm as good. Unfortunately, a lack of adaptability in traditional methods has led to a rigid work structure that is not truly compatible with today's business. The Agile methodology is therefore a more suitable practice, based on developing software at a faster pace while still maintaining efficiency. Agile software development methods are studied in this paper. According to the study results, an Agile software development team needs strong customer involvement; good agile project management processes; a product owner who maximizes the business value delivered by the team, prioritizes work, and engages stakeholders; good agile engineering techniques or practices; and good technologies and development tools [2]. This research has implications for positive social change, since organizations that understand the critical elements may be able to improve project management practices and cost benefits, leading to higher effectiveness, productivity, and efficiency, thus benefiting management, employees, and customers. This survey paper covers various Agile methodologies and their analysis.
-
In general, a network-based intrusion detection system is designed to detect malicious behavior directed at a network or its resources. The key goal of this paper is to examine network data and identify whether it is normal traffic or anomalous traffic, specifically for accounting information systems. In today's world, there are a variety of principles for detecting various forms of network-based intrusion. In this paper, we use supervised machine learning techniques. Classification models are used to train and validate the data: the system is trained on a training dataset and then used to detect intrusions in the testing dataset. With our proposed method, we detect whether network data is normal or anomalous, which allows unauthorized activity on the network and the systems under that network to be avoided. Decision Tree and K-Nearest Neighbor classifiers are applied in the proposed model to classify network traffic behavior as normal or abnormal. In addition, Logistic Regression and Support Vector Classification algorithms are used in our model to support the proposed concepts. Furthermore, a feature selection method is used to extract valuable information from the dataset and enhance the efficiency of the proposed approach: a Random Forest machine learning algorithm assists the system in identifying the crucial features and focusing on them rather than on all of the features. The experimental findings revealed that the suggested method for network intrusion detection has a negligible false alarm rate, with the accuracy of the result expected to be between 95% and 100%. As a result of the high precision rate, this concept can be used to detect network data intrusion and prevent vulnerabilities on the network.
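A minimal scikit-learn sketch of the pipeline described above — Random Forest importances for feature selection, then Decision Tree and K-Nearest Neighbor classifiers; the file name "flows.csv" and its "label" column are assumed placeholders for the actual traffic dataset.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("flows.csv")                          # placeholder: numeric features + "label"
X, y = df.drop(columns=["label"]), df["label"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

# Keep only the features the Random Forest considers important.
selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=42)).fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

for name, clf in [("DecisionTree", DecisionTreeClassifier(random_state=42)),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_tr_s, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te_s)))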
-
Mobile ad hoc networks are untrustworthy and susceptible to intrusion because of their wireless communication approach; information can therefore be stolen very easily simply by introducing attacker nodes into the system. The straight route length is calculated with the help of the hop count metric, and routing protocols are designed for this purpose. Among the many possible attacks, the wormhole attack is considered one of the most hazardous. This intrusion is launched with the help of a pair of attacker nodes, which create a channel by placing some sensor nodes between transmitter and receiver. The existing system detects wormhole intrusions only in the absence of intermediate sensor nodes between source and destination. That mechanism is suitable for scenarios where the route distance between transmitter and receiver is only two hops, but not for scenarios where multiple hops lie between transmitter and receiver. In the proposed study, a new technique is implemented for the recognition and isolation of the attacker sensor nodes that trigger wormhole intrusions in the network. The proposed scheme is implemented in NS2, and the simulation results show that it performs better than existing approaches.
-
S. Sumahasan;Udaya Kumar Addanki;Navya Irlapati;Amulya Jonnala 129
Object detection is an emerging technology in the field of computer vision and image processing that deals with detecting objects of a particular class in digital images. It is considered one of the most complicated and challenging tasks in computer vision. Earlier, several machine learning-based approaches such as SIFT (Scale-Invariant Feature Transform) and HOG (Histogram of Oriented Gradients) were widely used to classify objects in an image; these approaches use a Support Vector Machine for classification. Their biggest challenges are that they are computationally intensive for real-time applications and do not work well with massive datasets. To overcome these challenges, we implement a deep learning-based approach, the Convolutional Neural Network (CNN), in this paper. The proposed approach provides accurate results, detecting objects in an image and highlighting each object's area with a bounding box along with its accuracy. -
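The paper's own CNN is not reproduced here; as an illustration of CNN-based detection producing bounding boxes with confidence scores, a pretrained torchvision detector can serve as a stand-in (the image path is a placeholder; a recent torchvision is assumed).

import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
img = convert_image_dtype(read_image("scene.jpg"), torch.float)   # placeholder image

with torch.no_grad():
    out = model([img])[0]                          # boxes, labels, scores for one image

for box, label, score in zip(out["boxes"], out["labels"], out["scores"]):
    if score > 0.5:                                # keep confident detections only
        print(label.item(), round(score.item(), 2), box.tolist())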
Mansoor Alghamdi;Sami Mnasri;Malek Alrashidi;Wajih Abdallah;Thierry Val 135
Urban public health monitoring in smart cities focuses on the control of conditions and health challenges in urban environments. Considering the rapid spread of diseases and pandemics, it is important for health authorities to trace people carrying a virus. In smart cities, this tracing must be interoperable and intelligent, especially in indoor spaces characterized by small distances between people. Therefore, to fight pandemics, it is necessary to start with already-existing digital Internet of Things equipment, such as connected objects and smartphones. In this study, the developed system is employed to provide a social IoT network and suggest a strategy that allows reliable traceability without threatening the privacy of users. This IoT-based system makes it possible to respect social distance between persons sharing public services in smart cities without relying on smartphone applications or severe confinement. It also permits a return to normal life in case of a viral pandemic and ensures the much-desired balance between economy and health. The present study analyses previously proposed social distancing systems and then, unlike these studies, suggests an intelligent and distributed IoT-based strategy for positioning students. Two scenarios of static and dynamic optimization-based placement of Bluetooth Low Energy devices are proposed, and an experimental study shows the contribution and complementarity of the introduced contact tracing strategy with smartphone applications. -
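A common way such BLE-based systems estimate inter-person distance is the log-distance path-loss model sketched below; the reference transmit power and path-loss exponent are assumptions that must be calibrated per device and environment, and the paper's own positioning method may differ.

def estimate_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def social_distance_violated(rssi_dbm, threshold_m=1.5):
    """Flag a contact whenever the estimated distance drops below the threshold."""
    return estimate_distance(rssi_dbm) < threshold_m

print(estimate_distance(-70))          # roughly 3.5 m with the assumed parameters
print(social_distance_violated(-55))   # True: a stronger signal implies a closer contact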
This research aims to highlight the determinants of the intention to purchase Halal foods. In the conceptual framework, we examine the different antecedents that might affect the consumer's intention to choose Halal foods. For this, we consider the role of constructs such as attitude towards Halal certification, consumption habits, and the subjective norm. Moreover, we study the mediating role of religiosity in the purchase decision process of the Muslim consumer. The empirical study is implemented in the Saudi Halal foods market. We interviewed 200 individuals in the exploratory study to purify the measurements of the selected constructs that may contribute to explaining the intention to consume Halal. The confirmatory phase requires a second sample of 400 interviewees. The data analysis software used to purify the measurements, test the research hypotheses, and validate the developed model was SPSS and AMOS. At the end of this research, we hope to characterize and define the most important determinants of Muslim consumers' purchase intention for Halal foods. We then offer the necessary recommendations to academics interested in this business field and to practitioners who seek to improve their offerings and their transaction turnover within this emerging consumption sector.
-
Svitlana Huralna;Nataliia Demianko;Nataliia Sulaieva;Viktoriia Irkliienko;Tetiana Horokhivska 165
The processes of society's informatization and digitalization necessitate the widespread use of new pedagogical technologies, which allow comprehensive disclosure of the didactic functions of new methods of educational activity and the realization of learners' creative potential. The use of information and computer multimedia technologies in teaching music art is especially relevant amid the intensifying development of interactive technologies, the transition to mixed forms of learning, and a period of socio-economic and socio-political upheaval. The study aims to substantiate the theoretical and applied principles of the analysis of multimedia technologies for learning musical art in modern conditions and to assess the status and trends in their use in educational activities. The study uses general scientific and special methods of economic analysis, in particular analysis and synthesis, analogy and comparison, generalization and systematization, and graphic methods. Regarding multimedia technologies for teaching musical art in current conditions, it was found that they contribute to the development of the learner's creative and cognitive activity, have a positive impact on learning the material, and diversify the educational process. Multimedia technologies such as presentations, programs for watching video, listening to audio, music and karaoke singing, electronic encyclopedias, and Internet resources are proven to be the most used in music education. They have several qualitative and quantitative advantages, manifested in the possibilities of audio-visual presentation of educational material and significantly higher information density. It is suggested to strengthen the use of computer programs such as Microsoft Word, Ahead Nero, Finale, Adobe Audition, Sound Forge, and Microsoft PowerPoint in musical art classes. -
Vitalii Nahornyi;Alona Tiurina;Olha Ruban;Tetiana Khletytska;Vitalii Litvinov 172
Since the beginning of 2015, corporate social responsibility (CSR) models have been changing in connection with the trend towards joint value creation in corporate activities and consideration of stakeholders' interests. The purpose of this paper is to empirically study the current practice of social responsibility of transnational corporations (TNCs). The research methodology combines qualitative analysis and case studies of agricultural holdings in emerging markets within the framework of resource theory, institutional theory, and stakeholder theory. The results show that the practice of CSR is integrated into the sustainable development strategies of TNCs, which determine the methods, techniques, and forms of communication, as well as the areas of stakeholders' responsibility. The internal practice of CSR is aimed at developing norms and standards of moral behaviour with stakeholders in order to maximize economic and social goals. Economic goals are focused not only on making a profit but also on minimizing costs arising from the potential risks of corruption, fraud, and conflicts of interest. The corporate social responsibility system of modern TNCs is clearly regulated by internal documents that define the list of interested parties and stakeholders and their areas of responsibility, greatly simplifying the processes of cooperation and responsibility. As a result, corporations form their own internal institutional environment. Ethical norms help avoid the risks of opportunistic behaviour of personnel, conflicts of interest, and cases of bribery, corruption, and fraud. The theoretical value of the research lies in supplementing CSR theory with a complex, systematic approach that integrates resource theory, institutional theory, and stakeholder theory in the development of TNCs' sustainable development strategies, corporate governance practice, and social responsibility. -
A FANET (Flying Ad-hoc Network) is a self-adjusting wireless network that enables easy-to-deploy, inexpensive, flexible flying nodes such as UAVs to communicate among themselves in the absence of fixed network infrastructure. Over the past few decades, FANETs have emerged as networks with a huge range of next-generation applications. FANETs are a subset of MANETs (Mobile Ad-hoc Networks), and UAV networks are known as FANETs. Routing enables the flying nodes to establish routes to radio access infrastructure and to coordinate and collaborate among themselves. This paper presents a review of existing proposed communication architectures and routing protocols for FANETs. In addition, open issues and challenges are summarized in tabular form with proposed solutions. Our goal is to give researchers a general idea of the different topics to be addressed in the future.
-
JarBot: Automated Java Libraries Suggestion in JAR Archives Format for a given Software Architecture
Software reuse enables rapid software development and improves software quality. Most open-source Java components/libraries are available only in the Java Archive (JAR) file format. When a software design enters the development process, the developer needs to select the necessary JAR files manually by analyzing the given software architecture and the related JAR files. This paper proposes an automated approach, JarBot, to suggest all the necessary JAR files for a given software architecture during the development process. Related JAR files are downloaded from the internet based on information extracted from the given software architecture (class diagram). Class names, method names, and attribute names are then extracted from the downloaded JAR files and matched with the information extracted from the software architecture to identify the most relevant JAR files. For evaluation, five software designs were developed for five well-completed software projects from GitHub. The proposed system suggested more than 95% of the expected JAR files for the given five software designs, indicating that it suggests almost all the necessary JAR files.
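A minimal sketch of the matching step, assuming the design's class names have already been extracted from the class diagram; it ranks candidate JARs by the overlap between their .class entries and the design's class names (the method/attribute matching from the paper is omitted, and the names below are hypothetical).

import zipfile
from pathlib import Path

def jar_class_names(jar_path):
    """Simple class names declared in a JAR, taken from its .class entries."""
    with zipfile.ZipFile(jar_path) as jar:
        return {Path(name).stem for name in jar.namelist()
                if name.endswith(".class") and "$" not in name}

def rank_jars(design_classes, jar_dir):
    """Rank downloaded JARs by how many design class names they cover."""
    scores = {}
    for jar in Path(jar_dir).glob("*.jar"):
        overlap = design_classes & jar_class_names(jar)
        scores[jar.name] = len(overlap) / max(len(design_classes), 1)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical usage: class names parsed from the class diagram, JARs in ./downloads
print(rank_jars({"HttpClient", "JsonParser", "Logger"}, "downloads"))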
-
Communication security between IoT devices is a significant concern, and blockchain has recently made a difference by mitigating this problem. In the blockchain concept, the majority or even all network nodes verify the legitimacy and accuracy of exchanged data before accepting and recording it, whether this data relates to financial transactions, sensor measurements, or authentication messages. In assessing the legitimacy of exchanged data, nodes must reach consensus to perform a special operation, so the opportunity to enter and record fraudulent transactions or to interact maliciously with the system is fundamentally reduced. To manage the sharing of and access to IoT device data in a distributed manner, a new blockchain-based authentication protocol has been proposed, and it is claimed that this protocol satisfies user privacy preservation and security. This paper highlights recent approaches by other researchers to secure Internet of Things environments using blockchain. These approaches are studied and compared with each other to present their features and disadvantages.
-
The increasing number of botnet attacks incorporating new evasion techniques is making it infeasible to completely secure complex computer network systems. Since botnet infections are likely to happen, timely detection of and response to these infections help stop attackers before any damage is done. Current practice in traditional IP networks requires manual intervention to respond to any detected malicious infection; this manual response process is prone to delay and increases the risk of damage. To automate this manual process, this paper proposes to automatically select relevant countermeasures for a detected botnet infection. The proposed approach uses the concept of flow traces to detect botnet behavior patterns from current and historical network activity. It applies a multiclass machine learning approach to detect and classify botnet activity into IRC, HTTP, and P2P botnets. This classification is used to calculate a risk score for the detected botnet infection, and relevant countermeasures are then selected from an available pool based on that risk score.
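A minimal sketch of the classify-score-respond chain, with synthetic flow features, an assumed per-family base risk, and a hypothetical countermeasure pool; the paper's actual feature set, risk formula, and countermeasures are not specified here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((400, 6))                                    # placeholder flow features
y_train = rng.choice(["benign", "IRC", "HTTP", "P2P"], size=400)  # placeholder labels

RISK_WEIGHT = {"IRC": 0.6, "HTTP": 0.7, "P2P": 0.9}               # assumed per-family base risk
COUNTERMEASURES = [(0.8, "isolate host and block egress"),        # hypothetical pool
                   (0.5, "rate-limit traffic and raise an alert"),
                   (0.0, "continue monitoring")]

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def respond(flow):
    """Classify the flow, derive a risk score, and pick the matching countermeasure."""
    family = clf.predict([flow])[0]
    if family == "benign":
        return "no action"
    risk = RISK_WEIGHT[family] * clf.predict_proba([flow]).max()
    return next(action for threshold, action in COUNTERMEASURES if risk >= threshold)

print(respond(rng.random(6)))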
-
Feature selection has become a major focus of research, particularly in bioinformatics, where it has numerous applications. Deep learning is a useful tool for selecting features; however, not all algorithms are on an equal footing when it comes to selecting relevant features. Indeed, numerous techniques have been proposed to select features using deep learning. Thanks to deep learning, neural networks have enjoyed a huge revival in the past few years. However, neural networks are black-box models, and few efforts have been made to examine their underlying process. In this work, a new algorithm for performing feature selection with deep learning networks is introduced. To evaluate our results, we create regression and classification problems that allow us to compare each algorithm on several fronts: performance, computation time, and constraints. The results obtained are truly encouraging, since we achieve our objective by outperforming random forest performance in each case. The results show that the proposed method exhibits better performance than traditional methods.
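The paper's algorithm itself is not given in the abstract; as a stand-in, the sketch below scores features for a neural network with permutation importance and keeps the top-ranked ones, using a synthetic regression problem of the kind described.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic regression problem: 20 features, only 5 of which are informative.
X, y = make_regression(n_samples=500, n_features=20, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

net = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)
imp = permutation_importance(net, X_te, y_te, n_repeats=10, random_state=0)

selected = np.argsort(imp.importances_mean)[::-1][:5]    # keep the 5 highest-scoring features
print("selected feature indices:", selected)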
-
Using Support Vector Machine to Predict Political Affiliations on Twitter: Machine Learning approach
Muhammad Javed;Kiran Hanif;Arslan Ali Raza;Syeda Maryum Batool;Syed Muhammad Ali Haider 217
The current study aimed to evaluate the effectiveness of using a Support Vector Machine (SVM) for political affiliation classification. The system was designed to analyze political tweets collected from Twitter and classify them as positive, negative, or neutral. The performance analysis of the SVM classifier was based on metrics such as accuracy, precision, recall, and F1-score. The results showed that the classifier had high accuracy and F1-score, indicating its effectiveness in classifying political tweets. The implementation of SVM in this study is based on the principle of Structural Risk Minimization (SRM), which seeks the maximum-margin hyperplane between two classes of data. The results indicate that SVM can be a reliable classification approach for the analysis of political affiliations, capable of accurately categorizing both linear and non-linear data using linear, polynomial, or radial basis function kernels. This paper provides a comprehensive overview of using SVM for political affiliation analysis and highlights the importance of accurate classification methods in the field of political analysis.
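A minimal sketch of the tweet-classification pipeline with scikit-learn — TF-IDF features feeding an SVM; the toy tweets and labels below are purely illustrative, not the study's corpus.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Illustrative labelled tweets; the study's data were collected from Twitter.
tweets = ["great policy speech tonight", "terrible decision by the party",
          "the vote takes place on tuesday", "love this candidate's plan",
          "worst manifesto I have ever read", "polling stations open at 8am"]
labels = ["positive", "negative", "neutral", "positive", "negative", "neutral"]

model = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
                  ("svm", SVC(kernel="linear", C=1.0))])   # maximum-margin separator
model.fit(tweets, labels)

print(model.predict(["excited to vote for this candidate"]))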