Volume 23 Issue 8
-
The artificial intelligence of a robot remains weaker than the digital intelligence of a person, who is able to train, self-realize, and develop competences as well as creative, professional, and behavioral skills. A new methodology is proposed for managing robots inside mines using an electronic system designed to drive robots toward injured people in seas, mines, or wells who cannot be reached by human rescuers. This paper also explains the concept of managing and remotely controlling the process of searching for and helping the injured. The user controls the robot through an application that receives all the reports the robot sends about the injured person. The robot's tasks are to take a blood sample from the injured person, examine it, measure the percentage of oxygen underground, and send the readings to the user, who then directs the robot to pump a specific percentage of oxygen to the injured person. The user can also communicate with the patient and assess his condition through the camera mounted on the robot, which is equipped with headphones for communicating with the injured; the user can direct the robot's camera and take X-ray images of the injured person.
-
Speech can actively elicit feelings and attitudes through the words it carries. It is important for researchers to identify the emotional content contained in speech signals as well as the type of emotion conveyed by a given utterance. In this study, we examined an emotion recognition system using an Arabic database, specifically in the Saudi dialect, collected from a YouTube channel called Telfaz11. The four emotions examined were anger, happiness, sadness, and neutral. In our experiments, we extracted features from the audio signals, such as the Mel Frequency Cepstral Coefficients (MFCC) and the Zero-Crossing Rate (ZCR), and then classified emotions using several classification algorithms, including machine learning algorithms (Support Vector Machine (SVM) and K-Nearest Neighbor (KNN)) and deep learning algorithms (Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM)). Our experiments showed that the MFCC feature extraction method combined with the CNN model obtained the best accuracy at 95%, demonstrating the effectiveness of this classification system in recognizing emotions in spoken Arabic.
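A minimal sketch (assuming librosa and scikit-learn are available) of the MFCC/ZCR feature extraction with an SVM baseline as described above; the clip names and labels below are placeholders rather than the Telfaz11 data, and the CNN/LSTM models would consume the same features.

```python
# Hedged sketch of the MFCC + ZCR front end with an SVM baseline (not the authors' code).
# File names and labels are hypothetical placeholders, not the Telfaz11 recordings.
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def clip_features(path, sr=16000, n_mfcc=13):
    """Return a fixed-length vector: time-averaged MFCCs plus mean ZCR."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # shape (n_mfcc, frames)
    zcr = librosa.feature.zero_crossing_rate(y)              # shape (1, frames)
    return np.hstack([mfcc.mean(axis=1), zcr.mean(axis=1)])

# Hypothetical clip list standing in for the labeled corpus.
clips = [("anger_001.wav", "anger"), ("happy_001.wav", "happiness"),
         ("sad_001.wav", "sadness"), ("neutral_001.wav", "neutral")]

X = np.array([clip_features(path) for path, _ in clips])
y = np.array([label for _, label in clips])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```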
-
The alarming global prevalence of Type 2 Diabetes Mellitus (T2DM) has catalyzed an urgent need for robust, early diagnostic methodologies. This study unveils a pioneering approach to predicting T2DM, employing the Extreme Gradient Boosting (XGBoost) algorithm, renowned for its predictive accuracy and computational efficiency. The investigation harnesses a meticulously curated dataset of 4303 samples, extracted from a comprehensive Chinese research study and scrupulously aligned with the World Health Organization's indicators and standards. The dataset encapsulates a multifaceted spectrum of clinical, demographic, and lifestyle attributes. Through hyperparameter optimization, the XGBoost model achieved its best score with a distinctive combination of parameters: a learning rate of 0.1, a maximum depth of 3, 150 estimators, and specific column-subsampling (colsample) strategies. The model's validation accuracy of 0.957, coupled with a sensitivity of 0.9898 and a specificity of 0.8897, underlines its robustness in classifying T2DM. A detailed analysis of the confusion matrix further substantiated the model's diagnostic performance, with an F1-score of 0.9308 illustrating its balanced handling of true positive and true negative classifications. The precision and recall metrics provided nuanced insights into the model's ability to minimize false predictions, thereby enhancing its clinical applicability. The research findings not only underline the remarkable efficacy of XGBoost in T2DM prediction but also contribute to the burgeoning field of machine learning applications in personalized healthcare. By elucidating a novel paradigm that accentuates the synergistic integration of multifaceted clinical parameters, this study fosters a promising avenue for precise early detection, risk stratification, and patient-centric intervention in diabetes care, and it invites further exploration and innovation in leveraging advanced analytical techniques for transformative impacts on predictive diagnostics and chronic disease management.
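A hedged sketch of the reported XGBoost configuration, assuming the xgboost and scikit-learn packages; synthetic data stands in for the 4303-sample clinical dataset, and colsample_bytree=0.8 is only an illustrative value for the unspecified colsample strategy.

```python
# Sketch of an XGBoost classifier with the hyperparameters reported in the abstract
# (learning rate 0.1, max depth 3, 150 estimators). make_classification is a stand-in
# for the clinical dataset; colsample_bytree=0.8 is an assumed example value.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, f1_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=4303, n_features=20, weights=[0.7, 0.3],
                           random_state=42)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y,
                                            random_state=42)

model = XGBClassifier(learning_rate=0.1, max_depth=3, n_estimators=150,
                      colsample_bytree=0.8)
model.fit(X_tr, y_tr)

pred = model.predict(X_val)
tn, fp, fn, tp = confusion_matrix(y_val, pred).ravel()
print("accuracy:   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity:", tp / (tp + fn))   # recall on the positive (diabetic) class
print("specificity:", tn / (tn + fp))
print("F1-score:   ", f1_score(y_val, pred))
```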
-
Hajj is a fundamental pillar of Islam that all Muslims must perform at least once in their lives, whereas Umrah can be performed several times a year, depending on people's abilities. Every year, Muslims from all over the world travel to Saudi Arabia to perform Hajj. Hajj and Umrah pilgrims face multiple issues due to the large volume of people gathered at the same time and place during the event; therefore, a system is needed to facilitate the smooth execution of Hajj and Umrah procedures. Multiple devices are already installed in Makkah, but it would be better to propose data architectures supported by machine learning approaches. The proposed system analyzes the services provided to the pilgrims with respect to gender, location, and foreign pilgrims, and it addresses the research problem of analyzing the Hajj pilgrim dataset as effectively as possible. In addition, visualizations of the proposed method show the system's performance using the data architectures. Machine learning algorithms classify whether male pilgrims are more prevalent than female pilgrims. Several algorithms were applied to classify the data, including logistic regression, Naive Bayes, K-nearest neighbors, decision trees, random forests, and XGBoost. The decision tree achieved an accuracy of 62.83%, whereas K-nearest neighbors achieved 62.86%; the other classifiers had lower accuracy. The open-source dataset was analyzed using different data architectures to store the data, and machine learning approaches were then used to classify the dataset.
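The classifier comparison can be sketched as follows (scikit-learn and xgboost assumed); the synthetic features are placeholders for the open-source pilgrim dataset.

```python
# Sketch of the classifier comparison described above; the synthetic data and feature
# count are placeholders for the real pilgrim records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "K-Nearest Neighbors": KNeighborsClassifier(n_neighbors=5),
    "Decision Tree": DecisionTreeClassifier(random_state=7),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=7),
    "XGBoost": XGBClassifier(n_estimators=100, random_state=7),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
```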
-
The global surge in depression and anxiety, intensified by challenges such as cost and stigma, emphasizes the pressing need for accessible, evidence-based digital solutions. The research centers on the creation of a mobile application specifically designed to address mental health challenges. By integrating cognitive behavioral therapy techniques and features like appointment bookings and mindfulness feedback tools, the app is positioned to improve user outcomes. Utilizing platforms like React Native and React, combined with NestJS for enhanced backend security, the application adheres to the rigorous standards required for mental health interventions. Collaborative efforts with experts, notably the counseling unit of IIUM, ensure the app's alignment with contemporary best practices and research. Preliminary findings indicate a promising tool with the potential to address the global mental health treatment disparity.
-
Speech emotion recognition has become an active research theme in speech processing and in applications based on human-machine interaction. In this work, our system is a two-stage approach comprising feature extraction and a classification engine. First, two feature sets are investigated: the first extracts only 13 Mel-Frequency Cepstral Coefficients (MFCC) from the emotional speech samples, while the second fuses the MFCC features with three further features: Zero Crossing Rate (ZCR), Teager Energy Operator (TEO), and Harmonic-to-Noise Rate (HNR). Second, we use two classification techniques, Support Vector Machines (SVM) and k-Nearest Neighbor (k-NN), and compare their performance. In addition, we investigate the relevance of recent advances in machine learning, including deep kernel learning. A large set of experiments is conducted on the Surrey Audio-Visual Expressed Emotion (SAVEE) dataset for seven emotions. The results of our experiments show good accuracy compared with previous studies.
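A sketch of the feature-fusion front end under stated assumptions (librosa available, SAVEE-style file names invented): time-averaged MFCCs are concatenated with the mean ZCR and a discrete Teager Energy Operator estimate; an HNR term would be appended in the same way and is omitted here for brevity.

```python
# Illustrative feature fusion (not the authors' implementation): 13 time-averaged MFCCs
# fused with mean ZCR and a discrete TEO estimate, then fed to SVM and k-NN classifiers.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def teager_energy(y):
    # Discrete TEO: psi[n] = y[n]^2 - y[n-1] * y[n+1]
    return y[1:-1] ** 2 - y[:-2] * y[2:]

def fused_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)   # 13 values
    zcr = librosa.feature.zero_crossing_rate(y).mean()                # 1 value
    teo = teager_energy(y).mean()                                     # 1 value
    return np.hstack([mfcc, zcr, teo])

# Hypothetical file list; extend to the full seven-emotion SAVEE corpus.
files = [("DC_a01.wav", "anger"), ("DC_h01.wav", "happiness"),
         ("DC_sa01.wav", "sadness"), ("DC_n01.wav", "neutral")]
X = np.array([fused_features(p) for p, _ in files])
y = np.array([lab for _, lab in files])

svm = SVC(kernel="rbf").fit(X, y)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("SVM train acc:", svm.score(X, y), "| k-NN train acc:", knn.score(X, y))
```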
-
Asad Amin;Muhammad Nauman Durrani;Nadeem Kafi;Fahad Samad;Abdul Aziz 49
There has been a rapid increase in the creation and alteration of new malware samples, which poses a huge financial risk for many organizations. There is strong demand for improvement in the classification and detection mechanisms available today: older strategies, such as classification using machine learning algorithms over hand-crafted features, proved useful but do not perform well in scalable, automatic feature extraction scenarios. To overcome this, a mechanism is needed to analyze malware automatically based on an automatic feature extraction process. For this purpose, dynamic analysis of real malware executable files was performed to extract useful features such as API call sequences and opcode sequences. Different hashing techniques were analyzed to convert these features into an image-representable form, which allows more advanced classification approaches to classify huge numbers of images using deep learning. Deep learning algorithms such as convolutional neural networks enable the classification of malware once it has been converted into images. When these grayscale images are fed into the CNN, the approach performs comparatively well under dynamic changes in malware code, since only a few pixels of the image change. In this work, we used the VGG-16 architecture of the CNN for experimentation.
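A conceptual sketch of the byte-to-grayscale-image pipeline with a VGG-16 classifier, assuming TensorFlow/Keras; the 64x64 image size, two-class head, and file names are illustrative choices rather than the paper's exact setup.

```python
# Conceptual sketch: raw bytes (or hashed API/opcode features) are reshaped into a
# fixed-size grayscale image and fed to a VGG-16 backbone. Paths and sizes are assumed.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

IMG = 64  # side length of the grayscale "malware image"

def bytes_to_image(path, side=IMG):
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    data = np.resize(data, side * side)            # pad/truncate to a fixed length
    img = data.reshape(side, side).astype("float32") / 255.0
    return np.repeat(img[..., None], 3, axis=-1)   # replicate channel for VGG-16 input

base = VGG16(weights=None, include_top=False, input_shape=(IMG, IMG, 3))
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(2, activation="softmax"),         # e.g., benign vs. malicious
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical sample list: (file_path, class_index); replace with the real corpus.
samples = [("benign_0001.bin", 0), ("malware_0001.bin", 1)]
X = np.stack([bytes_to_image(p) for p, _ in samples])
y = np.array([c for _, c in samples])
model.fit(X, y, epochs=1, batch_size=8)
```
-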
Identity and access management in cloud computing is one of the leading significant issues that require various security countermeasures to preserve user privacy. An authentication mechanism is a leading solution for authenticating and verifying the identities of cloud users while they access cloud applications. Building a secure and flexible authentication mechanism on a cloud computing platform is challenging. Authentication techniques can be combined with other security techniques, such as intrusion detection systems, to maintain a verifiable layer of security. In this paper, we provide an interactive, flexible, and reliable multi-factor authentication mechanism that is primarily based on a proposed Authentication Method Selector (AMS) technique. The basic idea of AMS is to rely on the user's previous authentication information and user behavior, which can be combined with additional authentication methods according to the organization's requirements. In AMS, the administrator has the ability to add the appropriate authentication method based on the requirements of the organization. Based on these requirements, the administrator activates and initializes the authentication method that has been added to the authentication pool. An intrusion detection component has been added that uses the user's location and default web browser as features. The AMS and intrusion detection components provide a security enhancement that increases the accuracy and efficiency of cloud user identity verification.
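The AMS idea can be illustrated with the following hypothetical structure (all class and method names are invented for illustration): an authentication pool that the administrator registers and activates, plus a selector that escalates the required factors when the user's location or default browser deviates from the stored profile.

```python
# Illustrative structure only, not the paper's implementation: an admin-managed pool of
# authentication methods and a selector driven by simple behavior signals.
from dataclasses import dataclass, field

@dataclass
class AuthMethod:
    name: str
    verify: callable          # returns True/False given user-supplied evidence
    active: bool = False

@dataclass
class UserProfile:
    username: str
    usual_location: str
    default_browser: str
    history: list = field(default_factory=list)   # names of past successful methods

class AuthMethodSelector:
    def __init__(self):
        self.pool = {}

    def register(self, method: AuthMethod):        # admin adds a method to the pool
        self.pool[method.name] = method

    def activate(self, name: str):                 # admin activates it per policy
        self.pool[name].active = True

    def select(self, user: UserProfile, location: str, browser: str):
        """Require extra factors when behavior deviates from the stored profile."""
        required = ["password"]
        if location != user.usual_location or browser != user.default_browser:
            required.append("otp")                 # escalate on suspicious context
        return [self.pool[n] for n in required if n in self.pool and self.pool[n].active]

# Example wiring with dummy verifiers.
ams = AuthMethodSelector()
ams.register(AuthMethod("password", verify=lambda secret: secret == "expected-hash"))
ams.register(AuthMethod("otp", verify=lambda code: code == "123456"))
ams.activate("password"); ams.activate("otp")

alice = UserProfile("alice", usual_location="Riyadh", default_browser="Firefox")
print([m.name for m in ams.select(alice, location="Cairo", browser="Chrome")])
```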
-
The utilization of GPUs in general-purpose computing is currently on the rise due to increases in their programmability and in performance requirements. Tools such as NVIDIA's CUDA were designed to allow programmers to code algorithms in a C-like language for execution on the graphics processing unit (GPU). Unfortunately, many performance and correctness bugs occur in parallel programs, and CUDA tool support for finding them has not yet fully materialized. Using a dynamic analyzer to find performance and correctness bugs in CUDA programs facilitates the development of sophisticated applications, especially under modern computing requirements. Race-condition bugs affect program correctness, while shared-memory bank conflicts must be found to improve overall performance. The technique instruments the program so that the memory locations accessed by different threads become observable and can be checked for bugs in the program's code. The instrumented source code is run directly in CUDA's device emulation mode to report all detected errors to the user. This degree of automation helps programmers solve subtle bugs in highly complex programs or in programs that cannot be analyzed manually.
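For clarity, the two checks can be modeled in Python rather than CUDA (a conceptual sketch, not the analyzer itself): a shadow log of per-address accesses flags races between threads within a synchronization interval, and a bank index assuming 32 four-byte banks counts shared-memory bank conflicts for a warp's access pattern.

```python
# Conceptual model of the instrumentation's two checks (assumed 32 four-byte banks):
# races are flagged when two threads touch the same shared address between barriers and
# at least one access is a write; bank conflicts are counted from a warp's word indices.
from collections import defaultdict

NUM_BANKS, WORD_BYTES = 32, 4

class SharedMemoryChecker:
    def __init__(self):
        self.log = defaultdict(list)        # address -> [(thread_id, is_write)]

    def access(self, thread_id, address, is_write):
        for other_tid, other_write in self.log[address]:
            if other_tid != thread_id and (is_write or other_write):
                print(f"RACE: threads {other_tid} and {thread_id} on address {address}")
        self.log[address].append((thread_id, is_write))

    def barrier(self):                      # models __syncthreads(): ordering restored
        self.log.clear()

def bank_conflicts(warp_addresses):
    """Count conflicts when several lanes hit the same bank with different words."""
    per_bank = defaultdict(set)
    for addr in warp_addresses:
        per_bank[(addr // WORD_BYTES) % NUM_BANKS].add(addr // WORD_BYTES)
    return sum(len(words) - 1 for words in per_bank.values())

checker = SharedMemoryChecker()
checker.access(thread_id=0, address=128, is_write=True)
checker.access(thread_id=1, address=128, is_write=False)   # reported as a race
checker.barrier()
print("bank conflicts:", bank_conflicts([4 * lane * 2 for lane in range(32)]))  # stride-2
```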
-
Thi-Hau Nguyen;Ha-Nam Nguyen;Dang-Nhac Lu;Duc-Nhan Nguyen 85
The Ant Colony System (ACS) is a variant of the ant colony optimization algorithm that is well known for the Traveling Salesman Problem. This paper proposes a hybrid method based on a genetic algorithm (GA) and the ant colony system (ACS), called GACS, to solve the traffic routing problem. In GACS, we use the genetic algorithm to optimize the ACS parameters, aiming to attain the shortest trips and travel times through new functions that help the ants update the global and local pheromones. Our experiments are performed with the GACS framework, which is developed from VANETsim and adds the ability to load real maps from the OpenStreetMap project and to update traffic lights in real time. The obtained results show that our framework achieves higher performance than the A-Star and classical ACS algorithms in terms of the length of the best global tour and the trip time.
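A sketch of the GA layer only, under stated assumptions: the fitness function below is a stub standing in for an ACS/VANETsim run that returns the best tour length, and alpha, beta, rho, and q0 are the usual ACS parameters being tuned.

```python
# GA tuning of ACS parameters (sketch). simulate_acs_tour_length is a placeholder
# objective; in the paper's setting it would run ACS on the loaded map and return the
# resulting tour length or trip time.
import random

BOUNDS = {"alpha": (0.5, 3.0), "beta": (1.0, 5.0), "rho": (0.05, 0.5), "q0": (0.5, 0.99)}

def simulate_acs_tour_length(params):
    # Placeholder objective: replace with an actual ACS/VANETsim simulation run.
    return (params["alpha"] - 1.0) ** 2 + (params["beta"] - 2.5) ** 2 \
        + (params["rho"] - 0.1) ** 2 + (params["q0"] - 0.9) ** 2

def random_individual():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in BOUNDS}

def mutate(ind, rate=0.2):
    return {k: (random.uniform(*BOUNDS[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(pop_size=20, generations=30):
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=simulate_acs_tour_length)
        parents = population[: pop_size // 2]                 # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=simulate_acs_tour_length)

print("best ACS parameters found:", genetic_search())
```
-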
According to most experts and health workers, a living creature's body heat is still little understood, yet it is crucial in the identification of disorders. Doctors in ancient medicine used wet mud or slurry clay to heal patients: as it spread over the body, the area that dried first was taken to be the infected part. Today, thermal cameras that generate images from electromagnetic radiation can be used to accomplish the same thing. Thermography can detect the swollen and clotted areas that predict cancer without the need for harmful radiation or uncomfortable contact, and it has a significant benefit in medical testing because it can be utilized before any observable symptoms appear. In this work, machine learning (ML) is defined as the set of statistical approaches that enable software systems to learn from data without having to be explicitly coded. ML can assist in this endeavor by examining these heat scans of the breast and pinpointing suspect regions where a doctor needs to conduct additional investigation. Thermal imaging is a more cost-effective alternative to approaches that require specialized equipment, allowing machines to deliver a more convenient and effective aid to doctors.
-
Md. Ashikuzzaman;Wasim Akram;Md. Mydul Islam Anik;Taskeed Jabid;Mahamudul Hasan;Md. Sawkat Ali 95
Traffic accidents cause health and economic casualties around the world. As the population increases, the number of vehicles on the road increases, which leads to congestion in cities, and congestion can raise accident risk as transportation systems expand. Modern cities are adopting various technologies to minimize traffic accidents through mathematical prediction. Traffic accidents cause economic losses and potential deaths; therefore, to ensure people's safety, the concept of the smart city makes sense. In a smart city, traffic accident factors such as road condition, light condition, and weather condition are important to consider when predicting traffic accident severity. Several machine learning models can be employed to determine and predict traffic accident severity. This research paper illustrates the performance of a hybridized neural network and compares it with other machine learning models in order to measure the accuracy of predicting traffic accident severity. A dataset from the city of Leeds, UK is used to train and test the model, and the results are then compared with each other. Particle Swarm Optimization with an artificial neural network (PSO-ANN) gave promising results compared with other machine learning models such as Random Forest, Naïve Bayes, Nearest Centroid, and K-Nearest Neighbor classification. The PSO-ANN model can be adopted in the transportation system to counter traffic accident issues. The Nearest Centroid model gave the lowest accuracy score, whereas PSO-ANN gave the highest. All the test results and findings obtained in our study can provide valuable information for reducing traffic accidents.
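A minimal PSO-ANN sketch (not the paper's implementation): a hand-rolled particle swarm searches the weight vector of a one-hidden-layer network, with synthetic data standing in for the Leeds accident records and severity treated as a binary label.

```python
# PSO-ANN sketch: PSO optimizes the flattened weights of a tiny numpy network.
# make_classification stands in for the accident data; all sizes are assumptions.
import numpy as np
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=1)
HIDDEN = 10
DIM = 8 * HIDDEN + HIDDEN + HIDDEN + 1                # weights and biases, flattened

def forward(w, X):
    W1 = w[:8 * HIDDEN].reshape(8, HIDDEN)
    b1 = w[8 * HIDDEN:8 * HIDDEN + HIDDEN]
    W2 = w[8 * HIDDEN + HIDDEN:-1].reshape(HIDDEN, 1)
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()

def loss(w):                                          # binary cross-entropy
    p = np.clip(forward(w, X), 1e-7, 1 - 1e-7)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
n_particles, iters = 30, 200
pos = rng.normal(size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

pred = (forward(gbest, X) > 0.5).astype(int)
print("training accuracy of PSO-trained ANN:", (pred == y).mean())
```
-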
In today's busy world, stress is a continuously growing concern in research on and monitoring of social websites. Social interaction is the process by which people act and react in relation to one another; we find social interactions in activities such as play, fights, and dance. Within these interactions we find social structure, that is, the maintained relationships among people and groups of people, whose limits depend on behavior, because relationships are built on the expectations of everyone involved in the social network. There is a large difference between emotional pain and physical pain: stress on the physical body is felt as tension and has physical consequences and effects on our health, whereas working on social network websites, development, or research-related information retrieval puts the brain itself under stress. Through social network interactions such as watching movies, online shopping, online marketing, and online business, we can observe sentiment analysis of movie reviews and of customer feedback as either positive or negative. In movies we can observe people's reactions to one another, depending on actions in the film such as fights, dances, dialogues, and content; from movie reviews we can therefore analyze the stress that these different actions place on the brain. All of this movie review analysis and brain stress can be computed with machine learning techniques. In target-oriented business, people who work in marketing are constantly under stress, and their emotional states differ at different times. This paper considers how the brain deals with stress management. In software industries, when developers work from home and connect with clients online, they come under stress, and their emotional and stress levels change continuously with work communication. In this paper we present emotional intelligence with stress-based analysis using machine learning techniques in social networks. Emotional intelligence is the ability of a person to be aware of their own emotions or feelings as well as the feelings or emotions of others, and to use this awareness to manage themselves and their relationships. Social interaction is not only about oneself; it is about everyone who interacts and their expectations as well. It is about maintaining performance: performance, in the sociological sense, is understanding how people interact and is a key to analyzing social interactions, to maintaining successful interactions in line with expectations, and to satisfying the audience, so people carefully control all of these and maintain impression management.
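The sentiment-analysis step discussed above can be sketched with a TF-IDF and logistic regression pipeline (scikit-learn assumed); the review snippets and labels are invented placeholders.

```python
# Small sketch of positive/negative movie-review classification; the texts and labels
# below are invented examples, not data from the paper.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "Loved the film, the dialogues and dance numbers were wonderful",
    "The fight scenes dragged on and the plot made me tense and tired",
    "A relaxing, feel-good movie that left me smiling",
    "Stressful, noisy, and disappointing from start to finish",
]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(reviews, labels)
print(model.predict(["The dialogues were delightful and the ending made me happy"]))
```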
-
M.Ahmad Nawaz Ul Ghani;Taimour Nazar;Syed Zeeshan Hussain Shah Gellani;Zaman Ashraf 107
The world is moving towards digitization at a rapid pace, so enterprises have developed information systems to manage their business. Empowering educational institutes with information systems has become very important and vital, since doing everything manually is very difficult for students, teachers, and staff. An information system can enhance their efficiency and save a lot of time; the system proposed in this research addresses this issue by providing services such as classroom reservation, an e-library facility, and online submission in a secured environment. Until now, limited attention has been paid to utilizing robots and drones for automation inside educational institutes. Our proposed system incorporates robots and drones to fill this gap in the automation used in institutes. Through this research, the aim is to improve the efficiency of learning and services in educational institutions and universities.
-
One of the most prevalent diseases among women that leads to death is breast cancer. It can be diagnosed by classifying tumors, of which there are two types, i.e., malignant and benign. Physicians need a reliable diagnostic procedure to distinguish between these tumors; however, it is generally very difficult to distinguish them, even for experts. Thus, automation of the diagnostic system is needed for diagnosing tumors. This paper attempts to improve the accuracy of breast cancer detection by utilizing a deep learning convolutional neural network (CNN). Experiments are conducted using the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. Compared to existing techniques, the use of the CNN shows a better result and achieves 99.66% in terms of accuracy.
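One possible reading of a CNN applied to WDBC (the paper's exact architecture is not given): scikit-learn's load_breast_cancer exposes the WDBC features, which a small Keras 1-D convolutional network treats as a length-30 sequence after standardization.

```python
# Sketch of a 1-D CNN over the 30 WDBC features; the layer sizes are assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras import layers, models

X, y = load_breast_cancer(return_X_y=True)             # 569 samples, 30 features (WDBC)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_tr)
X_tr = scaler.transform(X_tr)[..., None]                # shape (n, 30, 1) for Conv1D
X_te = scaler.transform(X_te)[..., None]

model = models.Sequential([
    layers.Input(shape=(30, 1)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, kernel_size=3, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),              # benign vs. malignant
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr, y_tr, epochs=30, batch_size=32, verbose=0)
print("test accuracy:", model.evaluate(X_te, y_te, verbose=0)[1])
```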
-
Wireless Sensor Networks (WSNs) have many potential applications and unique challenges. Some problems of WSNs are severe resource constraints, low reliability and fault tolerance, low throughput, low scalability, low Quality of Service (QoS), and insecure operational environments. One significant solution to these problems is hierarchical, clustering-based multipath routing. However, existing algorithms have many weaknesses, such as high overhead, security vulnerabilities, address-centric operation, low scalability, permanent use of the optimal paths, and severe resource consumption. Therefore, this paper proposes an energy-aware, congestion-aware, location-based, data-centric, scalable, hierarchical, clustering-based multipath routing algorithm based on the Numerical Taxonomy technique for homogeneous WSNs. Finally, the performance of the proposed algorithm is compared with that of the LEACH routing algorithm; the results of simulations and statistical-mathematical analysis show that the proposed algorithm improves parameters such as balanced resource consumption (e.g., energy and bandwidth), throughput, reliability and fault tolerance, accuracy, QoS (e.g., average packet delivery rate), and WSN lifetime.
-
Hussain Saleem;Khalid Bin Muhammad;Altaf H. Nizamani;Samina Saleem;M. Khawaja Shaiq Uddin;Syed Habib-ur-Rehman;Amin Lalani;Ali Muhammad Aslam 137
E-commerce is a buzzword well known for electronic commerce activities including, but not limited to, online shopping, digital payment transactions, and B2B online trading. In today's digital age, e-commerce has been playing a very important and vital role in areas such as retail shopping, sales automation, supply chain management, marketing and advertisement, and payment services. With the huge amount of data being collected from the various e-commerce services available, there are multiple opportunities to use that data to analyze graphs and trends, strategize profitable activities, and forecast future trade. This paper explains a contemporary approach for collecting key data metrics and implementing cost-effective automation that supports improving the conversion rates and sales performance of e-commerce websites, resulting in increased profitability.
-
Syed M. Ali Kamal;Nadeem Kafi;Fahad Samad;Hassan Jamil Syed;Muhammad Nauman Durrani 146
The smart city is gaining attention with the advancement of Information and Communication Technology (ICT). ICT provides the basis of the smart city foundation; it enables us to interconnect all the actors of a smart city by supporting the provision of seamless ubiquitous services and the Internet of Things. On the other hand, crowdsourcing has the ability to let citizens participate in the social and economic development of the city and share their contributions and knowledge while increasing their socio-economic welfare. This paper proposes a hybrid model that is a compound of human computation, machine computation, and citizen crowds. The proposed hybrid model uses knowledge-based crowdsourcing that captures collaborative and collective intelligence from the citizen crowds to form a democratic knowledge space, which provides solutions in areas of civic innovation. This paper also proposes a knowledge-based crowdsourcing framework that manages knowledge activities in the form of human computation tasks and eliminates the complexity of human computation task creation, execution, refinement, and quality control while managing the knowledge space. The knowledge activities in the form of human computation tasks support existing crowdsourcing systems in aligning their task execution order optimally.
-
Objective: • To detect black hole and wormhole attacks in wireless sensor networks. • To provide a solution for energy depletion and security breaches in wireless sensor networks. • To address the security problem using a strategic decision support system. Methods: The proposed Stackelberg game is used to model the competitive relations between multiple leaders and multiple followers. In this game, all cluster heads act as leaders, whereas agent nodes act as followers. The game is initially modeled as quadratic programming, and the backtracking search optimization algorithm is used to obtain the threshold value that determines the optimal strategies of both defender and attacker. Findings: The optimal payoffs of the multiple leaders and followers are found based on their utility functions. The attacks are easily detected based on defined rules and the optimal results of the game. Finally, the simulations are executed in MATLAB, and the impacts of detecting black hole and wormhole attacks are also presented in this paper. Novelty: The novelty of this study lies in combining the Stackelberg game with the backtracking search optimization algorithm (BSOA). BSOA is an iterative process that tries to minimize the objective function; thus, we obtain better optimization results than earlier approaches.
-
The use of sensors and actuators to control cyber-physical systems in resource networks has been integrated and is referred to as the Internet of Things (IoT). However, the connectivity of many stand-alone IoT systems through the Internet introduces numerous security challenges, as sensitive information is prone to exposure to malicious users. In this paper, an IoT security-issues ontology is proposed to collect, examine, analyze, prepare, acquire, and preserve evidence of IoT security challenges. The ontology development consists of three main steps: 1) setting the domain, purpose, and scope; 2) acquiring important terms and conceptualizing the classes and class hierarchy; and 3) creating instances. The ontology presented in this paper is a method that helps to better understand and define the terms of the IoT security-issue domain. Our proposed IoT security-issue ontology, produced with Protégé, has a total of 44 classes and 43 subclasses.
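Since Protégé ontologies are OWL files, the three development steps can be sketched with owlready2; the class and individual names below are invented examples and do not reproduce the paper's 44-class ontology.

```python
# Sketch of the three ontology-development steps using owlready2 (names are illustrative).
from owlready2 import get_ontology, Thing

# Step 1: fix the domain, purpose, and scope via the ontology IRI.
onto = get_ontology("http://example.org/iot-security-issues.owl")

# Step 2: acquire important terms and conceptualize the class hierarchy.
with onto:
    class SecurityIssue(Thing): pass
    class PerceptionLayerIssue(SecurityIssue): pass
    class NetworkLayerIssue(SecurityIssue): pass
    class ApplicationLayerIssue(SecurityIssue): pass
    class WeakAuthentication(PerceptionLayerIssue): pass
    class DataLeakage(ApplicationLayerIssue): pass

# Step 3: create instances (individuals) of the classes.
weak_default_password = WeakAuthentication("weak_default_password")
exposed_sensor_readings = DataLeakage("exposed_sensor_readings")

onto.save(file="iot-security-issues.owl", format="rdfxml")
print([c.name for c in onto.classes()])
```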
-
Malware detection is an increasingly important operational focus in cyber security, particularly given the fast pace of such threats (e.g., new malware variants introduced every day). There has been great interest in exploring the use of machine learning techniques to automate and enhance the effectiveness of malware detection and analysis. In this paper, we present a deep recurrent neural network solution in the form of a stacked Long Short-Term Memory (LSTM) with pre-training as a regularization method to avoid random network initialization. In our proposal, we use both global and short-term dependencies of the inputs. With pre-training, we avoid random initialization and are able to improve the accuracy and robustness of malware threat hunting. The proposed method speeds up convergence (in comparison to a stacked LSTM) by reducing the length of the malware OpCode or bytecode sequences, so the complexity of our final method is reduced. This leads to better accuracy, a higher Matthews Correlation Coefficient (MCC), and a higher Area Under the Curve (AUC) in comparison to a standard LSTM with similar detection time. Our proposed method can be applied to real-time malware threat hunting, particularly for safety-critical systems such as eHealth or the Internet of Military Things, where poor convergence of the model could lead to catastrophic consequences. We evaluate the effectiveness of our proposed method on Windows, ransomware, Internet of Things (IoT), and Android malware datasets using both static and dynamic analysis. For IoT malware detection, we also present a comparative summary of the performance of our proposed method and the standard stacked LSTM method on an IoT-specific dataset. More specifically, our proposed method achieves an accuracy of 99.1% in detecting IoT malware samples, with an AUC of 0.985 and an MCC of 0.95, thus outperforming standard LSTM-based methods in these key metrics.
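A skeleton of a stacked LSTM over integer-encoded OpCode sequences in Keras; the vocabulary size, sequence length, and random data are assumptions, and the pre-training used for weight initialization in the paper is only indicated by a comment.

```python
# Stacked-LSTM skeleton for OpCode sequences. VOCAB, MAXLEN, and the random data are
# placeholders; the paper's pre-training step is noted but not implemented here.
import numpy as np
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 256, 400            # assumed opcode vocabulary size and truncated length

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(input_dim=VOCAB, output_dim=64),
    layers.LSTM(128, return_sequences=True),   # first LSTM layer of the stack
    layers.LSTM(64),                           # second LSTM layer
    layers.Dense(1, activation="sigmoid"),     # malware vs. benign
])
# In the paper, pre-training replaces random initialization; default init is kept here.
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random stand-in data: rows are opcode index sequences padded/truncated to MAXLEN.
X = np.random.randint(0, VOCAB, size=(128, MAXLEN))
y = np.random.randint(0, 2, size=(128,))
model.fit(X, y, epochs=1, batch_size=32)
```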
-
To enhance customer satisfaction and achieve higher profits, the e-commerce sector must establish continuous relationships with existing customers and acquire new ones. Machine learning models can analyze behavioral evidence about customers and give an e-commerce platform a competitive advantage by helping to improve overall satisfaction. These models forecast which customers will churn and the causes of churn, and the forecasts are used to build tailored business strategies and service offers. This work aims to develop a machine learning model that can accurately forecast retainable customers from the platform's complete customer data. Developing predictive models that classify imbalanced data effectively is a major challenge in both the collected data and the machine learning algorithms, so we build a machine learning model that addresses class imbalance while forecasting customers. Satisfaction classification accuracy is used as the evaluation metric in this research. This paper evaluates different machine learning models for forecasting satisfaction: three analytical methods from various classes of learning were selected, and the efficiency of several classifiers, namely Random Forest, Logistic Regression, SVM, and the Gradient Boosting algorithm, was compared. The models were applied to a dataset of 8000 records from e-commerce websites and apps. The results indicate that the gradient boosting classifier determines the satisfaction class with the best accuracy, outperforming the Support Vector Machine, Random Forest, and Logistic Regression algorithms. The best model developed in this paper forecasts customer satisfaction with an accuracy of 88%.
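A sketch of the gradient-boosting classifier with one common imbalance remedy (balanced sample weights), assuming scikit-learn; the imbalanced synthetic data stands in for the 8000-record e-commerce dataset, and the paper's own imbalance handling may differ.

```python
# Gradient boosting with balanced sample weights as one way to handle class imbalance.
# The synthetic data is a placeholder for the real e-commerce records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.utils.class_weight import compute_sample_weight
from sklearn.metrics import accuracy_score, classification_report

X, y = make_classification(n_samples=8000, n_features=15, weights=[0.85, 0.15],
                           random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=3)

weights = compute_sample_weight(class_weight="balanced", y=y_tr)
gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
gbm.fit(X_tr, y_tr, sample_weight=weights)

pred = gbm.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print(classification_report(y_te, pred, digits=3))
```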
-
A highly influential trend widely reflected in the software engineering industry is service-oriented architecture, and vendors are migrating towards the cloud environment to benefit their organizations. Companies usually offer products and services with the goal of solving problems at the customer's end, because customers are more interested in the solution to their problem than in the products or services themselves. In the software industry, the approach in which customers' problems are solved by providing services is known as software as a service. However, the software development life cycle encounters enormous changes when migrating software from the product model to the service model. Considerable research has been done on the overall development process, but limited work has addressed the factors that influence the requirements elicitation process. This paper focuses on the changes that influence the requirements elicitation process and proposes a systematic methodology for successfully transforming software from the product model to the service model. The paper then elaborates on the benefits that inherently come with the elicitation process in a cloud environment and describes the problems encountered during transformation. The paper concludes that the requirements engineering process turns out to be more profitable after the transformation of traditional software from the product to the service model.
-
Kamran Ali Memon;Khalid Husain Mohmadani;Saleemullah Memon;Muhammad Abbas;Noor ul Ain 204
Dynamic Bandwidth Allocation (DBA) methods in telecommunication networks and systems have emerged as mechanisms for sharing limited resources among a rapidly growing number of users in today's access networks. The DBA research literature is incredibly fast-changing: almost every day, new areas and terms continue to emerge. Co-citation analysis offers significant support to researchers in distinguishing the intellectual bases and potentially leading edges of a specific field. We present a visualization-based analysis of DBA algorithms in the telecommunication field using the mainstream co-citation analysis tool CiteSpace together with Web of Science (WoS) analysis. The research records for this analysis, covering the decade 2009-2018, were retrieved from WoS. The visualization results identify the most influential DBA algorithm studies, journals, major countries, institutions, and researchers, and they indicate the intellectual bases of and focus within the DBA algorithms literature, offering guidance to interested researchers for further study of DBA algorithms.
-
Muhammad Umer Farooq;Mustafa Latif;Waseem;Mirza Adnan Baig;Muhammad Ali Akhtar;Nuzhat Sana 210
Demand prediction is an essential component of any business or supply chain. Large retailers need to keep track of tens of millions of item flows each day to ensure smooth operations and strong margins, and demand prediction is at the epicenter of this planning tornado. For business processes in retail companies that deal with a variety of products with short shelf lives and foodstuffs, forecast accuracy is of the utmost importance due to the shifting demand pattern, which is shaped by a dynamic, fast-response environment. All sectors strive to produce the ideal quantity of goods at the ideal time, but for retailers this issue is especially crucial, as they also need to manage perishable inventories effectively. In light of this, this research aims to show how machine learning approaches can help with demand forecasting in retail and with future sales predictions. This is done in two steps: one using historical data and another using open data on weather conditions, fuel prices, the Consumer Price Index (CPI), holidays, and any specific events in the area. Several machine learning algorithms were applied and compared using the R-squared and mean absolute percentage error (MAPE) assessment metrics. The suggested method improves the effectiveness and quality of feature selection while using a small number of well-chosen features to increase demand prediction accuracy. The model is tested with a one-year weekly dataset after being trained with a two-year weekly dataset. The results show that the suggested expanded feature selection approach provides a very good MAPE range, a respectable and encouraging value for anticipating demand in retail systems.
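A compact sketch of the two-source setup, assuming pandas and a recent scikit-learn (for mean_absolute_percentage_error): historic weekly sales are combined with open covariates such as temperature, fuel price, CPI, and a holiday flag; the column names and random data are placeholders.

```python
# Two-year training / one-year testing split on weekly records, with MAPE and R-squared
# as the evaluation metrics. All values below are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(0)
weeks = 156                                              # three years of weekly records
df = pd.DataFrame({
    "temperature": rng.normal(20, 8, weeks),
    "fuel_price": rng.normal(3.5, 0.4, weeks),
    "cpi": np.linspace(210, 230, weeks) + rng.normal(0, 0.5, weeks),
    "is_holiday": rng.integers(0, 2, weeks),
})
df["weekly_sales"] = (5000 + 120 * df["is_holiday"] - 30 * df["fuel_price"]
                      + 10 * df["temperature"] + rng.normal(0, 150, weeks))

train, test = df.iloc[:104], df.iloc[104:]               # train on 2 years, test on 1 year
features = ["temperature", "fuel_price", "cpi", "is_holiday"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(train[features], train["weekly_sales"])
pred = model.predict(test[features])

print("MAPE:     ", mean_absolute_percentage_error(test["weekly_sales"], pred))
print("R-squared:", r2_score(test["weekly_sales"], pred))
```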