• Title/Summary/Keyword: Web data


A Brief Survey of the Uses of Non-Fungible Tokens

  • Zain Patras;Sidra Minhas
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.59-62
    • /
    • 2024
  • Non-fungible tokens (NFTs) are unique, non-interchangeable rights to digital assets such as art, in-game items, collectibles, or music. NFTs have the potential to be enormously useful in many industries by increasing security, reducing transaction processing costs, and providing a new platform through which the gig economy can work. NFT markets have grown rapidly and significantly since early 2021. We investigate the uses of NFTs and examine facts and figures on usage from NFT-supporting websites, using daily data from 2019 to 2021. NFTs took the world by storm in 2021, bringing forth a digital art revolution while becoming one of the fastest-growing asset classes of the year. While the NFT market has been growing at a rapid pace, many are still wary of entering it because of the perceived irrationality of NFT valuations. NFTs have been around for quite some time, and the trend shows no sign of fading. NFT services have many practical use cases, and their potential will only grow over time as celebrities dive into the marketplace to maintain their online presence and increase their net worth.

Optimized Multi Agent Personalized Search Engine

  • Disha Verma;Barjesh Kochar;Y. S. Shishodia
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.9
    • /
    • pp.150-156
    • /
    • 2024
  • With the advent of personalized search engines, a myriad of approaches came into practice, and with the emergence of social media, personalization was extended to another level. The main reason for preferring personalized engines over traditional search is the need for accurate and precise results: with limited time and patience, users do not want to browse several pages to find the result that suits them best. Personalized search engines solve this problem effectively by understanding the user through profiles and histories, thus diminishing uncertainty and ambiguity. However, because several layers of personalization are added to basic search, response time and resource requirements (for profile storage) increase manifold, so the layered architecture of personalization needs to be optimized. This paper presents the layout of a multi-agent-based personalized search engine that works on histories and profiles. To store the huge amount of data, a distributed database is used at its core, so high availability, scaling, and geographic distribution are built in and easy to use. Results are initially retrieved using a traditional search engine; after the personalization layer is applied, the results are presented to the user. MongoDB is used to store profiles in a flexible form, improving the performance of the engine, and a weighted sum model is used to rank pages in the personalization layer.
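The weighted sum ranking step mentioned in this abstract can be illustrated with a short, self-contained sketch. The feature names, weights, and page data below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a Weighted Sum Model (WSM) re-ranking step in a
# personalization layer. Feature names and weights are illustrative only.

def weighted_sum_rank(pages, weights):
    """Rank pages by the weighted sum of their normalized feature scores."""
    def score(page):
        return sum(weights[f] * page["features"].get(f, 0.0) for f in weights)
    return sorted(pages, key=score, reverse=True)

pages = [
    {"url": "https://example.org/a",
     "features": {"query_match": 0.9, "profile_affinity": 0.2, "recency": 0.5}},
    {"url": "https://example.org/b",
     "features": {"query_match": 0.7, "profile_affinity": 0.8, "recency": 0.6}},
]
weights = {"query_match": 0.5, "profile_affinity": 0.35, "recency": 0.15}  # assumed weights

for p in weighted_sum_rank(pages, weights):
    print(p["url"])
```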

Diagnosis and Treatment of Myelodysplastic Syndrome in the Era of Genetic Testing (유전자 검사 시대 골수형성이상증후군의 진단과 치료)

  • Junshik Hong
    • The Korean Journal of Medicine
    • /
    • v.99 no.1
    • /
    • pp.11-16
    • /
    • 2024
  • Myelodysplastic syndrome (MDS) is a heterogeneous disorder with diverse prognoses influenced by cytopenias, genetic variants, and myeloblast proportions in the bone marrow. Accurate prognosis prediction and tailored treatment plans are essential. The International Prognostic Scoring System-Molecular (IPSS-M), which adds the impact of MDS-related genetic mutations to clinical and laboratory information, is anticipated to offer superior prognostic accuracy compared with existing systems such as the Revised International Prognostic Scoring System (IPSS-R). Despite its statistical complexity, its web-based calculation and the ease of discussing results with patients using intuitive data sets provide notable advantages. Progress in MDS treatment, exemplified by effective anemia correction with an erythroid maturation agent in SF3B1-mutated cases and efforts to improve the poor prognosis of TP53-mutated cases, reflects the evolving landscape of genetics-based interventions in MDS. Advancements in genetic diagnostic technology, combined with enhanced knowledge of the bone marrow niche, are anticipated to lead to significant improvements in MDS treatment outcomes in the future.

GAIN-QoS: A Novel QoS Prediction Model for Edge Computing

  • Jiwon Choi;Jaewook Lee;Duksan Ryu;Suntae Kim;Jongmoon Baik
    • Journal of Web Engineering
    • /
    • v.21 no.1
    • /
    • pp.27-52
    • /
    • 2021
  • With the recent increase in the number of network-connected devices, the number of edge computing services that provide similar functions has grown. It is therefore important to recommend an optimal edge computing service based on quality of service (QoS). In the real world, however, QoS data suffer from a cold-start problem: the user-service invocation matrix is highly sparse, making it difficult to recommend a suitable service to the user. Previous studies applied deep learning techniques or used context information to extract deep features between users and services, but they did not consider the edge computing environment. Our goal is to predict QoS values in real edge computing environments with improved accuracy. To this end, we propose GAIN-QoS, which clusters services based on their location information, calculates the distance between services and users in each cluster, and gathers the QoS values of users within a certain distance. A Generative Adversarial Imputation Nets (GAIN) model is then applied to the reconstructed user-service invocation matrix to predict QoS values. When the density is low, GAIN-QoS shows superior performance to other techniques, and the distance between service and user only slightly affects performance. Thus, compared with other methods, the proposed method can significantly improve the accuracy of QoS prediction for edge computing, which suffers from the cold-start problem.
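The distance-based filtering step described in this abstract can be sketched as follows. This is not the authors' code; the haversine distance, the 500 km threshold, and the data-structure names are illustrative assumptions, and the GAIN imputation itself is omitted.

```python
# Illustrative sketch: keep only QoS observations from users located within a
# distance threshold of a service, producing the matrix that an imputation
# model (such as GAIN) would then complete. All names and values are assumed.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_qos(observations, service_loc, user_locs, max_km=500.0):
    """Keep QoS observations whose user lies within max_km of the invoked service."""
    kept = {}
    for (user, service), qos in observations.items():
        u_lat, u_lon = user_locs[user]
        if haversine_km(u_lat, u_lon, *service_loc[service]) <= max_km:
            kept[(user, service)] = qos
    return kept  # an imputation model would then fill the remaining missing entries
```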

Olfactory Stimulation by Fennel (Foeniculum vulgare Mill.) Essential Oil Improves Lipid Metabolism and Metabolic Disorders in High Fat-Induced Obese Rats

  • Seong Jun Hong;Sojeong Yoon;Seong Min Jo;Hyangyeon Jeong;Moon Yeon Youn;Young Jun Kim;Jae Kyeom Kim;Eui-Cheol Shin
    • Journal of Web Engineering
    • /
    • v.14 no.4
    • /
    • pp.741-755
    • /
    • 2022
  • In this study, odor components were analyzed using gas chromatography/mass spectrometry (GC/MS) and solid-phase microextraction (SPME), and odor-active compounds (OACs) were identified using GC-olfactometry (GC-O). Among the volatile compounds identified through GC-O, p-anisaldehyde, limonene, estragole, anethole, and trans-anethole elicited the fennel odor; trans-anethole in particular showed the highest odor intensity and content. Body weight decreased in the fennel essential oil (FEO)-inhaled groups during the experimental period, and both body fat and visceral fat also decreased. An improvement in lipid metabolism was observed, as indicated by the changes in cholesterol and triglyceride levels and the decreased insulin levels in the FEO-inhaled groups compared with group H. Furthermore, inhalation of FEO reduced systolic blood pressure and pulse rate. Our results indicate that FEO inhalation affected lipid metabolism and cardiovascular health, which are indicators of obesity-related dysfunction. Accordingly, this study provides baseline data for further research on the protective applications of FEO, as well as on its volatile profile.

Hybrid CTC-Attention Network-Based End-to-End Speech Recognition System for Korean Language

  • Hosung Park;Changmin Kim;Hyunsoo Son;Soonshin Seo;Ji-Hwan Kim
    • Journal of Web Engineering
    • /
    • v.21 no.2
    • /
    • pp.265-284
    • /
    • 2021
  • In this study, an automatic end-to-end speech recognition system based on a hybrid CTC-attention network is proposed for the Korean language. Deep neural network/hidden Markov model (DNN/HMM)-based systems have driven dramatic improvement in this area, but it is difficult for non-experts to develop speech recognition for new applications. End-to-end approaches simplify the speech recognition system into a single network architecture, allowing systems to be developed without expert knowledge. In this paper, we propose a hybrid CTC-attention network as an end-to-end speech recognition model for Korean. This model effectively utilizes a CTC objective function during attention model training, which improves both speech recognition accuracy and training speed. In most languages, end-to-end speech recognition uses characters as output labels. For Korean, however, character-based end-to-end speech recognition is not an efficient approach because Korean has 11,172 possible syllable characters, a relatively large number compared with other languages; for example, English has 26 characters and Japanese has 50. To address this problem, we use the 49 Korean graphemes as output labels. Experimental results show a 10.02% character error rate (CER) when 740 hours of Korean training data are used.
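The grapheme idea can be illustrated with the standard Unicode decomposition of precomposed Hangul syllables into lead/vowel/tail jamo, which shrinks the output-label inventory far below the 11,172 syllable characters. This minimal sketch is not the authors' code and does not reproduce their exact 49-grapheme set.

```python
# Decompose precomposed Hangul syllables (U+AC00..U+D7A3) into jamo graphemes.
LEADS  = list("ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ")                 # 19 initial consonants
VOWELS = list("ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ")            # 21 vowels
TAILS  = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")  # 27 finals + none

def to_graphemes(text):
    """Map each Hangul syllable to its jamo sequence; pass other symbols through."""
    out = []
    for ch in text:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:
            lead, rest = divmod(code, 21 * 28)
            vowel, tail = divmod(rest, 28)
            out += [LEADS[lead], VOWELS[vowel]] + ([TAILS[tail]] if TAILS[tail] else [])
        else:
            out.append(ch)
    return out

print(to_graphemes("한국어"))  # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ', 'ㅇ', 'ㅓ']
```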

Nasal vaccine as a booster shot: a viable solution to restrict pandemic?

  • Sarasa Meenakshi;V. Udaya Kumar;Sameer Dhingra;Krishna Murti
    • Clinical and Experimental Vaccine Research
    • /
    • v.11 no.2
    • /
    • pp.184-192
    • /
    • 2022
  • The coronavirus disease 2019 (COVID-19) pandemic revolutionized the vaccine market and created momentum for alternative routes of vaccine administration. The intranasal route of immunization is one such possibility and appears to be the most promising, since it has significant advantages, particularly in the prevention of respiratory infection. This review analyzes and summarizes the role of nasal vaccines compared with conventional vaccines during COVID-19 and the need for a nasal vaccine as a booster shot. In this narrative review, the required data were retrieved using the keywords "COVID-19," "Intranasal," "Immunity," "Nasal spray," and "Mucosal" in databases including PubMed, Scopus, Embase, Science Direct, and Web of Science. The results showed that nasal vaccines were both effective and protective according to research conducted during the COVID-19 period, and preclinical and clinical trials indicate that intranasal vaccination elicits more robust and cross-protective immunity than conventional vaccines. Immune mechanisms across the nasal mucosa are briefly presented, the current status of nasal vaccines during the COVID-19 pandemic is summarized, and their advantages over traditional vaccines are described. Furthermore, after exploring the primary benefits and kinetics of nasal vaccines, their potential as a booster dose is discussed.

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has raised the interest of many researchers and requires the ability to classify relevant information; hence, text classification is introduced. Text classification is a challenging task in modern data analysis that assigns a text document to one or more predefined categories or classes. Different techniques are available in this field, such as K-Nearest Neighbor, Naïve Bayes, Support Vector Machine, Decision Tree, and Artificial Neural Network. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge, and the performance of a text classification model varies with the type of words used in the corpus and the type of features created for classification. Most previous attempts propose a new algorithm or modify an existing one, and such research has arguably reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built; real-world datasets usually contain noise, which can affect the decisions made by classifiers built from them. We consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms are normally trained under the assumption that the characteristics of the training data and target data are the same or very similar; however, for unstructured data such as text, the features are determined by the vocabulary of the documents, and if the viewpoints of the training and target data differ, their features may differ as well. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data from various sources are likely to be formatted differently, traditional machine learning algorithms have difficulty recognizing different data representations at once and generalizing across them. Therefore, to utilize heterogeneous data in training the document classifier, we apply semi-supervised learning, although unlabeled data may degrade the classifier's performance.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. Three types of real-world data sources were used in this paper: news, Twitter, and blogs.
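RSESLA itself is not published as code here; the following is a minimal self-training-style sketch of the general idea of keeping only confidently classified unlabeled documents, under assumed toy data, a TF-IDF + Naive Bayes pipeline, and an arbitrary 0.6 confidence threshold.

```python
# A minimal confidence-based selection sketch (NOT the paper's RSESLA).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

labeled   = ["stocks rally on earnings", "team wins championship game"]
labels    = np.array([0, 1])                      # 0: finance, 1: sports (toy labels)
unlabeled = ["quarterly profits beat forecast", "coach praises star player"]

vec = TfidfVectorizer()
X_lab = vec.fit_transform(labeled)
X_unl = vec.transform(unlabeled)

clf = MultinomialNB().fit(X_lab, labels)
proba = clf.predict_proba(X_unl)

# Keep only unlabeled documents the current model classifies confidently,
# then retrain on the enlarged training set.
confident = proba.max(axis=1) >= 0.6              # assumed confidence threshold
pseudo_y = proba.argmax(axis=1)[confident]
X_aug = np.vstack([X_lab.toarray(), X_unl.toarray()[confident]])
y_aug = np.concatenate([labels, pseudo_y])
clf = MultinomialNB().fit(X_aug, y_aug)
```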

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • This research proposes a Dynamic Virtual Ontology using Tags (DyVOT) that supports dynamic search of resources according to user requirements, using tags from social-web resources. Tags are generally annotations, series of words assigned by social users to information resources such as web pages, images, YouTube clips, and videos. Tags therefore characterize and mirror information resources, and as metadata they can be matched to resources; consequently, semantic relationships between tags can be extracted because tags act as representatives of resources. However, this is limited by the synonyms and homonyms among tags, which are usually written as free-form words. Folksonomy research has therefore applied semantic-based classification of synonymous words, and some studies focus on clustering and/or classifying resources by semantic relationships among tags. Nevertheless, these studies are also limited because they focus on semantic hyper/hypo relationships or clustering among tags without considering the conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. The proposed DyVOT uses tags and constructs an ontology for effective search. We assume that tags are extracted from user requirements and used to construct multiple sub-ontologies as combinations of some or all of the tags. DyVOT constructs an ontology based on hierarchical and associative relationships among tags; the ontology consists of a static ontology and a dynamic ontology. The static ontology defines semantic hierarchical hyper/hypo relationships among tags, as in (http://semanticcloud.sandra-siegel.de/), with a tree structure. From the static ontology, DyVOT extracts multiple sub-ontologies using sub-tags constructed from parts of the tags; each sub-ontology consists of the hierarchy paths that contain its sub-tags. To create the dynamic ontology, associative relationships must be defined among the sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags linked by the sub-ontologies, and the association is measured by the degree of shared resources allocated to the tags of each sub-ontology; if the association value exceeds a threshold, a new associative relationship among the tags is created. These associative relationships are used to merge the sub-ontologies into a new hierarchy. To construct the dynamic ontology, a new class is defined that links two or more sub-ontologies; it is generated from merged tags shown to be highly associative through their shared resources. The class is then used to generate a new hierarchy with the extracted sub-ontologies, and it is placed in the dynamic ontology with new hyper/hypo hierarchical relationships to the tags linked to the sub-ontologies.
Finally, DyVOT is built from the newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched to DyVOT, which narrows the search boundary and shortens the search paths. While a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically according to user requirements, the proposed DyVOT searches resources dynamically using multiple sub-ontologies in parallel. In this light, DyVOT improves the correctness and agility of search and decreases search effort by reducing the search paths.
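The shared-resource association measure and threshold-based merging described in this abstract can be sketched briefly. This is an illustrative reconstruction rather than the paper's implementation; the Jaccard-style overlap, the 0.3 threshold, and all names are assumptions.

```python
# Sketch: link two tag groups from different sub-ontologies when the proportion
# of resources they share exceeds a threshold (all values are illustrative).
def association(resources_a, resources_b):
    """Degree of shared resources between two tag groups (Jaccard-style overlap)."""
    a, b = set(resources_a), set(resources_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

def merge_if_associated(sub_onto_a, sub_onto_b, threshold=0.3):
    """Create an associative link (a merged class) when association > threshold."""
    score = association(sub_onto_a["resources"], sub_onto_b["resources"])
    if score > threshold:
        return {"class": sub_onto_a["tag"] + "+" + sub_onto_b["tag"], "score": score}
    return None

a = {"tag": "travel", "resources": {"r1", "r2", "r3"}}
b = {"tag": "photography", "resources": {"r2", "r3", "r4"}}
print(merge_if_associated(a, b))   # {'class': 'travel+photography', 'score': 0.5}
```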

Development of a Web Based Diligence and Indolence Management System (웹 기반 근태관리 시스템 개발)

  • Cho, Sung-Mok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.9
    • /
    • pp.1845-1850
    • /
    • 2009
  • Small and medium-sized enterprises have conventionally managed employee attendance (diligence and indolence) by hand, but many have recently been spending a great deal of money on attendance management and security maintenance. Even so, existing systems are burdensome because of the initial cost of introducing a system consisting of a card-reading terminal, an RFID card, an administrative server, and an attendance-management application, as well as the lack of maintenance skills needed to cope with hardware and software troubles. For these reasons, we developed a new attendance management system whose initial cost is moderate, because there is no need to purchase a new server or issue new cards, and whose operation and management are convenient, because the RFID card reader communicates over the Internet with a central administrative server in an IDC (Internet Data Center).
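As an illustration of the described architecture, a reader-side client might report card taps to the central server in the IDC roughly as follows. The endpoint URL, field names, and payload format are assumptions for illustration, not the paper's actual API.

```python
# A minimal sketch (assumed endpoint and field names) of an RFID reader client
# reporting a card tap to the central administrative server over the Internet.
import json
import urllib.request
from datetime import datetime, timezone

def report_tap(card_uid: str, reader_id: str,
               server_url: str = "https://attendance.example.com/api/taps") -> int:
    """Send one attendance event (card UID + timestamp) to the central server."""
    event = {
        "card_uid": card_uid,
        "reader_id": reader_id,
        "tapped_at": datetime.now(timezone.utc).isoformat(),
    }
    req = urllib.request.Request(
        server_url,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.status  # e.g. 200 when the server records the event

# Example (requires a reachable server at the assumed URL):
# print(report_tap(card_uid="04A2249B1C5E80", reader_id="gate-01"))
```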