• Title/Summary/Keyword: web usability (웹사용성)


Development of Quality Evaluation and Management System for Assembled Temporary Equipment - Focused on Steel Pipe Scaffolding, System Scaffolding and Support - (조립 가설기자재 품질평가 및 관리 시스템 개발 - 강관 비계, 시스템 비계, 시스템 동바리를 중심으로 -)

  • Jang, Ji young;Lee, Ji yeon;Kim, Ha yoon;Lee, Jun ho;Kim, Jun-Sang;Kim, Jung-Yeol;Kim, Young Suk
    • Korean Journal of Construction Engineering and Management
    • /
    • v.23 no.5
    • /
    • pp.43-55
    • /
    • 2022
  • Since assembled temporary equipment is widely used for preparatory work that must be carried out before the main construction begins, it is essential to secure quality during assembly and to prevent safety accidents caused by the equipment after installation. However, our investigation found that most construction site managers are unaware of its importance, regarding the quality management of assembled temporary equipment merely as the management of temporary structures that will be dismantled once the main construction is complete. Quality management of assembled temporary equipment is carried out differently at each construction site because there is no formalized procedure and no clearly designated party responsible for performing it. In addition, managers of general construction companies often inspect and report on the relevant parts without supporting evidence, so transparency is not guaranteed and the results can lead to serious disasters. Therefore, the purpose of this study is to establish a document-centered system that provides systematic quality evaluation and management procedures for securing the quality of assembled temporary equipment, to develop a checklist for quality evaluation and management, and to support history management on the web.
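The abstract describes a web-based checklist and history-management system without implementation details. As a loose illustration only, the sketch below models an inspection checklist record with evidence links and a pass-rate summary; every name and field is a hypothetical assumption, not the authors' design.

```python
# Hypothetical data model for a temporary-equipment inspection checklist
# with history tracking. Names and fields are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ChecklistItem:
    item_id: str          # e.g., "SCAFFOLD-BASE-01" (made-up identifier)
    description: str      # what must be inspected
    passed: bool
    evidence_url: str     # link to a photo or document backing the result


@dataclass
class InspectionRecord:
    site: str
    equipment_type: str   # "steel pipe scaffolding", "system scaffolding", "support"
    inspector: str
    inspected_at: datetime = field(default_factory=datetime.now)
    items: List[ChecklistItem] = field(default_factory=list)

    def pass_rate(self) -> float:
        """Share of checklist items that passed, usable for history reporting."""
        if not self.items:
            return 0.0
        return sum(i.passed for i in self.items) / len(self.items)
```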

A Blockchain Network Construction Tool and its Electronic Voting Application Case (블록체인 자동화도구 개발과 전자투표 적용사례)

  • AING TECKCHUN;KONG VUNGSOVANREACH;Okki Kim;Kyung-Hee Lee;Wan-Sup Cho
    • The Journal of Bigdata
    • /
    • v.6 no.2
    • /
    • pp.151-159
    • /
    • 2021
  • Constructing a blockchain network is a cumbersome and time-consuming activity. To overcome these limitations, global IT companies such as Microsoft provide cloud-based blockchain services. In this paper, we propose a blockchain construction and management tool that enables blockchain developers, operators, and enterprises to deploy blockchains more easily on their own infrastructure. The tool is implemented using Hyperledger Fabric, one of the best-known private blockchain platforms, and Ansible, an open-source IT automation engine that supports network-wide deployment. Instead of complex and repetitive text commands, the tool provides a user-friendly web dashboard that allows users to seamlessly set up, deploy, and interact with a blockchain network. With the proposed solution, blockchain developers, operators, and researchers can build blockchain infrastructure more easily, saving time and cost. To verify the usefulness and convenience of the tool, a blockchain network that conducts electronic voting was built and tested. Constructing a blockchain network, which normally requires writing more than ten configuration files and executing hundreds of lines of commands, can be replaced with simple input and click operations in the graphical user interface, improving user convenience and saving time. The proposed blockchain tool will be used to build trusted data infrastructure in various fields, such as food safety supply chains, in the future.
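The abstract describes replacing repetitive Fabric setup commands with Ansible automation behind a web dashboard. The sketch below shows, under stated assumptions, how such a backend might invoke an Ansible playbook with user-chosen network parameters; the playbook name and variable names are hypothetical, not the authors' actual tool.

```python
# Hedged sketch of a dashboard backend driving an Ansible playbook that brings
# up a Hyperledger Fabric network. Playbook name, variables, and parameters
# are illustrative assumptions.
import json
import subprocess


def deploy_fabric_network(orgs: int, peers_per_org: int, channel: str) -> int:
    """Run a (hypothetical) playbook with the network parameters chosen in the UI."""
    extra_vars = json.dumps({
        "org_count": orgs,
        "peers_per_org": peers_per_org,
        "channel_name": channel,
    })
    # ansible-playbook is Ansible's standard CLI entry point.
    result = subprocess.run(
        ["ansible-playbook", "deploy_fabric.yml", "--extra-vars", extra_vars],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    return result.returncode


if __name__ == "__main__":
    # e.g., a two-organization voting network with one peer per organization
    deploy_fabric_network(orgs=2, peers_per_org=1, channel="evoting-channel")
```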

Building robust Korean speech recognition model by fine-tuning large pretrained model (대형 사전훈련 모델의 파인튜닝을 통한 강건한 한국어 음성인식 모델 구축)

  • Changhan Oh;Cheongbin Kim;Kiyoung Park
    • Phonetics and Speech Sciences
    • /
    • v.15 no.3
    • /
    • pp.75-82
    • /
    • 2023
  • Automatic speech recognition (ASR) has been revolutionized by deep learning-based approaches, among which self-supervised learning methods have proven particularly effective. In this study, we aim to enhance the performance of OpenAI's Whisper model, a multilingual ASR system, on the Korean language. Whisper was pretrained on a large corpus (around 680,000 hours) of web speech data and has demonstrated strong recognition performance for major languages. However, it struggles with languages such as Korean that were not well represented in its training data. We address this issue by fine-tuning the Whisper model with an additional dataset of about 1,000 hours of Korean speech. We also compare its performance against a Transformer model trained from scratch on the same dataset. Our results indicate that fine-tuning the Whisper model significantly improves its Korean speech recognition in terms of character error rate (CER), and that the improvement grows with model size. However, the Whisper model's performance on English deteriorated after fine-tuning, emphasizing the need for further research on robust multilingual models. Our study demonstrates the potential of a fine-tuned Whisper model for Korean ASR applications. Future work will focus on multilingual recognition and optimization for real-time inference.
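The abstract describes fine-tuning Whisper on roughly 1,000 hours of Korean speech but gives no code. Below is a hedged sketch of such fine-tuning with the Hugging Face Transformers library; the dataset name, model size, and hyperparameters are assumptions for illustration, not the authors' actual setup.

```python
# Hedged sketch of fine-tuning Whisper on Korean speech with Hugging Face
# Transformers. "my_korean_asr" and all hyperparameters are placeholders.
from datasets import Audio, load_dataset
from transformers import (Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration, WhisperProcessor)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="korean", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Hypothetical dataset with "audio" and "text" columns, resampled to 16 kHz.
ds = load_dataset("my_korean_asr", split="train")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))


def prepare(batch):
    # Log-mel features for the encoder, token ids for the decoder labels.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = processor.tokenizer(batch["text"]).input_ids
    return batch


ds = ds.map(prepare, remove_columns=ds.column_names)


def collate(features):
    # Pad features and labels separately; mask label padding with -100 so it
    # is ignored by the loss.
    inputs = [{"input_features": f["input_features"]} for f in features]
    batch = processor.feature_extractor.pad(inputs, return_tensors="pt")
    labels = [{"input_ids": f["labels"]} for f in features]
    labels_batch = processor.tokenizer.pad(labels, return_tensors="pt")
    batch["labels"] = labels_batch["input_ids"].masked_fill(
        labels_batch["attention_mask"].ne(1), -100)
    return batch


args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-ko",
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    max_steps=4000,
    fp16=True,
)
trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=ds,
                         data_collator=collate)
trainer.train()
```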

Generation of Time-Series Data for Multisource Satellite Imagery through Automated Satellite Image Collection (자동 위성영상 수집을 통한 다종 위성영상의 시계열 데이터 생성)

  • Yunji Nam;Sungwoo Jung;Taejung Kim;Sooahm Rhee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_4
    • /
    • pp.1085-1095
    • /
    • 2023
  • Time-series data generated from satellite imagery are crucial resources for change detection and monitoring across various fields. Existing research on time-series data generation primarily relies on single-image analysis to maintain data uniformity, with ongoing efforts to enhance spatial and temporal resolution by utilizing diverse image sources. Despite the acknowledged significance of time-series data, automated data collection and preprocessing for research purposes are largely absent. To address this limitation, we propose a system that automatically collects satellite imagery over user-specified areas and generates time-series data from it. This research aims to collect data from various satellite sources over a specific region and convert them into time-series data, and we developed an automatic satellite image collection system for this purpose. Using this system, users can collect and extract data for their regions of interest and use the data immediately. Experiments demonstrated that freely available Landsat and Sentinel images can be acquired automatically from the web and combined with manually supplied high-resolution satellite images. Comparisons between the automatically collected images and images edited on the basis of high-resolution satellite data showed minimal discrepancies, with no significant errors in the generated output.
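The abstract reports automatically acquiring freely available Landsat and Sentinel images for a user-specified area. As a hedged illustration of that kind of automated collection, the sketch below queries a public STAC catalog for Sentinel-2 scenes over a bounding box and orders them by acquisition date; the catalog URL, area of interest, and filters are assumptions, not the paper's system.

```python
# Hedged sketch of automated collection of freely available Sentinel-2 scenes
# over a user-specified area via a public STAC catalog (pystac-client).
from pystac_client import Client

# Hypothetical area of interest (lon/lat bounding box) and time window.
AOI_BBOX = [126.7, 37.3, 127.2, 37.7]
TIME_RANGE = "2023-01-01/2023-12-31"

catalog = Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=AOI_BBOX,
    datetime=TIME_RANGE,
    query={"eo:cloud_cover": {"lt": 20}},  # keep mostly cloud-free scenes
)

# Sort the matching scenes by acquisition time to form a simple time series.
items = sorted(search.items(), key=lambda item: item.datetime)
for item in items:
    print(item.datetime.date(), item.id, item.assets["visual"].href)
```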

Analysis of media trends related to spent nuclear fuel treatment technology using text mining techniques (텍스트마이닝 기법을 활용한 사용후핵연료 건식처리기술 관련 언론 동향 분석)

  • Jeong, Ji-Song;Kim, Ho-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.2
    • /
    • pp.33-54
    • /
    • 2021
  • With the fourth industrial revolution and the arrival of the New Normal era brought on by COVID-19, the importance of non-contact technologies such as artificial intelligence and big data research has been increasing. Convergence research is being conducted in earnest to keep up with these trends, but few studies in the nuclear field have used artificial intelligence and big data-related technologies such as natural language processing and text mining. This study was conducted to confirm the applicability of data science analysis techniques to nuclear research. Furthermore, identifying trends in public perception of spent nuclear fuel is critical for setting directions for nuclear industry policy and responding in advance to changes in industrial policy. For these reasons, this study conducted a media trend analysis of pyroprocessing, a spent nuclear fuel treatment technology. We objectively analyze changes in media perception of spent nuclear fuel dry treatment technology by applying text mining techniques. Text data from Naver web news articles containing the keywords "Pyroprocessing" and "Sodium Cooled Reactor" were collected with Python code to identify changes in perception over time. The analysis period was set from 2007, when the first article was published, to 2020, and a detailed, multi-layered analysis of the text data was carried out using methods such as word clouds based on frequency analysis, TF-IDF, and degree centrality calculation. Keyword frequency analysis showed that media perception of spent nuclear fuel dry treatment technology changed in the mid-2010s, influenced by the Gyeongju earthquake in 2016 and the implementation of the new government's energy transition policy in 2017. Trend analysis was therefore conducted around that period, and word frequencies, TF-IDF values, degree centrality values, and semantic network graphs were derived. The results show that before the mid-2010s, media perception of spent nuclear fuel dry treatment technology was diplomatic and positive. Over time, however, the frequency of keywords such as "safety", "reexamination", "disposal", and "disassembly" increased, indicating that the sustainability of spent nuclear fuel dry treatment technology is being seriously questioned. Social awareness also changed as the technology, once regarded as political and diplomatic, became ambiguous in status due to changes in domestic policy. This means that domestic policy changes such as nuclear power policy have a greater impact on media perception than issues of spent nuclear fuel processing technology itself, presumably because nuclear policy is a more widely discussed and publicly accessible topic than spent nuclear fuel. Therefore, to improve social awareness of spent nuclear fuel processing technology, sufficient information about it should be provided, and linking it to nuclear policy issues could also help. In addition, the study highlights the importance of social science research on nuclear power: the social sciences should be applied broadly to nuclear engineering, and taking national policy changes into account can help keep the nuclear industry sustainable.
However, this study has the limitation that it applied big data analysis methods only to a narrow research area, namely pyroprocessing, a spent nuclear fuel dry processing technology. Furthermore, no clear basis was established for the cause of the change in social perception, and only news articles were analyzed to gauge it. If future media trend analyses of nuclear power also take reader comments into account, more reliable results can be expected and used efficiently in nuclear policy research. The academic significance of this study is that it confirms the applicability of data science analysis techniques in the nuclear research field. Moreover, as current government energy policies such as nuclear power plant reductions prompt a re-evaluation of spent fuel treatment technology research, analysis of key keywords in the field can help orient future research. It is important to consider outside perspectives, not just the safety technology and engineering integrity of nuclear power, and to reconsider whether it is appropriate to discuss nuclear engineering technology only internally. In addition, if multidisciplinary research on nuclear power is carried out, reasonable alternatives can be prepared to sustain the nuclear industry.
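The abstract names the concrete analysis steps: frequency analysis, TF-IDF, and degree centrality over a keyword network. The sketch below illustrates those steps on toy documents; the documents, keywords, and library choices (scikit-learn, NetworkX) are illustrative assumptions, not the paper's Naver news corpus or code.

```python
# Hedged sketch of the text-mining pipeline described above: TF-IDF over news
# articles and degree centrality over a keyword co-occurrence network.
import itertools

import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "pyroprocessing spent fuel research cooperation",
    "spent fuel safety reexamination disposal policy",
    "energy transition policy safety earthquake",
]

# TF-IDF weights per keyword and document.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Keyword co-occurrence network: connect terms appearing in the same document.
graph = nx.Graph()
for doc in docs:
    for a, b in itertools.combinations(set(doc.split()), 2):
        graph.add_edge(a, b)

centrality = nx.degree_centrality(graph)
for term in sorted(centrality, key=centrality.get, reverse=True)[:5]:
    print(term, round(centrality[term], 3))
```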

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.153-169
    • /
    • 2011
  • Today, using the Internet is considered essential for establishing a corporate marketing strategy. Companies promote their products and services through various on-line marketing activities, such as giving gifts and points to customers in exchange for participating in events, all of which rely on customer membership data. Since companies can use these membership data to enhance their marketing through various analyses, appropriate website membership management can play an important role in increasing the effectiveness of on-line marketing campaigns. Despite growing interest in proper membership management, however, it has been difficult to identify inappropriate members who weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers with malicious intent can create duplicate IDs by illegally using others' names or faking login information when joining. Since duplicate members are likely to intercept gifts and points that should go to the customers who deserve them, this results in ineffective marketing. Considering that the number of website members and the associated marketing costs are increasing significantly, companies need efficient ways to screen out duplicate members. With this motivation, this study proposes an approach to managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors, called nodes, that are tied by one or more specific types of interdependency. Social networks represent the relationships between nodes and show the direction and strength of those relationships. Various analytical techniques have been proposed on this basis, such as centrality analysis, structural hole analysis, and structural equivalence analysis. Component analysis, one of the social network analysis techniques, identifies sub-networks of connected nodes that carry meaningful information about group structure. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify membership attributes that will be used for analyzing relationship patterns among memberships, such as ID, telephone number, address, posting time, and IP address. The second step is to compose a social matrix for each attribute and aggregate them into a combined social matrix, which represents how strongly pairs of nodes are connected; when a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is transformed into a binary matrix of '0' and '1' cell values using a relationship criterion that determines whether a membership is duplicate or not. The third step is to conduct component analysis on the combined social matrix to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company.
The empirical results showed that the proposed method was superior to the traditional database approach based on simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate website memberships.
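As a rough illustration of the component-analysis procedure described above, the sketch below links accounts that share an identifying attribute and counts connected components as estimated real members. The sample records, attributes, and matching criterion are fabricated assumptions, not the study's data or exact matrices.

```python
# Hedged sketch of component analysis for duplicate-membership detection:
# link accounts sharing an attribute, then treat each connected component as
# one real person.
import itertools

import networkx as nx

members = [
    {"id": "user01", "phone": "010-1111-2222", "ip": "1.2.3.4"},
    {"id": "user02", "phone": "010-1111-2222", "ip": "5.6.7.8"},  # shares phone
    {"id": "user03", "phone": "010-9999-0000", "ip": "5.6.7.8"},  # shares IP
    {"id": "user04", "phone": "010-3333-4444", "ip": "9.9.9.9"},
]

graph = nx.Graph()
graph.add_nodes_from(m["id"] for m in members)
for a, b in itertools.combinations(members, 2):
    # Connect two accounts if any chosen attribute matches (binary criterion).
    if a["phone"] == b["phone"] or a["ip"] == b["ip"]:
        graph.add_edge(a["id"], b["id"])

components = list(nx.connected_components(graph))
print("estimated real members:", len(components))                  # 2 here
print("membership reliability:", len(components) / len(members))   # 0.5
```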

A Knowledge Management System for Supporting Development of the Next Generation Information Appliances (차세대 정보가전 신제품 개발 지원을 위한 지식관리시스템 개발)

  • Park, Ji-Soo;Baek, Dong-Hyun
    • Information Systems Review
    • /
    • v.6 no.2
    • /
    • pp.137-159
    • /
    • 2004
  • The next-generation information appliances are those that can be connected to other appliances through a wired or wireless network so that they can exchange data and be remotely controlled from inside or outside the home. Many electronics companies have invested aggressively in developing new information appliances to take the initiative in the upcoming home networking era. They require systematic methods for developing new information appliances and for sharing the knowledge acquired through those methods. This paper stored the knowledge acquired from developing information appliances and developed a knowledge management system that supports companies in using that knowledge to develop their own information appliances. To acquire the knowledge, this paper applied two User-Centered Design methods instead of general knowledge acquisition methods. New product ideas were suggested by analyzing and observing user actions, and the resulting knowledge was stored in the knowledge bases Knowledge from Analyzing User Actions and Knowledge from Observing User Actions. Seven new product ideas suggested by the User-Centered Design process were made into design mockups, and videos were produced to show the real situations in which they would be used in the home of the future; these were stored in the knowledge base Knowledge from Producing New Emotive Life Videos. Finally, data on the current state of future-home development in Europe and Japan, along with articles from domestic newspapers, were collected and stored in the knowledge base Knowledge from Surveying Technology Developments. This paper developed a web-based knowledge management system that supports companies in using the acquired knowledge. Knowledge users can obtain the knowledge required for developing new information appliances and suggest their own product ideas through the system. As a result, this research is not confined to a case study of product development but can also facilitate the development of the next-generation information appliances.

Design of Translator for generating Secure Java Bytecode from Thread code of Multithreaded Models (다중스레드 모델의 스레드 코드를 안전한 자바 바이트코드로 변환하기 위한 번역기 설계)

  • 김기태;유원희
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.148-155
    • /
    • 2002
  • Multithreaded models improve the efficiency of parallel systems by combining inner parallelism, asynchronous data availability, and the locality of the von Neumann model. Such a model executes thread code generated by a compiler, and the quality of that code depends on the generation method. However, multithreaded models have the drawback that their execution model is restricted to a specific platform. Java, on the contrary, is platform independent, so if thread code can be translated into Java bytecode, the advantages of multithreaded models can be used on many platforms. Java executes Java bytecode, the intermediate language format for the Java virtual machine. In our translator, Java bytecode plays the role of an intermediate language, and the Java virtual machine works as the back-end. However, Java bytecode translated from multithreaded models has the drawback that it is not secure. In this paper, thread code from multithreaded models, which then becomes platform independent, is made executable on the Java virtual machine. We design and implement a translator that converts thread code of multithreaded models into Java bytecode and checks the resulting bytecode for security problems.


Revision of Nutrition Quotient for Korean adults: NQ-2021 (한국 성인을 위한 영양지수 개정: NQ-2021)

  • Yook, Sung-Min;Lim, Young-Suk;Lee, Jung-Sug;Kim, Ki-Nam;Hwang, Hyo-Jeong;Kwon, Sehyug;Hwang, Ji-Yun;Kim, Hye-Young
    • Journal of Nutrition and Health
    • /
    • v.55 no.2
    • /
    • pp.278-295
    • /
    • 2022
  • Purpose: This study was undertaken to revise and update the Nutrition Quotient (NQ) for Korean adults, a tool used to evaluate dietary quality and behavior. Methods: An initial set of 31 candidate items for the measurable food behavior checklist was adopted based on the previous NQ checklist, recent literature reviews, national nutrition policies, and recommendations. A pilot survey was conducted on 100 adults aged 19 to 64 residing in Seoul and Gyeonggi Province from March to April 2021 using a provisional 26-item checklist. The pilot survey data were analyzed using factor analysis and frequency analysis to determine whether the checklist items were well organized and whether responses to the questions were well distributed. As a result, the food behavior checklist was reduced to 23 items for the nationwide survey, which was administered to 1,000 adults (470 men and 530 women) aged 19 to 64 from May to August 2021. The construct validity of the developed NQ (NQ-2021) was assessed using confirmatory factor analysis with linear structural relations. Results: Eighteen items in three categories, namely balance (8 items), moderation (6 items), and practice (4 items), were finally included in the NQ-2021 food behavior checklist. 'Balance' items addressed the intake frequencies of essential foods, 'moderation' items the frequencies of unhealthy food intakes or behaviors, and 'practice' items eating behaviors. Items and categories were weighted using standardized path coefficients to calculate NQ-2021 scores. Conclusion: The updated NQ-2021 appears suitable for easily and quickly assessing the diet quality and behavior of Korean adults.
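The abstract states that items and categories are weighted by standardized path coefficients to produce the NQ-2021 score. The following is a minimal arithmetic sketch of such a weighted score; the weights and example scores are fabricated placeholders, not the published NQ-2021 coefficients.

```python
# Hedged sketch of a weighted NQ-style score from category scores and
# path-coefficient weights. All numbers below are fabricated placeholders.
CATEGORY_WEIGHTS = {"balance": 0.40, "moderation": 0.35, "practice": 0.25}


def nq_score(category_scores: dict) -> float:
    """Weighted sum of 0-100 category scores using path-coefficient weights."""
    return sum(CATEGORY_WEIGHTS[c] * category_scores[c] for c in CATEGORY_WEIGHTS)


# Example respondent: each category score would itself be a weighted average
# of its checklist items.
print(nq_score({"balance": 62.0, "moderation": 75.0, "practice": 58.0}))  # 65.55
```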

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data satisfies the defining conditions of Big Data: the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS Big Data, this information can serve as an important new source for creating value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) the topic keyword set corresponding to the daily ranking; (2) a daily time-series graph of a topic over the course of a month; (3) the importance of a topic presented through a treemap based on a scoring system and frequency; and (4) a daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing unrefined, unstructured data. In addition, such analysis requires the latest big data technology, such as the Hadoop distributed system or NoSQL, an alternative to the relational database, to process large amounts of real-time data rapidly. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. Therefore, TITS uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a collection of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI) is designed using these libraries and can detect issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the quality of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS).
On this basis, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
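TITS's core step is topic extraction from tweet text. The sketch below shows a generic LDA topic-modeling pass over toy tweets with scikit-learn; the paper does not specify this exact library or configuration, so every value here should be read as an assumption.

```python
# Hedged sketch of topic extraction over tweet-like texts with LDA, the kind
# of topic modeling TITS applies. Toy tweets and topic counts are placeholders.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

tweets = [
    "election debate candidate vote policy",
    "vote turnout election poll candidate",
    "baseball season opening game stadium",
    "stadium crowd baseball pitcher game",
]

# Bag-of-words counts; stop-word removal and noun extraction would precede
# this step for real (Korean) tweets.
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(tweets)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the top keywords per discovered topic.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:3]]
    print(f"topic {topic_idx}: {', '.join(top_terms)}")
```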