• Title/Abstract/Keywords: Distribution Information

Search Results: 11,986 (Processing Time: 0.039 seconds)

Parafovea Information Processing of Adults and Adolescents in Reading: Diffusion Model Analysis on Distributions of Eye Fixation Durations (글읽기에서 나타난 성인과 청소년의 중심와주변 정보처리: 고정시간 분포에 대한 확산모형 분석)

  • Choo, Hyeree;Koh, Sungryong
    • Korean Journal of Cognitive Science
    • /
    • v.31 no.4
    • /
    • pp.103-136
    • /
    • 2020
  • This study compares the parafoveal preview effects of an adolescent group and an adult group using an eye-tracking experiment. It also confirms that the starting-point parameter of the one-boundary diffusion model can account for the data obtained in these experiments. In two experiments, parafoveal information processing was examined using the boundary technique. In Experiment 1, reading times were compared between a condition with a high-frequency word preview and a condition with a masked preview. In Experiment 2, a condition in which low-frequency words were given as parafoveal preview information was compared with a condition in which the parafoveal preview was masked. Both the adolescent group and the adult group showed a parafoveal preview effect. In addition, the first fixation durations, single fixation durations, and gaze durations of the two groups differed depending on the properties of the word presented in the parafovea. The first fixation data from the two experiments were divided into quantiles and fitted to the one-boundary diffusion model. Based on these results, we argue that parafoveal preview information processing in reading can be described by the starting-point parameter of the one-boundary diffusion model.
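As a concrete illustration of the modeling approach this abstract describes, the sketch below shows the first-passage-time density of a one-boundary diffusion (Wiener) process, which is the Wald distribution, and how a preview benefit can be expressed as a starting point shifted toward the boundary. All parameter values are hypothetical, not the study's estimates.

```python
import numpy as np

def wald_pdf(t, a, z, v):
    """First-passage-time density of a one-boundary Wiener diffusion
    with boundary a, starting point z, drift v, unit diffusion coefficient."""
    d = a - z  # remaining distance to the boundary
    return d / np.sqrt(2 * np.pi * t**3) * np.exp(-(d - v * t) ** 2 / (2 * t))

def wald_mean(a, z, v):
    """Mean first-passage time: (a - z) / v."""
    return (a - z) / v

# A preview benefit can be modelled as a starting point shifted toward
# the boundary: identical drift, shorter predicted fixation time.
t_masked = wald_mean(a=1.0, z=0.0, v=4.0)
t_preview = wald_mean(a=1.0, z=0.3, v=4.0)
```

Quantiles of observed first-fixation distributions would then be compared against quantiles implied by this density during fitting.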

A Study on the One-Way Distance in the Longitudinal Section Using Probabilistic Theory (확률론적 이론을 이용한 종단면에서의 단방향 이동거리에 관한 연구)

  • Kim, Seong-Ryul;Moon, Ji-Hyun;Jeon, Hae-Sung;Sue, Jong-Chal;Choo, Yeon-Moon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.12
    • /
    • pp.87-96
    • /
    • 2020
  • To use a hydraulic structure effectively, the velocity of a river should be known in detail. In practice, velocity measurements are not conducted often enough because of their high cost. The formulae commonly used to obtain the discharge and velocity of a river are the Manning and Chezy formulae, which are empirical equations applicable to uniform flow. This study builds on Chiu (1987), which used entropy theory to overcome the limits of the existing velocity formulae and distributions, and suggests velocity and travel-distance formulae derived from information entropy. Data from a channel with records of point velocities were used to verify the derived formulae's utility, yielding R² values of 0.9993 for distance and 0.8051~0.9483 for velocity. The travel distance and velocity of a point moving with the streamflow were calculated from limited flow information, which eases the difficulty of taking frequent measurements during floods. This can be used to construct a longitudinal section of a river composed of horizontal distance and elevation. Moreover, GIS makes it possible to obtain accurate information, such as the characteristics of a river. Linking flow information with a GIS model can support flood warning and forecasting systems.
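The entropy-based profile the study builds on has a standard closed form. The sketch below (with hypothetical values, not the paper's channel data) shows Chiu's entropy velocity distribution and the R² measure used to verify a derived formula against measurements.

```python
import numpy as np

def chiu_velocity(xi, u_max, M):
    """Chiu (1987) entropy-based velocity profile:
    u(xi) = (u_max / M) * ln(1 + (e^M - 1) * xi),
    with xi the normalized coordinate (0 at the bed,
    1 at the point of maximum velocity) and M the entropy parameter."""
    return (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)

def r_squared(observed, predicted):
    """Coefficient of determination used to verify a derived formula."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical profile: at xi = 1 the formula returns u_max by construction.
u = chiu_velocity(np.array([0.0, 0.5, 1.0]), u_max=2.0, M=3.0)
```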

A Scheme of Data-driven Procurement and Inventory Management through Synchronizing Production Planning in Aircraft Manufacturing Industry (항공기 제조업에서 생산계획 동기화를 통한 데이터기반 구매조달 및 재고관리 방안 연구)

  • Yu, Kyoung Yul;Choi, Hong Suk;Jeong, Dae Yul
    • The Journal of Information Systems
    • /
    • v.30 no.1
    • /
    • pp.151-177
    • /
    • 2021
  • Purpose This paper aims to improve management performance by responding effectively to production needs and reducing inventory through the synchronization of production planning and procurement in the aviation industry. In this study, the differences between production planning and execution were first analyzed in terms of demand, supply, inventory, and process using big data collected from a domestic aircraft manufacturer. The paper analyzed the problems in procurement and inventory management using legacy big data from the company's ERP system. Based on the analysis, we performed a simulation to derive an efficient procurement and inventory management plan. Through the analysis and simulation of operational data, we were able to discover procurement and inventory policies that respond effectively to production needs. Design/methodology/approach This is an empirical study of the causes of decreased inventory turnover and increased inventory cost due to desynchronization between production requirements and procurement. The actual operational data, a total of 21,306,611 transaction records covering the 18 months from January 2019 to June 2020, were extracted from the ERP system. They include basic information on materials, material consumption and movement history, inventory/receipt/shipment status, and production orders. The data analysis proceeded in three steps: first, we identified the current state and problems of the production process to grasp what had happened; second, we analyzed the data to identify expected problems through cross-link analysis between transactions; and finally, we defined what to do. Analysis techniques such as correlation analysis, moving-average analysis, and linear regression analysis were applied to predict inventory status. A simulation was performed to analyze the appropriate inventory level according to the control of fluctuations in the production plan.
In the simulation, we tested four alternatives for coordinating the synchronization between the procurement plan and the production plan. All of the alternatives gave more plausible results than the actual past operation. Findings Based on the big data extracted from the ERP system, the relationship between the level of delivery and the distribution of fluctuations was analyzed in terms of demand, supply, inventory, and process. As a result of analyzing the inventory turnover rate, the root causes of the inventory increase were identified. In addition, based on the data on delivery and receipt performance, it was possible to analyze accurately how large a gap occurs between supply and demand and to determine how much this affects the inventory level. Moreover, the simulation produced more predictable and insightful results, showing that organizational performance such as inventory cost and lead time can be improved by synchronizing production planning and purchase procurement with supply and demand information. The results of the big data analysis and simulation offer insights into production planning, procurement, and inventory management for smart manufacturing and performance improvement.
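The moving-average and linear-regression steps named in the abstract can be sketched as follows; the inventory series is synthetic, not the company's ERP data.

```python
import numpy as np

def moving_average(series, window):
    """Trailing moving average used to smooth a consumption or inventory series."""
    s = np.asarray(series, dtype=float)
    return np.convolve(s, np.ones(window) / window, mode="valid")

def linear_trend(series):
    """Least-squares (slope, intercept) fit for projecting an inventory level."""
    s = np.asarray(series, dtype=float)
    t = np.arange(len(s))
    slope, intercept = np.polyfit(t, s, 1)
    return slope, intercept

# Synthetic daily inventory with a linear drift of +2 units/day
inventory = [100 + 2 * t for t in range(30)]
smoothed = moving_average(inventory, window=7)
slope, intercept = linear_trend(inventory)
```

A positive fitted slope on such a series would flag the kind of steady inventory build-up the study traces back to desynchronized procurement.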

A Study on the Possibility of Blockchain Technology Adoption in the Logistics Industry (물류산업 내 블록체인 기술 도입 가능성 연구)

  • Kye, Dong Min;Hur, Sung Ho
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.2
    • /
    • pp.116-131
    • /
    • 2022
  • With the recent progress of the Fourth Industrial Revolution, the logistics industry is also making efforts to introduce smart logistics, and various attempts are being made to spread logistics informatization, which is the core of smart logistics. Among the relevant technologies, blockchain is regarded as one that will contribute to the spread of logistics informatization and is being applied in various fields. Accordingly, to discuss the applicability of blockchain technology to the logistics industry, this study defined the characteristics of blockchain technology, reviewed related cases, and conducted a survey on the possibility of its application in the industry. Blockchain technology can be characterized by economic feasibility, speed, and transparency in terms of work efficiency, and by scalability, decentralization, and reliability (security) in terms of added-value creation. It was confirmed that blockchain is already being introduced in fields such as distribution, finance, personal information, and public services. The survey of the logistics industry confirmed that its level of informatization has reached the stage of generating profits from information, but that the industry remains passive about sharing and utilizing information due to concerns about information leakage. Nevertheless, awareness of and expectations for informatization are high, and the informatization of the logistics industry, and the smart logistics built on it, are expected to advance a step further with the future introduction of blockchain technology.

Analysis of Plant Height, Crop Cover, and Biomass of Forage Maize Grown on Reclaimed Land Using Unmanned Aerial Vehicle Technology

  • Dongho, Lee;Seunghwan, Go;Jonghwa, Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.47-63
    • /
    • 2023
  • Unmanned aerial vehicle (UAV) and sensor technologies are developing rapidly and are being usefully applied to spatial-information-based agricultural management and smart agriculture. Until now, it has been difficult to obtain timely production information for large-scale agriculture on reclaimed land. However, smart agriculture, which uses sensors, information technology, and UAV technology to manage a large amount of farmland efficiently with a small number of people, is expected to become more common in the near future. In this study, we evaluated the productivity of forage maize grown on reclaimed land using UAV- and sensor-based technologies. We compared the plant height, vegetation cover ratio, fresh biomass, and dry biomass of maize grown on general farmland and on reclaimed land in South Korea. A biomass model was constructed based on plant height, cover ratio, and volume-based biomass using UAV-based images and Farm-Map, and related estimates were obtained. Fresh biomass was estimated with a very precise model (R²=0.97, root mean square error [RMSE]=3.18 t/ha, normalized RMSE [nRMSE]=8.08%). The estimated dry biomass had a coefficient of determination of 0.86, an RMSE of 1.51 t/ha, and an nRMSE of 12.61%. The average plant height per field lot was about 0.91 m on reclaimed land and about 1.89 m on general farmland, a difference of about 48%. The average proportion of the maize fraction per field lot was approximately 65% on reclaimed land and 94% on general farmland, a difference of about 29%. The average fresh biomass per field lot on reclaimed land was 10 t/ha, about 36% of that on general farmland (28.1 t/ha). The average dry biomass per field lot was about 4.22 t/ha on reclaimed land and about 8 t/ha on general farmland, so the reclaimed land produced approximately 53% of the dry biomass of the general farmland.
Based on these results, UAV- and sensor-based imagery was confirmed to enable accurate analysis of agricultural information and crop growth conditions over a large area. The technology and methods used in this study are expected to be useful for implementing field-scale smart agriculture in large reclaimed areas.
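The accuracy metrics reported above can be computed as in this short sketch. Note that the nRMSE normalization used here (RMSE divided by the observed mean) is one common convention and an assumption; the paper's exact convention is not stated in the abstract.

```python
import numpy as np

def rmse(observed, predicted):
    """Root mean square error, in the units of the observations (e.g., t/ha)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

def nrmse_percent(observed, predicted):
    """RMSE normalized by the observed mean, in percent (one common convention)."""
    obs = np.asarray(observed, dtype=float)
    return 100.0 * rmse(obs, predicted) / float(obs.mean())
```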

Seismic Zonation on Site Responses in Daejeon by Building Geotechnical Information System Based on Spatial GIS Framework (공간 GIS 기반의 지반 정보 시스템 구축을 통한 대전 지역의 부지 응답에 따른 지진재해 구역화)

  • Sun, Chang-Guk
    • Journal of the Korean Geotechnical Society
    • /
    • v.25 no.1
    • /
    • pp.5-19
    • /
    • 2009
  • Most earthquake-induced geotechnical hazards are caused by site effects related to the amplification of ground motion, which is strongly influenced by local geologic conditions such as soil thickness (bedrock depth) and soil stiffness. In this study, an integrated GIS-based information system for geotechnical data, called a geotechnical information system (GTIS), was constructed to establish a regional countermeasure against earthquake-induced hazards in the urban area of Daejeon, a hub of research and development in Korea. To build the GTIS for the area concerned, pre-existing geotechnical data were collected across an area extending beyond the study area, and site visits were additionally carried out to acquire surface geo-knowledge data. For the practical application of the GTIS to estimating site effects in the area concerned, a seismic zoning map of the site period was created and presented as a regional synthetic strategy for predicting earthquake-induced hazards. In addition, seismic zonation for site classification according to the spatial distribution of the site period was performed to determine site amplification coefficients for seismic design and seismic performance evaluation at any site in the study area. Based on this case study of seismic zonation in Daejeon, the GIS-based GTIS was verified to be very useful for regional prediction of seismic hazards and for decision support in seismic hazard mitigation.
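The site period mapped in studies like this one is commonly estimated from the soil column as T_G = 4·Σ(h_i/Vs_i), summing layer thickness over shear-wave velocity down to bedrock. The sketch below uses a hypothetical two-layer profile, not the Daejeon borehole data.

```python
def site_period(layers):
    """Characteristic site period of a layered soil column over bedrock:
    T_G = 4 * sum(h_i / Vs_i), with thickness h_i in m and
    shear-wave velocity Vs_i in m/s."""
    return 4.0 * sum(h / vs for h, vs in layers)

# Hypothetical profile: 10 m at 200 m/s over 15 m at 400 m/s.
layers = [(10.0, 200.0), (15.0, 400.0)]
T = site_period(layers)  # seconds
```

Zonation then amounts to classifying each mapped location by where its T falls among the period bands chosen for the amplification coefficients.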

A Systematic Review of Trends of Domestic Digital Curation Research (체계적 문헌고찰을 통한 국내 디지털 큐레이션 연구동향 분석)

  • Minseok Park;Jisue Lee
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.24 no.2
    • /
    • pp.41-63
    • /
    • 2024
  • This study investigated research trends in digital curation indexed in a prominent domestic academic information database. A systematic literature review was conducted on 39 academic papers published from 2009 to 2023, examining indexing status by publication year, venue, academic discipline, research area distribution, researcher affiliation and occupation, and research type. In addition, network centrality analysis and cohesive group analysis were performed on 69 author keywords. The findings revealed several key points. First, digital curation research peaked in 2015 and 2016 with five publications each year, decreased slightly thereafter, and has consistently yielded four or more publications annually since 2019. Second, among the 39 studies, 25 were conducted in interdisciplinary fields, including library and information science, while 11 were in the humanities, such as miscellaneous humanities. The most prominent research areas were theoretical and infrastructural aspects, information management and services, and institutional domains. Third, digital curation research was predominantly led by university-affiliated professors and researchers, with collaborative research more prevalent than solo research. Lastly, analysis of author keywords revealed that "digital curation," "institution," and "content" were the most influential central keywords within the overall network.
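The keyword network analysis described above can be sketched as follows: build a co-occurrence network from per-paper author-keyword sets, then compute degree centrality. The keyword lists are hypothetical examples, not the study's 69 keywords.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(keyword_lists):
    """Undirected keyword co-occurrence network: nodes are keywords,
    edge weights count the papers in which two keywords appear together."""
    edges = Counter()
    nodes = set()
    for kws in keyword_lists:
        uniq = sorted(set(kws))
        nodes.update(uniq)
        for a, b in combinations(uniq, 2):
            edges[(a, b)] += 1
    return nodes, edges

def degree_centrality(nodes, edges):
    """Degree centrality: distinct neighbours / (n - 1)."""
    neighbours = {n: set() for n in nodes}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    n = len(nodes)
    return {k: len(v) / (n - 1) for k, v in neighbours.items()}

papers = [  # hypothetical author-keyword sets, not the study's data
    ["digital curation", "institution", "content"],
    ["digital curation", "content"],
    ["digital curation", "institution"],
]
nodes, edges = cooccurrence_network(papers)
cent = degree_centrality(nodes, edges)
```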

A Comparative Study of IT Outsourcing Research in Korea and China on Author Bibliographic Coupling Analysis (저자서지결합분석을 통한 한중 IT 아웃소싱 연구 비교)

  • Hyoung Jin Min;Sung Sik Park;Yuchen Jin
    • Information Systems Review
    • /
    • v.22 no.4
    • /
    • pp.1-20
    • /
    • 2020
  • This study uses bibliometric analysis and author bibliographic coupling analysis (ABCA) to analyze IT outsourcing research in Korea and China up to 2017 and to identify researchers' subject areas and the intellectual structure, laying a foundation for future researchers in the IT outsourcing area. Through the National Digital Science Library (NDSL) of Korea and the China Academic Journal Network Publishing Database (CAJD) of China, related documents were collected and authors whose work had been published more than twice were identified. ABCA was used to visualize an author map from which researchers and subject areas can be identified in a meaningful way. The results show that the study of IT outsourcing in Korea began earlier and has developed further than that of China. Research in Korea has already reached maturity; by contrast, China remains somewhere between the developing period and a bottleneck period, and the distribution of its papers is still dispersed. The author map shows that the hot subject area among Korean researchers is IT outsourcing strategy, while among Chinese scholars it is IT outsourcing management.
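The core of ABCA, measuring how strongly two authors are coupled by the cited references their bodies of work share, can be sketched in a few lines. The author names and reference IDs below are hypothetical, not drawn from NDSL or CAJD.

```python
def coupling_strength(refs_by_author):
    """Author bibliographic coupling: each author pair is weighted by the
    number of cited references their publication sets have in common."""
    authors = sorted(refs_by_author)
    strength = {}
    for i, a in enumerate(authors):
        for b in authors[i + 1:]:
            shared = refs_by_author[a] & refs_by_author[b]
            if shared:
                strength[(a, b)] = len(shared)
    return strength

refs = {  # hypothetical cited-reference sets per author
    "Author A": {"r1", "r2", "r3"},
    "Author B": {"r2", "r3", "r4"},
    "Author C": {"r5"},
}
strength = coupling_strength(refs)
```

These pairwise weights are what an author map visualizes: strongly coupled authors cluster into shared subject areas.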

The Coexistence of Online Communities: An Agent-Based Simulation from an Ecological Perspective (온라인 커뮤니티 간 공존: 생태학적 관점의 에이전트 기반 시뮬레이션)

  • Luyang Han;Jungpil Hahn
    • Information Systems Review
    • /
    • v.19 no.2
    • /
    • pp.115-136
    • /
    • 2017
  • Online communities have become a substantial part of people's daily lives. However, only a few communities succeed and attract the majority of users, whereas the vast majority struggle for survival. When various communities coexist, the factors that maintain attraction and lead to success should be identified and examined. The concept of coexistence has been extensively explored in the organizational ecology literature. However, given the similarities and differences between online communities and traditional organizations, the direct application of organizational theories to online contexts should be approached cautiously. In this study, we follow the roadmap proposed by Davis et al. (2007) in conducting an agent-based modeling and simulation study to develop novel theory grounded in the previous literature. In the case of two coexisting communities, we find that community size and participation costs significantly affect community development. A large community can attract many active members who log in frequently, while low participation costs encourage members' reading and posting behaviors. We also observe an important influence of the distribution of interests on communities' topic trends: a community whose population focuses on only one topic quickly converges on that topic, regardless of whether the initial topic is broad or focused. This simulation model provides theoretical implications for the literature and practical guidance for operators of online communities.
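To illustrate the kind of mechanism the abstract describes (not the authors' actual model), here is a toy agent-based sketch of two coexisting communities in which each agent's utility combines a size benefit with a participation cost; the community with the lower cost gradually absorbs the population.

```python
import random

def simulate(n_agents=200, steps=50, size_benefit=1.0,
             cost=(0.2, 0.6), seed=42):
    """Toy two-community agent-based model: each step, a fraction of agents
    re-evaluate and join the community with the higher utility
    utility = size_benefit * membership_share - participation_cost."""
    random.seed(seed)
    membership = [random.randint(0, 1) for _ in range(n_agents)]
    for _ in range(steps):
        counts = [membership.count(0), membership.count(1)]
        for i in range(n_agents):
            util = [size_benefit * counts[c] / n_agents - cost[c]
                    for c in (0, 1)]
            if random.random() < 0.1:  # 10% of agents re-evaluate per step
                membership[i] = 0 if util[0] >= util[1] else 1
    return membership.count(0), membership.count(1)

c0, c1 = simulate()
```

With these hypothetical parameters, the low-cost community (index 0) ends up much larger, echoing the finding that size and participation costs jointly shape community development.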

Conditional Generative Adversarial Network based Collaborative Filtering Recommendation System (Conditional Generative Adversarial Network(CGAN) 기반 협업 필터링 추천 시스템)

  • Kang, Soyi;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.157-173
    • /
    • 2021
  • With the development of information technology, the amount of available information increases daily, but having access to so much information makes it difficult for users to find what they seek. Users want a system that reduces information retrieval and learning time, saving them from personally reading and judging all available information. As a result, recommender systems are an increasingly important technology, essential to business. Collaborative filtering is used in various fields with excellent performance because recommendations are made based on similar users' interests and preferences. However, it has limitations. Sparsity, which occurs when user-item preference information is insufficient, is the main limitation of collaborative filtering: ratings in the user-item matrix may be distorted depending on the popularity of products, and new users may not yet have rated anything. This lack of historical data for identifying consumer preferences is referred to as data sparsity, and various methods have been studied to address it. However, most attempts to solve the sparsity problem are not optimal because they apply only when additional data, such as users' personal information, social networks, or item characteristics, are available. Another problem is that real-world rating data are mostly biased toward high scores, resulting in severe imbalance. One cause of this imbalanced distribution is purchasing bias: users who rate products highly buy them, whereas those who would rate them low are less likely to buy them and thus do not leave negative reviews. Because of this, reviews by purchasing users are more likely to be positive than most users' actual preferences would suggest.
Consequently, models over-learn the high-frequency classes in the biased rating data, distorting the picture of the market. Applying collaborative filtering to such imbalanced data leads to poor recommendation performance due to excessive learning of the majority classes. Traditional oversampling techniques for this problem are likely to cause overfitting because they repeat the same data, which acts as noise in learning and reduces recommendation performance. In addition, most existing preprocessing methods for data imbalance are designed for binary classes. Binary-class imbalance techniques are difficult to apply to multi-class problems because they cannot model phenomena such as objects at class boundaries or objects overlapping multiple classes. Research has therefore converted multi-class problems into binary-class problems, but this simplification can introduce classification errors when the results of classifiers learned on sub-problems are combined, losing important information about relationships beyond the selected items. More effective methods for multi-class imbalance are therefore needed. We propose a collaborative filtering model that uses a CGAN to generate realistic virtual data to populate the empty user-item matrix. The conditional vector y identifies the distributions of minority classes, so the generator produces data reflecting their characteristics. Collaborative filtering then maximizes the performance of the recommendation system via hyperparameter tuning. This process improves model accuracy by addressing the sparsity problem of collaborative filtering while mitigating the imbalance present in real data. Our model shows superior recommendation performance over existing oversampling techniques on sparse real-world data.
SMOTE, Borderline-SMOTE, SVM-SMOTE, ADASYN, and GAN were used as comparison models, and our model achieved the best prediction accuracy on the RMSE and MAE evaluation metrics. Building on this study, deep-learning-based oversampling should further improve the performance of recommender systems on real data and can be used to build business recommendation systems.
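The evaluation setting described above can be sketched with a sparsity measure for the user-item matrix and the RMSE/MAE metrics used to compare models; the rating matrix below is a toy example, not the study's data.

```python
import numpy as np

def sparsity(ratings):
    """Share of unrated (zero) cells in a user-item rating matrix."""
    m = np.asarray(ratings, dtype=float)
    return 1.0 - np.count_nonzero(m) / m.size

def rmse(actual, predicted):
    """Root mean square error between held-out and predicted ratings."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.sqrt(np.mean((a - p) ** 2)))

def mae(actual, predicted):
    """Mean absolute error between held-out and predicted ratings."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(a - p)))

# Toy 3x4 rating matrix: 0 marks an unrated cell that the
# generator would be asked to fill with a realistic virtual rating.
R = [[5, 0, 0, 4],
     [0, 3, 0, 0],
     [1, 0, 0, 0]]
```

Lower RMSE/MAE on held-out ratings is the criterion by which the CGAN-based model is compared against the SMOTE-family and GAN baselines.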