Title/Summary/Keyword: Complex network analysis

Simulation Analysis of Urban Heat Island Mitigation of Green Area Types in Apartment Complexes (유형별 녹지 시뮬레이션을 통한 아파트 단지 내 도시열섬현상 저감효과 분석)

  • Ji, Eun-Ju; Kim, Da-Been; Kim, Yu-Gyeong; Lee, Jung-A
    • Journal of the Korean Institute of Landscape Architecture, v.51 no.3, pp.153-165, 2023
  • The purpose of this study is to propose effective scenarios for green areas in apartment complexes that can improve the connection between green spaces, considering wind flow, thermal comfort, and mitigation of the urban heat island effect. The study site was an apartment complex in Godeok-dong, Gangdong-gu, Seoul, Korea, selected by comparing temperature and discomfort-index data collected from June to August 2020. First, the thermal and wind environment of the current site was analyzed. Based on the findings, three scenarios were proposed, taking into account both green patch and corridor elements: Scenario 1 (green patch), Scenario 2 (green corridor), and Scenario 3 (green patch & corridor). Each scenario's wind speed, wind flow, and thermal comfort were then analyzed using ENVI-met to compare their effectiveness in mitigating the urban heat island effect. The results showed that green patches contributed to increased wind speed and improved wind flow, leading to a reduction of 31.20% in the predicted mean vote (PMV) and 68.59% in the physiological equivalent temperature (PET). Green corridors, on the other hand, connected wind paths and increased wind speed further than green patches; they proved more effective than green patches in mitigating the urban heat island, with reductions of 92.47% in PMV and 90.14% in PET. The combination of green patches and green corridors produced the greatest increase in wind speed and the strongest connectivity within the apartment complex, with reductions of 95.75% in PMV and 95.35% in PET. However, in narrow areas, patches were found to be more effective than green corridors at improving thermal comfort. Therefore, to effectively mitigate the urban heat island effect, enhancing green areas by combining green corridors with green patches is recommended. This study can serve as fundamental data for planning green areas to mitigate urban heat island effects in apartment complexes, and as a method to improve urban resilience in response to the challenges posed by the urban heat island effect.
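A minimal sketch of how scenario-versus-baseline reductions like those reported above (e.g., 31.20% in PMV for the green-patch scenario) can be computed from simulation output. The file and column names are hypothetical placeholders; real ENVI-met receptor exports need their own parsing.

```python
# Percent reduction of a thermal-comfort index between a baseline run and
# a greening scenario. File and column names are hypothetical.
import pandas as pd

def percent_reduction(baseline: pd.Series, scenario: pd.Series) -> float:
    """Mean percent reduction of an index (e.g., PMV or PET) vs. baseline."""
    return float((baseline.mean() - scenario.mean()) / baseline.mean() * 100)

base = pd.read_csv("baseline_receptors.csv")   # hypothetical export
scen1 = pd.read_csv("scenario1_patch.csv")     # green-patch scenario

for index_name in ("PMV", "PET"):
    print(index_name, f"{percent_reduction(base[index_name], scen1[index_name]):.2f}%")
```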

Analysis of the Case of Separation of Mixtures Presented in the 2015 Revised Elementary School Science 4th Grade Authorized Textbook and Comparison of the Concept of Separation of Mixtures between Teachers and Students (2015 개정 초등학교 과학과 4학년 검정 교과서에 제시된 혼합물의 분리 사례 분석 및 교사와 학생의 혼합물 개념 비교)

  • Chae, Heein; Noh, Sukgoo
    • Journal of Korean Elementary Science Education, v.43 no.1, pp.122-135, 2024
  • The purpose of this study was to analyze the examples presented in the "Separation of Mixtures" section of the 2015 revised authorized science textbook introduced in elementary schools in 2022, and to examine how teachers and students understand the concept. To do this, 96 keywords were extracted through three cleansing processes from the mixture-separation examples presented in the textbook. To analyze teachers' perceptions, responses to a survey of 32 elementary school teachers in Gyeonggi-do were used, along with a survey of 92 fourth graders who had learned separation of mixtures with an authorized textbook in 2022. As a result, solids showed the highest ratio, at 54 out of 96 separations (56.3%), with the largest number of cases presented according to the characteristics of the students' developmental stage, followed by living things, liquids, other objects and substances, and gases. The structure of and interrelationships among the 96 extracted keywords were systematized through network analysis, and the connections between keywords belonging to the same mixture were analyzed. The teachers partially recognized the separation of the complex mixtures presented in the textbook, but most of the students did not. The analysis of teachers' and students' perceptions of the seven separation categories presented in the survey showed that responses were not based on a clear conceptual understanding of mixture separation; rather, respondents tended to answer differently depending on the characteristics of each individual category. It was therefore concluded that clearer examples of mixture separation need to be presented so that students can better understand this somewhat abstract concept.
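The keyword network analysis described above can be sketched as follows: build a co-occurrence graph over extracted keywords and rank them by centrality. The keyword pairs below are illustrative placeholders, not the study's actual data.

```python
# Co-occurrence network over extracted keywords, ranked by degree centrality.
import networkx as nx

cooccurrence = [           # (keyword, keyword, co-occurrence count) -- illustrative
    ("sand", "gravel", 4),
    ("sand", "sieve", 3),
    ("water", "salt", 5),
    ("water", "evaporation", 2),
]

G = nx.Graph()
for a, b, w in cooccurrence:
    G.add_edge(a, b, weight=w)

centrality = nx.degree_centrality(G)
for kw, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{kw}: {c:.2f}")
```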

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin; Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems, v.21 no.4, pp.1-16, 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to the statistics of the World Health Organization, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, driver training programs, and so on. Records on traffic accidents are generated and maintained for this purpose. To make these records meaningful and effective, it is necessary to analyze the relationships between traffic accidents and related factors, including vehicle design, road design, weather, driver behavior, etc. Insights derived from these analyses can be used in accident prevention approaches. Traffic accident data mining is an activity to find useful knowledge about such relationships that is not well known and that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most studies focused mainly on predicting accident risk from accident-related factors, using supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks. However, the prediction models derived from these algorithms are too complex for humans to understand, because the main purpose of these algorithms is prediction, not explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into several groups, but the derived groups themselves are still not easy for humans to understand, so additional analytic work is necessary. Rule-based learning methods are adequate when we want to derive a comprehensible form of knowledge about the target domain. They derive a set of if-then rules that represent relationships between the target feature and other features. Rules are fairly easy for humans to interpret, so they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. These two approaches have been used in a wide range of areas, from transaction analysis and accident data analysis to detection of statistically significant patient risk groups and discovery of key persons in social communities. We use both association rule learning and subgroup discovery to discover useful patterns from a traffic accident dataset with many features, including driver profile, accident location, accident type, vehicle information, regulation violations, and so on. Association rule learning, an unsupervised method, searches for frequent item sets in the data and translates them into rules. In contrast, subgroup discovery is a supervised method that discovers rules about user-specified concepts satisfying a certain degree of generality and unusualness. Depending on which aspect of the data we focus on, we may combine multiple relevant features of interest into a synthetic target feature and give it to the rule learning algorithms. After a set of rules is derived, some postprocessing steps are taken to make the rule set more compact and easier to understand by removing uninteresting or redundant rules.
We conducted a set of experiments mining our traffic accident data in both unsupervised and supervised mode to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its pure unsupervised mode, can discover some hidden relationships among the features. Under a supervised setting with a combinatorial target feature, however, the subgroup discovery method finds good rules much more easily than association rule learning, which requires a lot of effort to tune the parameters.
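A hedged sketch contrasting the two rule-based approaches on one-hot encoded accident records. The feature names and toy data are hypothetical; association rules are mined with mlxtend, and the subgroup is scored with weighted relative accuracy (WRAcc), the standard quality measure in subgroup discovery.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

df = pd.DataFrame({                 # toy one-hot accident data (hypothetical)
    "night": [1, 1, 0, 1, 0, 1],
    "rain":  [1, 0, 0, 1, 1, 1],
    "young_driver": [1, 1, 0, 0, 1, 1],
    "severe": [1, 1, 0, 1, 0, 1],   # synthetic target feature
}).astype(bool)

# Unsupervised: frequent itemsets translated into association rules.
itemsets = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]])

# Supervised: score one candidate subgroup against the target via WRAcc,
# i.e., coverage * (precision within the subgroup - base rate).
def wracc(cover: pd.Series, target: pd.Series) -> float:
    n = len(target)
    return (cover.sum() / n) * (target[cover].mean() - target.mean())

subgroup = df["night"] & df["rain"]
print("WRAcc(night & rain -> severe):", round(wracc(subgroup, df["severe"]), 3))
```

A full subgroup discovery run would search over many candidate conditions and keep the top-scoring ones; the single WRAcc evaluation above is the core of that loop.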

Usefulness of Data Mining in Criminal Investigation (데이터 마이닝의 범죄수사 적용 가능성)

  • Kim, Joon-Woo; Sohn, Joong-Kweon; Lee, Sang-Han
    • Journal of forensic and investigative science, v.1 no.2, pp.5-19, 2006
  • Data mining is an information extraction activity to discover hidden facts contained in databases. Using a combination of machine learning, statistical analysis, modeling techniques, and database technology, data mining finds patterns and subtle relationships in data and infers rules that allow the prediction of future results. Typical applications include market segmentation, customer profiling, fraud detection, evaluation of retail promotions, and credit risk analysis. Law enforcement agencies deal with massive amounts of data when investigating crime, and that amount is increasing with the development of computerized data processing. We now face the new challenge of discovering knowledge in that data. Data mining can be applied in criminal investigation to find offenders by analyzing complex and relational data structures and free texts such as criminal records or statements. This study aimed to evaluate the possible applications of data mining, and its limitations, in practical criminal investigation. Clustering of criminal cases is feasible for habitual crimes such as fraud and burglary, where data mining can identify crime patterns. Neural network modeling, one of the tools of data mining, can be applied to matching a suspect's photograph or handwriting against those of convicts, or to criminal profiling. A case study of insurance fraud in practice showed that data mining was useful against organized crime such as gangs, terrorism, and money laundering. However, the products of data mining in criminal investigation should be evaluated cautiously, because data mining offers only clues, not conclusions. Legal regulation is needed to control abuse by law enforcement agencies and to protect personal privacy and human rights.
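A hedged sketch of the case-clustering idea above: group case records with similar modus operandi via k-means over one-hot encoded categorical attributes. The field names and records are illustrative only, not real case data.

```python
# Cluster toy case records by encoded categorical attributes.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder
from sklearn.cluster import KMeans

cases = pd.DataFrame({
    "offense":     ["burglary", "burglary", "fraud", "fraud", "burglary"],
    "entry":       ["window", "window", "phone", "online", "door"],
    "time_of_day": ["night", "night", "day", "day", "night"],
})

X = OneHotEncoder(sparse_output=False).fit_transform(cases)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(cases.assign(cluster=labels))   # similar burglaries land together
```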

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi; Kim, Hak-Jin
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.77-97, 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from the existing production environment. This environment is two-sided: it produces data while using it, and the data thus produced creates further value. Owing to the massive scale of data, future information systems need to process more data, in terms of quantity, than existing information systems; in terms of quality, they also require the ability to extract the necessary information from that large amount of data. In a small-scale information system, a person can accurately understand the system and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly difficult. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. For example, as in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system has a large semantic data network through connections with other systems, a wide range of databases that can be utilized, and the advantage of searching more precisely and quickly through relationships between predefined concepts. In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. However, because it takes pre-specified information from the existing logistics system and displays it as web pages, little can be checked beyond the few items specified in advance, extending it with additional functions is time-consuming, and it is organized by category without a search function. It therefore shares the disadvantage of the existing system: it can be used easily only by those who already know the system well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics support contract management and a component dictionary were additionally identified and included in the ontology.
To confirm that the constructed ontology can be used for decision support, meaningful analysis functions were implemented, such as calculating aircraft utilization rates and querying performance-based logistics contracts. In particular, in contrast to past ontology studies that built static ontology databases, this study represents time-series data whose values change over time, such as the daily state of each aircraft, in the ontology, and confirms that utilization rates can be computed from the constructed ontology based on various criteria. In addition, the data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and built-in functions. We also propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, further confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated, including MTBF values for selected items derived from actual part-consumption records, and from these the mission reliability and system reliability are computed. To confirm the usability of the constructed ontology-based logistics situation management system, the proposed system was evaluated with the Technology Acceptance Model (TAM), a representative model for measuring the acceptance of technology, and was found to be more useful and convenient than the existing system.
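A rough, hypothetical sketch of the ontology idea: daily aircraft status stored as RDF triples, a utilization rate computed with a SPARQL query, and MTBF-based reliability under the standard exponential model. The namespace, classes, and records below are invented for illustration and are not the actual system's schema.

```python
import math
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

AF = Namespace("http://example.org/af-logistics#")   # hypothetical namespace
g = Graph()

daily_status = [   # (aircraft, date, status) -- illustrative time-series data
    ("ac001", "2019-03-01", "available"),
    ("ac001", "2019-03-02", "maintenance"),
    ("ac002", "2019-03-01", "available"),
    ("ac002", "2019-03-02", "available"),
]
for i, (ac, day, status) in enumerate(daily_status):
    rec = AF[f"record{i}"]
    g.add((rec, RDF.type, AF.DailyStatus))
    g.add((rec, AF.aircraft, AF[ac]))
    g.add((rec, AF.date, Literal(day, datatype=XSD.date)))
    g.add((rec, AF.status, Literal(status)))

q = """
PREFIX af: <http://example.org/af-logistics#>
SELECT (COUNT(?r) AS ?n) WHERE { ?r af:status "available" . }
"""
available = int(next(iter(g.query(q)))[0])
print("utilization rate:", available / len(daily_status))   # 0.75

# Reliability of a component from MTBF, assuming exponentially
# distributed failures: R(t) = exp(-t / MTBF).
mtbf_hours = 400.0                                   # illustrative value
print("R(100 h):", math.exp(-100.0 / mtbf_hours))
```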

Factors Influencing the Adoption of Location-Based Smartphone Applications: An Application of the Privacy Calculus Model (스마트폰 위치기반 어플리케이션의 이용의도에 영향을 미치는 요인: 프라이버시 계산 모형의 적용)

  • Cha, Hoon S.
    • Asia pacific journal of information systems, v.22 no.4, pp.7-29, 2012
  • Smartphones and their applications (i.e., apps) are increasingly penetrating consumer markets. According to a recent report from the Korea Communications Commission, nearly 50% of mobile subscribers in South Korea are smartphone users, accounting for over 25 million people. In particular, the importance of the smartphone has risen as a geospatially aware device that provides various location-based services (LBS) through its GPS capability. Popular LBS include map and navigation, traffic and transportation updates, shopping and coupon services, and location-sensitive social network services. Overall, the emerging location-based smartphone apps (LBA) offer significant value by providing greater connectivity, personalization, and information and entertainment in a location-specific context. Conversely, the rapid growth of LBA and their benefits have been accompanied by concerns over the collection and dissemination of individual users' personal information through ongoing tracking of their location, identity, preferences, and social behaviors. The majority of LBA users tend to agree and consent to the LBA provider's terms and privacy policy on the use of location data in order to get the services immediately. This tendency further increases the potential risk of unprotected exposure of personal information and serious invasions and breaches of individual privacy. To address the complex issues surrounding LBA, particularly from the user's behavioral perspective, this study applied the privacy calculus model (PCM) to explore the factors that influence the adoption of LBA. According to PCM, consumers engage in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. Consistent with the principal notion of PCM, we investigated how individual users make a risk-benefit assessment in which personalized service and locatability act as benefit-side factors and information privacy risk acts as a risk-side factor accompanying LBA adoption. In addition, we considered the moderating role of trust in the service provider on the inhibiting effect of privacy risk on user intention to adopt LBA. Further, we included perceived ease of use and usefulness as additional constructs to examine whether the technology acceptance model (TAM) can be applied in the context of LBA adoption. The research model, with ten hypotheses, was tested using data gathered from 98 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of an LBA allowed the participant to purchase time- and location-sensitive discounted tickets for nearby stores. Structural equation modeling using partial least squares validated the instrument and the proposed model. The results showed that six out of ten hypotheses were supported. Regarding the core PCM, H2 (locatability → intention to use LBA) and H3 (privacy risks → intention to use LBA) were supported, while H1 (personalization → intention to use LBA) was not. Further, we could not find any interaction effects (personalization × privacy risks, H4, and locatability × privacy risks, H5) on the intention to use LBA. In terms of privacy risks and trust, as mentioned above, we found a significant negative influence of privacy risks on intention to use (H3) but a positive influence of trust, supporting H6 (trust → intention to use LBA).
The moderating effect of trust on the negative relationship between privacy risks and intention to use LBA was tested and confirmed, supporting H7 (privacy risks × trust → intention to use LBA). The two TAM hypotheses, H8 (perceived ease of use → perceived usefulness) and H9 (perceived ease of use → intention to use LBA), were supported; however, H10 (perceived usefulness → intention to use LBA) was not. The results of this study offer the following key findings and implications. First, PCM was found to be a good analysis framework in the context of LBA adoption. Many of the hypotheses in the model were confirmed, and the high R² value (i.e., 51%) indicated a good fit of the model. In particular, locatability and privacy risks were found to be appropriate PCM-based antecedent variables. Second, the moderating effect of trust in the service provider suggests that the same marginal change in the level of privacy risk may differentially influence the intention to use LBA. That is, while privacy risks are increasingly becoming an important social issue and will negatively influence the intention to use LBA, it is critical for LBA providers to build consumer trust and confidence to successfully mitigate this negative impact. Lastly, we could not find sufficient evidence that the intention to use LBA is influenced by perceived usefulness, which has been very well supported in most previous TAM research. This suggests that future research should examine the validity of applying TAM, and further extend or modify it, in the context of LBA or other similar smartphone apps.
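The moderation logic behind H7 can be illustrated, in simplified form, with an ordinary regression containing a product term. The study itself used PLS-based structural equation modeling, so this is only a stand-in sketch on synthetic data, not the paper's analysis.

```python
# OLS with an interaction term: a positive risk*trust coefficient means
# trust weakens the negative effect of privacy risk on intention.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 98                                   # matches the study's sample size
risk, trust = rng.normal(size=n), rng.normal(size=n)
intention = (-0.4 * risk + 0.5 * trust + 0.3 * risk * trust
             + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([risk, trust, risk * trust]))
model = sm.OLS(intention, X).fit()
print(model.params)                      # [const, risk, trust, risk*trust]
```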

An Installation and Model Assessment of the UM, U.K. Earth System Model, in a Linux Cluster (U.K. 지구시스템모델 UM의 리눅스 클러스터 설치와 성능 평가)

  • Daeok Youn; Hyunggyu Song; Sungsu Park
    • Journal of the Korean earth science society, v.43 no.6, pp.691-711, 2022
  • A state-of-the-art Earth system model, as a virtual Earth, is required for studies of current and future climate change and climate crises. This complex numerical model can account for almost all human activities and natural phenomena affecting the atmosphere of Earth. The Unified Model (UM) from the United Kingdom Meteorological Office (UK Met Office) is among the best Earth system models as a scientific tool for studying the atmosphere. However, owing to the expensive numerical integration cost and the substantial output size required to maintain the UM, individual research groups have had to rely solely on supercomputers. The limitations of computer resources, especially computing environments blocked from outside network connections, reduce the efficiency and effectiveness of conducting research using the model, as well as of improving its component codes. Therefore, this study presents detailed guidance for installing a new version of the UM on high-performance parallel computers (Linux clusters) owned by individual researchers, which should help researchers work with the UM easily. The numerical integration performance of the UM on Linux clusters was also evaluated for two model resolutions, namely N96L85 (1.875° × 1.25° with 85 vertical levels up to 85 km) and N48L70 (3.75° × 2.5° with 70 vertical levels up to 80 km). The one-month integration times using 256 cores for AMIP and CMIP simulations at N96L85 resolution were 169 and 205 min, respectively. The one-month integration time for an N48L70 AMIP run using 252 cores was 33 min. Simulated results for 2-m surface temperature and precipitation intensity were compared with ERA5 reanalysis data. The spatial distributions of the simulated results agreed qualitatively with those of ERA5, despite quantitative differences caused by the different resolutions and by atmosphere-ocean coupling. In conclusion, this study confirms that the UM can be successfully installed and used on high-performance Linux clusters.
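As a back-of-envelope reading of the quoted figures: the horizontal grid-point counts below follow directly from the stated resolutions (actual UM grids may differ slightly, e.g., staggered latitude rows), and the timings are the reported one-month wall-clock times.

```python
# Grid sizes and core-hour costs implied by the reported runs.
runs = {
    # name: (dlon, dlat, levels, cores, minutes per simulated month)
    "N96L85 AMIP": (1.875, 1.25, 85, 256, 169),
    "N96L85 CMIP": (1.875, 1.25, 85, 256, 205),
    "N48L70 AMIP": (3.75, 2.5, 70, 252, 33),
}
for name, (dlon, dlat, nz, cores, minutes) in runs.items():
    nlon, nlat = int(360 / dlon), int(180 / dlat)
    points = nlon * nlat * nz
    core_hours = cores * minutes / 60
    print(f"{name}: {nlon}x{nlat}x{nz} = {points:,} points, "
          f"{core_hours:,.0f} core-hours/month")
```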

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin; Kang, Yujin; Song, Jinwoo; Kim, Juhwan; Kim, Hung Soo; Kim, Soojun
    • Journal of Korea Water Resources Association, v.57 no.3, pp.151-164, 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of the sedimentation basins in the water treatment process. An AI-based model, which learns from past water quality observations to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at the study plant using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model used the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, and other variables as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that the Random Forest-based model had the lowest error among the multiple regression models, neural network models, model trees, and Random Forest models compared. The optimal prediction of residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in the preceding treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
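A hedged sketch of the Random Forest setup described above, with the listed process variables as inputs and downstream residual chlorine as the target. The CSV file and column names are hypothetical placeholders for the plant's observation data.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("plant_observations.csv")   # hypothetical export
features = ["cl_upstream", "turbidity", "pH", "water_temp",
            "conductivity", "raw_inflow", "alkalinity", "nh3"]
X, y = df[features], df["cl_downstream"]

# shuffle=False keeps the chronological order of the observations,
# which matters for time-series water quality data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```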

Color Analyses on Digital Photos Using Machine Learning and KSCA - Focusing on Korean Natural Daytime/nighttime Scenery - (머신러닝과 KSCA를 활용한 디지털 사진의 색 분석 -한국 자연 풍경 낮과 밤 사진을 중심으로-)

  • Gwon, Huieun; KOO, Ja Joon
    • Trans-, v.12, pp.51-79, 2022
  • This study investigates methods for deriving colors that can serve as a reference for users such as designers and content creators who search for online images on web portal sites using specific words, for color planning and similar tasks. Two experiments were conducted. Digital scenery photos within the geographic scope of Korea were downloaded from web portal sites and analyzed to find out which colors represent daytime and nighttime. Machine learning was used to classify colors into daytime and nighttime classes, and KSCA was used to derive the color frequency of daytime and nighttime photos; the two results were then compared and analyzed. The machine learning classification showed that, counting colors whose class membership was 51-100%, the area of daytime colors was approximately 2.45 times greater than that of nighttime colors. The colors of the daytime class were distributed by brightness around white, while those of the nighttime class were distributed around black. 647 colors belonged to the daytime class with over 70% membership and 252 to the nighttime class with over 70%, while the remaining 101 colors (31-69%) fell in between. The number of colors in this middle area was low; most colors were classified relatively clearly into day or night. The resulting color distributions of the two classes provide the borderline color values between them, separated mainly by brightness. The KSCA frequency analysis showed that colors around yellow dominated the generally bright daytime photos, while colors around blue dominated the dark nighttime photos. In the daytime photos, the top 40% of colors by frequency had low chroma, being almost achromatic; colors close to white and black showed the highest frequency, indicating a large difference in brightness. Meanwhile, among the colors ranked 5th to 10th in frequency, yellow-green appeared dark and navy blue appeared bright, composing a partially complex harmony. The color band showed various hues, brightness levels, and chroma levels, including light blue, achromatic colors, and warm colors, and did not compose a generally harmonious arrangement. In the nighttime photos, roughly the top 50% of colors by frequency were dark, with a Munsell value of 2; the brightness of the middle frequencies (50-80%) was relatively higher (values of 3-4), and the brightness differences among colors were large in the bottom 20%. Colors other than cool colors appeared intermittently in the bottom 8% of the frequency ranking. The color band showed a generally harmonious arrangement centered on navy blue. Through these two methods, machine learning could classify colors into two or more classes and evaluate how close an image with certain colors is to a certain class; this method cannot be used if an image cannot be classified into a class. The resulting color distribution can serve as a reference when determining how close a dominant color, used as the base or background color of a design, is to one of the two classes.
Also, when the analyzed images are divided into several classes, even colors that were not used in the analyzed images can be assessed for how close they are to a certain class according to the color distribution properties of each class. Nevertheless, those results cannot show whether and how much a specific color was actually used in a class. To investigate that, frequency analysis was conducted using KSCA, which can measure color frequency within the range of images used in the experiment. The color distribution and frequency values from this study can serve as references for the color planning of digital designs involving natural scenery within the geographic scope of Korea. The two experiments are also meaningful attempts to find methods for deriving colors that can serve as useful references, from among numerous images, for content creators in the field.
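The two experiments can be sketched on synthetic pixels as follows: (1) a pixel-level day/night classifier, and (2) a palette-frequency count. KSCA itself is a separate standalone tool, so the frequency step here is only a rough stand-in for it; all data below is invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
day = rng.uniform(120, 255, size=(500, 3))    # bright RGB pixels (synthetic)
night = rng.uniform(0, 100, size=(500, 3))    # dark RGB pixels (synthetic)
X = np.vstack([day, night])
y = np.array([1] * 500 + [0] * 500)           # 1 = daytime class

# (1) Classify a color's closeness to the daytime class as a probability.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("P(daytime | mid-gray):", clf.predict_proba([[128, 128, 128]])[0, 1])

# (2) Frequency: assign each pixel to its nearest palette color and count.
palette = np.array([[255, 255, 255], [0, 0, 0], [200, 180, 60], [30, 40, 90]])
nearest = np.argmin(((X[:, None, :] - palette[None]) ** 2).sum(-1), axis=1)
print("palette frequency:", np.bincount(nearest, minlength=len(palette)))
```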