• Title/Summary/Keyword: What-if-not method


Right-Turn Vehicle Supplementary Signal Improvement at Intersections (교차로 우회전 차량 보조등 개선)

  • LEE, Nam Soo;KIM, Yu Chan;LIM, Joon Beom;KIM, Youngchan
    • Journal of Korean Society of Transportation / v.33 no.5 / pp.441-448 / 2015
  • This study aims to suggest a reasonable signal operation method for right-turn traffic management. It was found that the right-turn vehicle supplementary signal is currently operated without clear regulations or criteria. It was also found that right-turn supplementary signals are used inconsistently, that there is a risk of traffic accidents due to the discordance between supplementary signals and the traffic signals facing forward vehicles, that there is no clear basis for prohibiting a right turn when the right-turn supplementary signal is red, and that the flashing red signal is used in a sense different from the law. A field investigation was conducted to see the effect of installed right-turn supplementary signals on traffic signal violations. As a result, the proportion of signal violations was high on approach lanes with right-turn supplementary signals, which means that the supplementary signals hardly reduced the proportion of signal violations during right turns. Additionally, a survey was carried out to see whether drivers interpreted traffic signals differently depending on the installation of right-turn supplementary signals. The survey found no differences in the interpretation of traffic signals according to either the installation or the type of right-turn supplementary signals. Right turns on red have not led to serious traffic accidents, so a total ban on right turns on red should be considered carefully, in order to prevent driver confusion caused by a change of the signal system. Instead, the Road Traffic Act should specify that vehicles must stop temporarily on red and may then turn right only when doing so does not disturb other cars and pedestrians. What this study finally suggests is to use tri-colored arrow signals as right-turn vehicle supplementary signals, to convey the signal to drivers clearly.

Privacy protection of seizure and search system (압수수색과 개인정보 보호의 문제)

  • Kim, Woon-Gon
    • Journal of the Korea Society of Computer and Information / v.20 no.5 / pp.123-131 / 2015
  • The rapid development of information and communication technology has brought our society both convenience and new risks. Unlimited surveillance through electronic equipment is turning ours into a society where constant monitoring is possible, and errors in managing it early on can lead to serious harm. We live in a web of surveillance: closed-circuit television (CCTV) cameras are installed in numerous places, and smartphones accompany us even inside the home. At the same time, as the development of information and communication technology made personal information easy to collect and store, and such information became a valuable corporate asset for marketing, illegal collection increased and related incidents have in fact occurred. In this information society, the extent to which an investigation agency can detect the digital traces left by a criminal act has become decisive for the success of an investigation. Seizure and search of the computers or telephones used by a suspect have therefore become an essential procedure, and whether such electronic traces of the act can be collected often divides the success or failure of an investigation. At present, however, investigation agencies frequently carry out comprehensive seizures while searching through data, raising the concern that blanket seizure infringes the individual's right to control his or her own information. In many countries, anxiety about such comprehensive seizure and search of electronic information by investigation agencies has grown to the point of producing the term 'cyber exile.' This paper reviews whether and how the scope of seizure and search of electronic information should be delimited.

Liabilities of Air Carrier Who Sponsored Financially Troubled Affiliate Shipping Company (항공사(航空社)의 부실 계열 해운사(海運社) 지원에 따른 법적 책임문제)

  • Choi, June-Sun
    • The Korean Journal of Air & Space Law and Policy / v.32 no.1 / pp.177-200 / 2017
  • This writer has thus far reviewed the civil and criminal liability of the directors of a parent company that sponsored financially troubled affiliates. What was discussed here applies to logistics companies in the same manner. Hanjin Shipping cannot expect its parent company, Korean Air, to prop it up financially. If such financial aid is offered without any collateral, under Korean criminal law the directors of the parent company bear civil and criminal responsibility. One way around this is to secure fairness in both the process and the content of the aid. Fairness of process means that the board of directors discloses all information and approves the aid. Fairness of content means impartial transactions that rule out any possibility of the chairman of the corporate group acting in his private interest. In the case of Korean Air bailing out Hanjin, the board of directors met five times and thoroughly reviewed the risk that the loans would not be repaid. After the review, safeguards against undesirable scenarios were established before the bailout was finally approved. As such, there is no issue of procedural fairness. In terms of fairness of content, too, there was practically no room for the majority or controlling shareholder to pocket profits at the expense of the company, because the continued aid to a financially troubled company (i.e. Hanjin Shipping) was a burden even to the controlling shareholder. This writer argues that the concept of the interest of the entire corporate group needs to be recognized. That is, it must be recognized that the relationship of control between parent and affiliate companies, or between affiliate companies, serves a practical benefit to the going concern and growth of the group and is therefore legitimate. Moreover, the corporate group and its affiliates, as well as their directors and management, must recognize an obligation to prioritize the interests of the corporate group over the interests of the company with which they are directly associated. As such, even if Korean Air offered a loan to Hanjin Shipping without collateral, the act cannot be treated as an offense against the law, nor can the directors be held liable for damages under civil law.


Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.39-55 / 2019
  • Recently banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is a robot that produces the optimal asset allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market size has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest an asset allocation to investors, mathematical or statistical asset allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset allocation model. The model is a simple but quite intuitive portfolio strategy: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastics %K and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral and up. The expected stock returns make up the P matrix and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model combined with implied equilibrium returns produced the optimal Black-Litterman portfolio. Over the out-of-sample period, this portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio and the market portfolio. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value; the maximum drawdown is -20.8%, the lowest value; and the Sharpe ratio, which measures return per unit of risk, is the highest at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model for practitioners applying Robo-Advisor asset allocation algorithms in real trading.
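The Black-Litterman blending step described here can be sketched in a few lines. All numbers below (risk aversion, covariance matrix, market weights, the single view and its uncertainty) are hypothetical placeholders, not values from the paper, and the SVM view-generation step is omitted:

```python
import numpy as np

delta = 2.5   # risk-aversion coefficient (hypothetical)
tau = 0.05    # uncertainty scalar on the equilibrium prior (hypothetical)

Sigma = np.array([            # asset return covariance matrix (illustrative)
    [0.040, 0.006, 0.002],
    [0.006, 0.025, 0.004],
    [0.002, 0.004, 0.010],
])
w_mkt = np.array([0.5, 0.3, 0.2])   # market-cap weights (illustrative)

# Step 1: reverse optimization -> implied equilibrium returns
pi = delta * Sigma @ w_mkt

# Step 2: one absolute view: "asset 0 will return 8%", with some uncertainty
P = np.array([[1.0, 0.0, 0.0]])     # view pick matrix
Q = np.array([0.08])                # view return
Omega = np.array([[0.001]])         # view uncertainty

# Step 3: Bayesian blend of the prior (pi) with the view (P, Q)
tS_inv = np.linalg.inv(tau * Sigma)
M = np.linalg.inv(tS_inv + P.T @ np.linalg.inv(Omega) @ P)
posterior = M @ (tS_inv @ pi + P.T @ np.linalg.inv(Omega) @ Q)

# Step 4: unconstrained mean-variance weights from the posterior returns
w_bl = np.linalg.inv(delta * Sigma) @ posterior
```

With a single absolute view, the viewed asset's posterior return lands between its implied equilibrium return and the view, which is the precision-weighted averaging the abstract describes.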

Application and Expansion of the Harm Principle to the Restrictions of Liberty in the COVID-19 Public Health Crisis: Focusing on the Revised Bill of the March 2020 「Infectious Disease Control and Prevention Act」 (코로나19 공중보건 위기 상황에서의 자유권 제한에 대한 '해악의 원리'의 적용과 확장 - 2020년 3월 개정 「감염병의 예방 및 관리에 관한 법률」을 중심으로 -)

  • You, Kihoon;Kim, Dokyun;Kim, Ock-Joo
    • The Korean Society of Law and Medicine / v.21 no.2 / pp.105-162 / 2020
  • In a pandemic of infectious disease, restrictions of individual liberty have been justified in the name of public health and the public interest. In March 2020, the National Assembly of the Republic of Korea passed the revised bill of the 「Infectious Disease Control and Prevention Act」. The revised bill newly established the legal basis for forced testing and for disclosure of the information of confirmed cases, and also raised the penalties for violation of self-isolation and for treatment refusal. This paper examines whether and how these liberty-limiting clauses can be justified, and if so on what ethical and philosophical grounds. The authors review theories in the philosophy of law on the justifiability of liberty-limiting measures by the state and conceptualize the dual aspect of applying the liberty-limiting principle to the infected patient. In the COVID-19 pandemic crisis, the infected person becomes the 'Patient as Victim and Vector (PVV)', positioned in the overlapping area of 'harm to self' and 'harm to others.' In order to apply the liberty-limiting principle proposed by Joel Feinberg to a pandemic full of uncertainties, it is necessary to extend the harm principle from 'harm' to 'risk'. Under a crisis with many uncertainties like the COVID-19 pandemic, this shift from 'harm' to 'risk' justifies the state's preemptive limitation of individual liberty based on the precautionary principle. At the same time, it raises concerns of overcriminalization, i.e., excessive limitation of individual liberty without sufficient grounds. In this article, we propose principles for balancing the precautionary principle for preemptive restrictions of liberty against the concern of overcriminalization. A public health crisis such as the COVID-19 pandemic requires a population approach, in which the 'population' rather than the 'individual' works as the unit of analysis. We therefore propose a second expansion of the harm principle, applied to the 'population', in order to deal with the public interest and public health. The new concept of 'risk to population', derived from the two arguments stated above, should be introduced to explain a public health crisis like the COVID-19 pandemic. We theorize 'the extended harm principle' to include 'risk to population' as a third liberty-limiting principle following 'harm to others' and 'harm to self.' Lastly, we examine whether the restrictions of liberty in the revised 「Infectious Disease Control and Prevention Act」 can be justified under the extended harm principle. First, we conclude that forced isolation of infected patients can be justified in a pandemic situation as satisfying 'risk to population.' Second, forced examination for COVID-19 does not violate the extended harm principle either, given the high infectivity of asymptomatic infected people. Third, however, the provision for forced treatment cannot be justified under either the traditional or the extended harm principle. It is therefore necessary to add further clauses to the provision in order to justify punishing treatment refusal even in a pandemic.

Development of the Accident Prediction Model for Enlisted Men through an Integrated Approach to Datamining and Textmining (데이터 마이닝과 텍스트 마이닝의 통합적 접근을 통한 병사 사고예측 모델 개발)

  • Yoon, Seungjin;Kim, Suhwan;Shin, Kyungshik
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.1-17 / 2015
  • In this paper, we report what we have observed regarding a prediction model for the military based on enlisted men's internal data (cumulative records) and external data (SNS data). This work is significant for the military's efforts to supervise enlisted men. Despite their efforts, many commanders have failed to prevent accidents by their subordinates. One of the important duties of officers is to take care of their subordinates and prevent unexpected accidents. However, accidents are hard to prevent, so a proper method must be found. Our motivation for this paper is to make it possible to predict accidents using enlisted men's internal and external data. The biggest issue facing the military is the occurrence of accidents by enlisted men related to maladjustment and the relaxation of military discipline. The core of preventing accidents by soldiers is to identify problems and manage them quickly. Commanders predict accidents by interviewing their soldiers and observing their surroundings; this requires considerable time and effort, and the results differ greatly depending on the capabilities of the commanders. In this paper, we seek to predict accidents with objective data which can easily be obtained. Recently, records of enlisted men, as well as SNS communication between commanders and soldiers, have made it possible to predict and prevent accidents. This paper concerns the application of data mining to identify soldiers' interests and predict accidents using internal and external (SNS) data. We propose both a topic analysis and a decision tree method. The study is conducted in two steps. First, topic analysis is conducted on the SNS data of enlisted men. Second, the decision tree method is used to analyze the internal data together with the results of the first analysis. The dependent variable for these analyses is the presence of any accident. In order to analyze the SNS data, tools such as text mining and topic analysis are required. We used SAS Enterprise Miner 12.1, which provides a text miner module. Our approach for finding soldiers' interests is composed of three main phases: collecting, topic analysis, and converting topic analysis results into points for use as independent variables. In the first phase, we collect enlisted men's SNS data by commander's ID. After gathering the unstructured SNS data, the topic analysis phase extracts issues from them. For simplicity, 5 topics (vacation, friends, stress, training, and sports) are extracted from 20,000 articles. In the third phase, using these 5 topics, we quantify them as personal points and include the results among the independent variables, which comprise 15 internal data sets. Then we build two decision trees. The first tree uses the internal data only; the second uses the external data (SNS) as well as the internal data. We then compare the misclassification rates from SAS E-miner. The first model's misclassification is 12.1%; the second model's is 7.8%, meaning the method predicts accidents with an accuracy of approximately 92%. The gap between the two models is 4.3 percentage points. Finally, we test whether the difference between them is meaningful using the McNemar test; the difference is statistically significant (p-value: 0.0003). This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small amount of enlisted men's data. Additionally, various independent variables used in the decision tree model are treated as categorical rather than continuous variables, so information is lost. In spite of extensive efforts to provide prediction models for the military, commanders' predictions are accurate only when they have sufficient data about their subordinates. Our proposed methodology can support decision-making in the military. This study is expected to contribute to the prevention of accidents in the military through scientific analysis of enlisted men and their proper management.
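The two-tree comparison described here (internal data only vs. internal data plus SNS topic scores, checked with McNemar's test) can be sketched on synthetic data. Everything below — the 15 random internal features, the 5 topic scores, the accident labels — is fabricated for illustration; the paper used SAS Enterprise Miner, while this sketch substitutes scikit-learn:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 400
internal = rng.normal(size=(n, 15))   # 15 internal-record variables (fabricated)
topics = rng.normal(size=(n, 5))      # 5 SNS topic scores (fabricated)
# hypothetical ground truth: accidents driven partly by an SNS "stress" topic
y = ((internal[:, 0] + topics[:, 2] + rng.normal(scale=0.5, size=n)) > 0).astype(int)

train, test = slice(0, 300), slice(300, n)

# model 1: internal data only
tree_internal = DecisionTreeClassifier(max_depth=4, random_state=0)
tree_internal.fit(internal[train], y[train])
err_internal = 1 - tree_internal.score(internal[test], y[test])

# model 2: internal data plus SNS topic scores
both = np.hstack([internal, topics])
tree_both = DecisionTreeClassifier(max_depth=4, random_state=0)
tree_both.fit(both[train], y[train])
err_both = 1 - tree_both.score(both[test], y[test])

# McNemar's statistic on the models' disagreements
p1 = tree_internal.predict(internal[test]) == y[test]
p2 = tree_both.predict(both[test]) == y[test]
b = int(np.sum(p1 & ~p2))   # model 1 right, model 2 wrong
c = int(np.sum(~p1 & p2))   # model 1 wrong, model 2 right
chi2 = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
```

McNemar's test looks only at the discordant pairs (`b`, `c`), which is why it suits paired comparisons of two classifiers on the same test set.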

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.183-203 / 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has brought various kinds of online news media, news about events occurring in society has increased greatly. Automatically summarizing key events from massive amounts of news data would therefore help users survey many events at a glance, and building an event network based on the relevance of events would greatly help readers understand current events. In this study, we propose a method for extracting event networks from large news text data. To this end, we first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing, used NPMI and Word2Vec to keep only meaningful words and to integrate synonyms. Latent Dirichlet Allocation (LDA) topic modeling was used to calculate the topic distribution by date, find the peaks of each topic's distribution, and detect events. A total of 32 topics were extracted, and the occurrence point of each event was inferred from the points at which the corresponding topic's distribution surged. As a result, a total of 85 events were detected, of which a final 16 events were retained after filtering with a Gaussian smoothing technique. We then calculated relevance scores between the detected events to construct the event network: using the cosine coefficient between co-occurring events, we calculated the relevance between events and connected them. Finally, we set up the event network with each event as a vertex and the relevance score between events on the edges connecting the vertices. The event network constructed by our method made it possible to sort the major events in the Korean political and social fields over the past year in chronological order and, at the same time, to identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relations between events that were difficult to detect with existing event detection. We also applied various text mining techniques and Word2Vec in preprocessing to improve the accuracy of extracting proper nouns and compound nouns, which has been difficult in existing Korean text analysis. The event detection and network construction techniques in this study have the following practical advantages. First, LDA topic modeling, being unsupervised learning, can easily extract topics, topic words and their distributions from a huge amount of data, and by using the date information of the collected news articles, the distribution by topic can be expressed as a time series. Second, by calculating relevance scores and constructing an event network from the co-occurrence of topics, which is difficult to grasp with existing event detection, the connections between events can be presented in a summarized form. This is supported by the fact that the relevance-based event network proposed in this study was actually constructed in order of occurrence time; the network also makes it possible to identify which event served as the starting point for a series of events. The limitation of this study is that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and the topic and event names in the analysis results must be assigned by the subjective judgment of the researcher. Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Subsequent studies need to calculate the relevance between events that are not covered in this study or that belong to the same topic.
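The pipeline described here — per-date topic distributions via LDA, event detection at each topic's peak date, and a cosine-relevance event network — can be sketched on a toy corpus. The four one-line documents and dates below are fabricated placeholders, not the study's news data, and the Gaussian-smoothing filter is omitted:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# fabricated mini-corpus: one document per date
docs_by_date = {
    "2016-03-01": "election vote party candidate vote",
    "2016-03-02": "election vote candidate debate party",
    "2016-03-03": "earthquake rescue damage earthquake city",
    "2016-03-04": "earthquake damage rescue relief city",
}
dates = list(docs_by_date)
X = CountVectorizer().fit_transform(docs_by_date.values())

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # rows: dates, columns: topic shares

# event detection: the date where each topic's share peaks
events = {k: dates[int(np.argmax(theta[:, k]))] for k in range(theta.shape[1])}

def cosine(u, v):
    # cosine coefficient between two topic-share vectors
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# event network: vertices are detected events, edges carry relevance scores
edges = {
    (a, b): cosine(theta[dates.index(events[a])], theta[dates.index(events[b])])
    for a in events for b in events if a < b
}
```

On real data the peak detection would be run on smoothed daily topic shares rather than a raw argmax, but the vertex/edge construction is the same.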

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) goes further: it is a fine-grained analysis of the sentiments towards each aspect of an object. Since it has more practical value in terms of business, ABSA is drawing attention from both academic and industrial organizations. When there is a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the 'restaurant' as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables a more specific and effective marketing strategy. In order to perform ABSA, it is necessary to identify which aspect terms or aspect categories are included in the text, and to judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze sentiments for the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze sentiments for the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review sentence includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. On the other hand, an aspect category like 'price', which has no specific aspect term but can be indirectly guessed from an emotional word like 'expensive', is called an implicit aspect. So far, 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use the word 'aspect' for convenience. One thing to note is that ATSC analyzes the sentiment towards given aspect terms, so it deals only with explicit aspects, while ACSC treats implicit aspects as well as explicit ones. This study seeks answers to the following issues, ignored in previous studies, when applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the tokens for aspect categories than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the order of the sentence containing the aspect category in the QA or NLI type sentence-pair configuration? To achieve these research objectives, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that reflecting the output vector of the aspect category token is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA type input generally provides better performance than NLI, and that in the QA type the order of the sentence with the aspect category is irrelevant to performance.
There may be some differences depending on the characteristics of the dataset, but when using NLI type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The new methodology for designing the ACSC model used in this study could be similarly applied to other studies such as ATSC.
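The QA- and NLI-type sentence-pair configurations compared above can be illustrated without running BERT itself. The review, aspect, and pseudo-question template below are illustrative assumptions, not the paper's exact wording or data:

```python
# Sketch of QA- vs NLI-type sentence-pair inputs for ACSC with BERT.
review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"

def make_pair(review, aspect, style, aspect_first=False):
    """Build (sentence_a, sentence_b) for BERT's two-segment input."""
    if style == "QA":    # auxiliary pseudo-question naming the aspect
        aux = f"what do you think of the {aspect} of it ?"
    elif style == "NLI":  # bare aspect phrase as a pseudo-hypothesis
        aux = aspect
    else:
        raise ValueError(f"unknown style: {style}")
    # aspect_first flips which segment holds the aspect-bearing sentence,
    # the ordering question the paper's third research issue asks about
    return (aux, review) if aspect_first else (review, aux)

qa_pair = make_pair(review, aspect, "QA")
nli_pair = make_pair(review, aspect, "NLI", aspect_first=True)
# a BERT tokenizer would encode qa_pair roughly as:
#   [CLS] review tokens [SEP] what do you think of the price of it ? [SEP]
```

A tokenizer call such as `tokenizer(*qa_pair)` would then produce the `input_ids` and `token_type_ids` fed to the sentence-pair classifier.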

Color Analyses on Digital Photos Using Machine Learning and KSCA - Focusing on Korean Natural Daytime/nighttime Scenery - (머신러닝과 KSCA를 활용한 디지털 사진의 색 분석 -한국 자연 풍경 낮과 밤 사진을 중심으로-)

  • Gwon, Huieun;KOO, Ja Joon
    • Trans- / v.12 / pp.51-79 / 2022
  • This study investigates the methods for deriving colors which can serve as a reference to users such as designers and or contents creators who search for online images from the web portal sites using specific words for color planning and more. Two experiments were conducted in order to accomplish this. Digital scenery photos within the geographic scope of Korea were downloaded from web portal sites, and those photos were studied to find out what colors were used to describe daytime and nighttime. Machine learning was used as the study methodology to classify colors in daytime and nighttime, and KSCA was used to derive the color frequency of daytime and nighttime photos and to compare and analyze the two results. The results of classifying the colors of daytime and nighttime photos using machine learning show that, when classifying the colors by 51~100%, the area of daytime colors was approximately 2.45 times greater than that of nighttime colors. The colors of the daytime class were distributed by brightness with white as its center, while that of the nighttime class was distributed with black as its center. Colors that accounted for over 70% of the daytime class were 647, those over 70% of the nighttime class were 252, and the rest (31-69%) were 101. The number of colors in the middle area was low, while other colors were classified relatively clearly into day and night. The resulting color distributions in the daytime and nighttime classes were able to provide the borderline color values of the two classes that are classified by brightness. As a result of analyzing the frequency of digital photos using KSCA, colors around yellow were expressed in generally bright daytime photos, while colors around blue value were expressed in dark night photos. For frequency of daytime photos, colors on the upper 40% had low chroma, almost being achromatic. 
Also, colors that are close to white and black showed the highest frequency, indicating a large difference in brightness. Meanwhile, for colors with frequency from top 5 to 10, yellow green was expressed darkly, and navy blue was expressed brightly, partially composing a complex harmony. When examining the color band, various colors, brightness, and chroma including light blue, achromatic colors, and warm colors were shown, failing to compose a generally harmonious arrangement of colors. For the frequency of nighttime photos, colors in approximately the upper 50% are dark colors with a brightness value of 2 (Munsell signal). In comparison, the brightness of middle frequency (50-80%) is relatively higher (brightness values of 3-4), and the brightness difference of various colors was large in the lower 20%. Colors that are not cool colors could be found intermittently in the lower 8% of frequency. When examining the color band, there was a general harmonious arrangement of colors centered on navy blue. As the results of conducting the experiment using two methods in this study, machine learning could classify colors into two or more classes, and could evaluate how close an image was with certain colors to a certain class. This method cannot be used if an image cannot be classified into a certain class. The result of such color distribution would serve as a reference when determining how close a certain color is to one of the two classes when the color is used as a dominant color in the base or background color of a certain design. Also, when dividing the analyzed images into several classes, even colors that have not been used in the analyzed image can be determined to find out how close they are to a certain class according to the color distribution properties of each class. Nevertheless, the results cannot be used to find out whether a specific color was used in the class and by how much it was used. 
To investigate that issue, frequency analysis was conducted using KSCA, which can measure color frequency within the range of images used in the experiment. The resulting color-distribution and frequency values from this study can serve as references for the color planning of digital designs depicting natural scenery within the geographic scope of Korea. The two experiments are also meaningful attempts at finding methods for deriving colors that can be a useful reference among numerous images for content creators in the relevant field.
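The day/night classification idea above can be sketched in a few lines. This is a minimal illustration, not the study's actual model: the paper does not specify its machine-learning method here, so this sketch assumes a simple nearest-centroid rule in which daytime colors cluster around white and nighttime colors around black, and scores each color by its relative closeness to the two centers.

```python
# Minimal sketch of classifying colors into day/night classes by brightness.
# Assumption (not from the paper): a nearest-centroid model with the daytime
# class centered on white and the nighttime class centered on black.

def luminance(rgb):
    """Relative luminance of an (R, G, B) color with channels in 0..255."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def day_score(rgb, day_center=255.0, night_center=0.0):
    """Fraction in 0..1 expressing how strongly a color leans to the daytime class."""
    y = luminance(rgb)
    d_day = abs(day_center - y)
    d_night = abs(night_center - y)
    return d_night / (d_day + d_night)

def classify(rgb, threshold=0.5):
    """Assign a color to 'day' or 'night' by its day_score."""
    return "day" if day_score(rgb) >= threshold else "night"
```

A score near 0.5 corresponds to the borderline colors the abstract describes as the sparse "middle area" (31~69%) between the two classes.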

The Effect of Pleural Thickening on the Impairment of Pulmonary Function in Asbestos Exposed Workers (석면취급 근로자에서 늑막비후가 폐기능에 미치는 영향)

  • Kim, Jee-Won;Ahn, Hyeong-Sook;Kim, Kyung-Ah;Lim, Young;Yun, Im-Goung
    • Tuberculosis and Respiratory Diseases
    • /
    • v.42 no.6
    • /
    • pp.923-933
    • /
    • 1995
  • Background: Pleural abnormality is the most common respiratory change caused by asbestos dust inhalation, and other asbestos-related diseases can develop even after cessation of asbestos exposure. We therefore conducted an epidemiologic study to investigate whether pleural abnormality is associated with changes in pulmonary function and which factors influence pulmonary function impairment. Methods: Two hundred and twenty-two asbestos workers from 9 industries using asbestos in Korea were selected, and the sectional asbestos fiber concentration was measured. A questionnaire, chest X-ray, and PFT were also administered. All the data were analyzed by Student's t-test and chi-square test using SAS. Regression analysis was performed to evaluate important factors affecting changes in pulmonary function, namely smoking, exposure concentration, exposure duration, and the presence of pleural thickening. Results: 1) In all nine industries except two, the airborne asbestos fiber concentration was below the average permissible concentration. PFT was performed on 222 workers; 88.3% were male, their mean age was $41{\pm}9$ years, and the duration of asbestos exposure was $10.6{\pm}7.8$ years. 2) The chest X-rays showed normal findings (89.19%), inactive pulmonary tuberculosis (2.7%), pleural thickening (7.66%), and suspected reticulonodular shadow (0.9%). 3) Mean height, smoking status, and asbestos fiber concentration did not differ between the subjects with pleural thickening and the others, but age, cumulative pack-years, and the duration of asbestos exposure were higher in the subjects with pleural thickening. 4) All the PFT indices were lower in the subjects with pleural thickening than in those without. 5) Simple regression analysis showed a significant correlation between $FEF_{75}$, which is sensitive to small airway obstruction, and cumulative smoking pack-years, the duration of asbestos exposure, and the asbestos fiber concentration.
6) Multiple regression analysis showed that all the pulmonary function indices decreased as cumulative smoking pack-years increased, especially the indices sensitive to small airway obstruction. Pleural thickening was associated with reductions in FVC, $FEV_1$, PEFR, and $FEF_{25}$. Conclusion: The greater the asbestos fiber concentration and the longer the duration of asbestos exposure, the greater the reduction in $FEF_{50}$ and $FEF_{75}$. PFT is therefore important for the early detection of small airway obstruction. Furthermore, pleural thickening without asbestos-related parenchymal lung disease is associated with reduced pulmonary function.
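The simple regression step described in result 5) can be sketched as an ordinary least-squares fit of one PFT index on one exposure variable. The data below are made up purely for illustration; they are not from the study, and the variable pairing (pack-years vs. $FEF_{75}$ as percent predicted) is only one of the three predictors the abstract mentions.

```python
# Sketch of the simple (one-predictor) regression used in the study.
# The numbers below are hypothetical, not taken from the paper.

def simple_regression(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx          # slope
    a = my - b * mx        # intercept
    return a, b

# Illustrative only: cumulative pack-years vs. FEF75 (% predicted)
pack_years = [0, 5, 10, 20, 30, 40]
fef75 = [95, 90, 86, 78, 70, 63]
intercept, slope = simple_regression(pack_years, fef75)
# A negative slope reproduces the direction of the reported finding:
# FEF75 declines as cumulative smoking increases.
```

The multiple regression in result 6) extends this to several predictors at once (pack-years, exposure concentration, exposure duration, pleural thickening), which requires solving the normal equations for a coefficient vector rather than a single slope.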

  • PDF