• Title/Summary/Keyword: Problem-finding


Management performance and managers' cash compensation sensitivity (경영성과와 경영자 현금보상 민감도)

  • Shin, Sung-Wook
    • Management & Information Systems Review / v.32 no.1 / pp.259-272 / 2013
  • This paper documents that managers' cash compensation is more sensitive to negative stock returns than to positive stock returns. It also finds that managers' cash compensation reacts symmetrically to accounting earnings and losses. Since stock returns include both unrealized gains and unrealized losses, we expect managers' cash compensation to be less sensitive to stock returns when returns contain unrealized gains (positive returns) than when they contain unrealized losses (negative returns). Accounting earnings, in contrast, exclude unrealized gains but include unrealized losses, so managers' cash compensation should react symmetrically to accounting earnings and losses. Analyzing 5,815 firm-year observations for 2000-2011, we find that managers' cash compensation reacts asymmetrically to stock returns whereas it reacts symmetrically to accounting performance. This finding is consistent with boards of directors seeking to mitigate the ex post settling-up problem that would arise if managers' cash compensation were equally sensitive to positive and negative stock returns.
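The asymmetry described in this abstract is typically tested with a piecewise regression that interacts returns with a negative-return indicator. The sketch below illustrates only that kind of specification on synthetic data; the column names, the control variable, and the coefficients are assumptions, not the paper's actual model or results.

```python
# Illustrative asymmetric-sensitivity regression on synthetic data
# (not the paper's specification; variable names are assumptions).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
ret = rng.normal(0.05, 0.3, 2000)                    # stand-in stock returns
roa = rng.normal(0.04, 0.08, 2000)                   # stand-in accounting performance
comp_change = (0.2 * roa + 0.05 * np.minimum(ret, 0) # pay responds more to losses here
               + 0.01 * np.maximum(ret, 0)
               + rng.normal(0, 0.02, 2000))
df = pd.DataFrame({"comp_change": comp_change, "stock_return": ret, "roa": roa})
df["neg"] = (df["stock_return"] < 0).astype(int)     # negative-return indicator

# comp_change = b0 + b1*neg + b2*ret + b3*neg*ret + b4*roa
# Asymmetric sensitivity to stock returns shows up as a significant b3 (extra
# sensitivity when returns are negative); symmetry to accounting earnings would
# mean the analogous interaction for roa is insignificant.
model = smf.ols("comp_change ~ neg + stock_return + neg:stock_return + roa",
                data=df).fit()
print(model.params)
```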


Efficient Collaboration Method Between CPU and GPU for Generating All Possible Cases in Combination (조합에서 모든 경우의 수를 만들기 위한 CPU와 GPU의 효율적 협업 방법)

  • Son, Ki-Bong;Son, Min-Young;Kim, Young-Hak
    • KIPS Transactions on Computer and Communication Systems / v.7 no.9 / pp.219-226 / 2018
  • One systematic way to generate all possible cases of a combination is to construct a combination tree, whose time complexity is O($2^n$). A combination tree is used for various purposes, such as the graph homogeneity problem and the initial model for calculating frequent item sets. However, algorithms that must search all cases of a combination are difficult to use in practice because of their high time complexity. Nevertheless, as data volumes grow and more studies attempt to exploit such data, the need to enumerate all cases is increasing. Recently, as GPU environments have become popular and easily accessible, various attempts have been made to reduce execution time by parallelizing algorithms that have high time complexity in a serial environment. Because the usual method of generating all cases of a combination is sequential and its sub-tasks are unevenly sized, it is not well suited to parallel implementation. The efficiency of a parallel algorithm is maximized when all threads are given tasks of similar size. In this paper, we propose a method for efficient collaboration between the CPU and GPU to parallelize the problem of generating all cases. To evaluate the performance of the proposed algorithm, we analyze its theoretical time complexity and compare its execution time with those of other algorithms in CPU and GPU environments. Experimental results show that the proposed CPU-GPU collaboration algorithm maintains a balance between the execution times of the CPU and GPU compared with previous algorithms, and that its execution time improves remarkably as the number of elements increases.
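The load-balancing idea the abstract highlights, giving every worker a task of similar size, can be illustrated by unranking combinations: each worker receives a contiguous block of lexicographic ranks of nearly equal length and reconstructs its combinations independently. This is a minimal single-machine sketch of that idea, not the paper's CPU-GPU algorithm; the functions and worker split are illustrative assumptions.

```python
# Balanced partitioning of C(n, k) combinations across workers by lexicographic rank.
from math import comb

def unrank_combination(rank, n, k):
    """Return the k-combination of {0..n-1} with the given lexicographic rank."""
    result, x = [], 0
    while k > 0:
        count = comb(n - x - 1, k - 1)   # combinations whose next element is x
        if rank < count:
            result.append(x)
            k -= 1
        else:
            rank -= count
        x += 1
    return result

def worker_ranges(n, k, num_workers):
    """Split the C(n, k) ranks into nearly equal contiguous ranges, one per worker."""
    total = comb(n, k)
    step = -(-total // num_workers)      # ceiling division
    return [(w * step, min((w + 1) * step, total)) for w in range(num_workers)]

if __name__ == "__main__":
    n, k = 6, 3
    for start, end in worker_ranges(n, k, num_workers=4):
        block = [unrank_combination(r, n, k) for r in range(start, end)]
        print(f"ranks {start}-{end - 1}: {block}")
```

Because every worker reconstructs its own combinations from rank indices alone, no worker depends on another's output and the sub-task sizes differ by at most one rank.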

Analysis of the applicability of parameter estimation methods for a transient storage model (저장대모형의 매개변수 산정을 위한 최적화 기법의 적합성 분석)

  • Noh, Hyoseob;Baek, Donghae;Seo, Il Won
    • Journal of Korea Water Resources Association / v.52 no.10 / pp.681-695 / 2019
  • The Transient Storage Model (TSM) is one of the most widely used models for describing complex solute transport in natural rivers, characterizing river properties with four key parameters. The TSM parameters are estimated via inverse modeling: parameter estimation is carried out by solving an optimization problem that finds the simulated curve best fitted to the measured curve obtained from a tracer test. Several studies have reported uncertainty in parameter estimation arising from the non-convexity of the problem. In this study, we assessed the best combination of optimization method and objective function for TSM parameter estimation using Cheong-mi Creek tracer test data. To find the optimization setting that guarantees convergence and speed, Evolutionary Algorithm (EA) based global optimization methods, such as CCE of SCE-UA and MCCE of SP-UCI, and error-based objective functions were compared using the Shuffled Complex-Self Adaptive Hybrid EvoLution (SC-SAHEL) framework. Overall results showed that multi-EA SC-SAHEL with the Percent Mean Squared Error (PMSE) objective function is the best optimization setting, being the fastest and most stable in convergence.
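Formulated this way, TSM calibration is a black-box global optimization over four parameters. The sketch below shows only that formulation with an off-the-shelf evolutionary optimizer and a PMSE-style objective; it is not the SC-SAHEL framework, the curve generator is a toy surrogate rather than a real transient storage solver, its parameters are not the actual TSM parameters, and the bounds and error normalization are assumptions.

```python
# TSM-style calibration sketch: fit a simulated curve to "measured" tracer data
# by minimizing a percent-mean-squared-error objective with a global optimizer.
import numpy as np
from scipy.optimize import differential_evolution

def simulate_curve(params, times):
    """Toy breakthrough-curve surrogate; replace with an actual TSM routing model."""
    peak_time, spread, scale, tail = params
    main = scale * np.exp(-((times - peak_time) ** 2) / (2 * spread ** 2))
    storage_tail = scale * tail * np.exp(-np.maximum(times - peak_time, 0) / (5 * spread))
    return main + storage_tail

def pmse(params, times, observed):
    """PMSE-style error between simulated and measured concentrations (one plausible form)."""
    simulated = simulate_curve(params, times)
    return 100.0 * np.mean(((observed - simulated) / observed.max()) ** 2)

# Synthetic "tracer test" observations standing in for field data.
times = np.linspace(0, 200, 400)
observed = simulate_curve([60.0, 12.0, 3.0, 0.2], times)
observed += np.random.default_rng(0).normal(0, 0.02, times.size)

bounds = [(10, 150), (1, 50), (0.1, 10), (0.0, 1.0)]   # assumed search ranges
result = differential_evolution(pmse, bounds, args=(times, observed), seed=0, tol=1e-8)
print("estimated parameters:", result.x, "PMSE:", result.fun)
```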

Study for Feature Selection Based on Multi-Agent Reinforcement Learning (다중 에이전트 강화학습 기반 특징 선택에 대한 연구)

  • Kim, Miin-Woo;Bae, Jin-Hee;Wang, Bo-Hyun;Lim, Joon-Shik
    • Journal of Digital Convergence / v.19 no.12 / pp.347-352 / 2021
  • In this paper, we propose a method for finding feature subsets that are effective for classification in an input dataset by using a multi-agent reinforcement learning method. In the field of machine learning, it is crucial to find features suitable for classification. A dataset may have numerous features; while some features may be effective for classification or prediction, others may have little or even negative effects on the results. In machine learning problems, feature selection for increasing classification or prediction accuracy is a critical problem. To solve this problem, we propose a feature selection method based on reinforcement learning. Each feature has one agent, which determines whether the feature is selected. After rewards are obtained for the feature subsets that are selected and not selected by the agents, the Q-value of each agent is updated by comparing the rewards. The reward comparison of the two subsets helps agents determine whether their actions were right. These processes are repeated for a given number of episodes, and finally, the features are selected. As a result of applying this method to the Wisconsin Breast Cancer, Spambase, Musk, and Colon Cancer datasets, accuracy improvements of 0.0385, 0.0904, 0.1252 and 0.2055 were obtained, respectively, and final classification accuracies of 0.9789, 0.9311, 0.9691 and 0.9474 were achieved, respectively. This shows that the proposed method can properly select features that are effective for classification and increase classification accuracy.
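A minimal sketch of the one-agent-per-feature idea described above follows: each agent keeps a pair of Q-values for "drop"/"select", acts epsilon-greedily, and the scores of the selected subset and its complement are compared to reward each agent. The learning rate, classifier, episode count, and exact update rule are assumptions, not the paper's settings.

```python
# Multi-agent feature selection sketch on the Wisconsin Breast Cancer dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def score(X, y, mask):
    """Cross-validated accuracy of a classifier trained on the masked features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, mask], y, cv=3).mean()

def select_features(X, y, episodes=30, alpha=0.1, eps=0.2):
    rng = np.random.default_rng(0)
    n_features = X.shape[1]
    Q = np.zeros((n_features, 2))                     # Q[i, a]: a=0 drop, a=1 select
    for _ in range(episodes):
        actions = np.where(rng.random(n_features) < eps,
                           rng.integers(0, 2, n_features),   # explore
                           Q.argmax(axis=1))                 # exploit
        selected = actions.astype(bool)
        reward_sel = score(X, y, selected)            # accuracy of the selected subset
        reward_not = score(X, y, ~selected)           # accuracy of the complement
        for i, a in enumerate(actions):
            # Comparing the two subset rewards tells each agent whether its action helped.
            r = reward_sel - reward_not if a == 1 else reward_not - reward_sel
            Q[i, a] += alpha * (r - Q[i, a])          # simple incremental update
    return Q.argmax(axis=1).astype(bool)

X, y = load_breast_cancer(return_X_y=True)
mask = select_features(X, y)
print("selected features:", np.flatnonzero(mask))
print("accuracy with selected subset:", score(X, y, mask))
```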

An Analysis of the Questions Presented in Chapters of Pattern Area in Elementary School Mathematics (초등수학의 규칙성 영역 단원에 제시된 발문의 특성 분석)

  • Do, Joowon
    • Education of Primary School Mathematics / v.24 no.4 / pp.189-202 / 2021
  • The teacher's questions presented in a problem-solving situation stimulate students' mathematical thinking and lead them to find a solution to the given problem situation. In this research, the types and functions of questions presented in the Pattern-area chapters of the 2015 revised elementary school mathematics textbooks were compared and analyzed by grade cluster. Through this, we sought implications for teaching and learning by identifying the characteristics of the questions and how to use them effectively when teaching the Pattern area. The research found that as the grade cluster increased, the number of questions per lesson presented in the Pattern area increased. Across grade clusters, the most frequent types of questions in the textbooks were, in order, reasoning questions, factual questions, and open questions. The Pattern-area chapters presented relatively many questions that help students guess, invent, and solve problems, or that support mathematical reasoning in the process of finding rules. It can be inferred that these types of questions and their functions are related to the learning content and characteristics of each grade cluster. Therefore, the results of this research can serve as reference material for devising questions when teaching the Pattern area and can further contribute to the development of teaching and learning in this area.

Implementation of Urinalysis Service Application based on MobileNetV3 (MobileNetV3 기반 요검사 서비스 어플리케이션 구현)

  • Gi-Jo Park;Seung-Hwan Choi;Kyung-Seok Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.4 / pp.41-46 / 2023
  • Human urine is produced as waste products are excreted from the blood; it is easy to collect and contains various substances. Urinalysis is used to check for diseases, health conditions, and urinary tract infections. There are three methods of urinalysis: the physical property test, the chemical test, and the microscopic test, and chemical test results can be easily confirmed using urine test strips. A variety of items can be tested on a urine test strip, through which various diseases can be identified. Recently, with the spread of smartphones, research on reading urine test strips with a smartphone has been conducted. One existing method detects and reads the color change of a urine test strip using RGB values and a color-difference formula, but its accuracy is lowered by various environmental factors. This paper applies a deep learning model to solve this problem. In particular, color discrimination of a urine test strip on a smartphone is improved using a lightweight CNN (Convolutional Neural Network) model. CNNs are useful for image recognition and pattern finding, and lightweight versions make it possible to run the model on a smartphone and extract accurate urine test results. Urine test strips were photographed in various environments to prepare training images, and a urinalysis service application was designed using MobileNetV3.
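The model side of such an application can be set up roughly as below: a pretrained MobileNetV3-Small backbone from torchvision with its classification head replaced so it predicts the result class of a photographed test-strip pad. This is an assumed sketch for illustration, not the authors' application code; the number of classes and the preprocessing are assumptions.

```python
# MobileNetV3-Small classifier sketch for test-strip pad color discrimination.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 5  # assumed number of result levels for one test item

def build_model(num_classes=NUM_CLASSES):
    # ImageNet-pretrained backbone with a new classification head.
    model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
    in_features = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model

# Preprocessing matching the pretrained backbone's expected input.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict(model, pil_image):
    """Return the predicted class index for one cropped test-strip pad image."""
    model.eval()
    logits = model(preprocess(pil_image).unsqueeze(0))
    return int(logits.argmax(dim=1))
```

In a deployed app the trained head would typically be exported to a mobile runtime, but the classification logic is the same as in this sketch.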

Extraction of Landmarks Using Building Attribute Data for Pedestrian Navigation Service (보행자 내비게이션 서비스를 위한 건물 속성정보를 이용한 랜드마크 추출)

  • Kim, Jinhyeong;Kim, Jiyoung
    • KSCE Journal of Civil and Environmental Engineering Research / v.37 no.1 / pp.203-215 / 2017
  • Recently, interest in Pedestrian Navigation Services (PNS) has increased with the diffusion of smartphones and improvements in positioning technology, and landmarks are an efficient basis for pedestrian route guidance given the characteristics of pedestrian movement and the success rate of path finding. Accordingly, research on extracting landmarks has progressed. However, preceding studies are limited in that they only considered differences between buildings and did not consider the visual attention drawn by the maps displayed in a PNS. This study addresses this problem by defining building attributes as local variables and global variables: local variables reflect the saliency of buildings by representing differences between buildings, and global variables reflect visual attention by representing the inherent characteristics of buildings. This study also considers the connectivity of the network and resolves the overlap of landmark candidate groups using a network Voronoi diagram. To extract landmarks, we defined building attribute data based on preceding research. Next, we selected choice points for pedestrians in the pedestrian network data and determined landmark candidate groups at each choice point. Building attribute data were calculated for the extracted landmark candidate groups, and finally landmarks were extracted by principal component analysis. We applied the proposed method to a part of Gwanak-gu, Seoul, and evaluated the extracted landmarks by comparing them with the labels and landmarks used by portal sites such as NAVER and DAUM. In conclusion, 132 (60.3%) of the 219 landmarks of NAVER and DAUM were also extracted by the proposed method, and we confirmed that 228 extracted landmarks that do not appear as labels or landmarks in NAVER or DAUM were helpful for determining changes of direction in local-level path finding.
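The final scoring step, ranking candidate buildings at a choice point by a principal-component score over their standardized attributes, can be sketched as follows. The attribute columns and values are invented for illustration and do not correspond to the paper's variable set or weighting.

```python
# Rank landmark candidates at one choice point by their first principal component.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical candidate group at one choice point: rows = buildings,
# columns = local/global attribute values.
candidates = pd.DataFrame(
    {"area": [120, 800, 300], "height": [12, 45, 20],
     "name_length": [4, 9, 6], "poi_count": [1, 14, 3]},
    index=["building_A", "building_B", "building_C"],
)

scores = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(candidates))
# Note: the sign of a principal component is arbitrary; in practice the loadings
# would be inspected so that larger scores correspond to more salient buildings.
ranking = candidates.index[np.argsort(-scores.ravel())]
print("landmark candidate ranking:", list(ranking))
```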

The Characteristics and Medical Utilization of Migrant Workers (외국인 노동자의 특성과 의료이용 실태)

  • Ju, Sun Me
    • Korean Journal of Occupational Health Nursing / v.7 no.2 / pp.164-176 / 1998
  • This study deals with the current medical utilization of migrant workers and their characteristics. Its purpose is to provide basic information for establishing a proper medical policy. A self-developed questionnaire was used, answered by 453 migrant workers engaged in manufacturing and non-technical work in 10 cities: Seoul, Inchon, Namyangju, Sungnam, Kwangju, Pyungchon, Kunpo, Kimpo and Masuk in Kyungki-do, and Chunan in Chungchungnam-do. In addition, 303 medical records of those who had visited a free medical check-up center were analyzed. Data were collected over 6 months, from November 1st, 1996 to April 30th, 1997. The characteristics of migrant workers and their current medical utilization were analyzed by percentage, and the relation between the characteristics and medical utilization was analyzed using the ${\chi}^2$-test, t-test, and ANOVA. The findings of this study were as follows: 1) Sixteen nationalities were represented; the largest group was Filipinos at 32.0%. Among the 16 nationalities, Southeastern and Northern Asians were 48.9%, Southwestern Asians 46.5%, and the rest 7.3%. Men were 81.0%, those aged 26 to 30 were 39.0%, high school graduates 92.7%, Christians 56.3%, the unmarried 55.4%, and those earning 600,000 to 800,000 Won 53.8%, with an average monthly payment of 669,810 Won. As for residence, 31.9% had resided for over 3 years, and illegal residents reached 77.4%. As for Korean language, those who spoke at an intermediate level were 5.6%. 2) As for the kind of work and working conditions, manufacturing accounted for 81.1%, 4 off-days per month 72.2%, and 9-10 working hours per day 42.1%. As for accommodation, 62.6% resided at the factory and 40.2% had one or two roommates. 3) Regarding health behavior, 89.4% of migrant workers had 3 meals a day, 70.9% did not drink alcohol, and 73.5% did not smoke. 4) Regarding health status, 71.8% perceived themselves as healthy, and 76.1% thought they had had no illness before coming to Korea. Among those who recognized an illness, 35.3% had problems in the circulatory system, 19.1% in the respiratory system and ENT, and 19.1% in the nervous system; 66.2% of those having an illness had already been sick when coming to Korea. 5) During the last month, 79.2% reported having no illness. Among the sick, 31.6% had problems in the circulatory system, 23.7% in the nervous system, and 21.1% in the respiratory system; 60.3% of the sick had not been treated at that time. 6) Sorting the symptoms of those who visited the free medical check-up, dental care was 24.2%, orthopedics 14.0%, and the digestive system 13.8%. Toothache was 34.4%, stomach problems 11.6%, upper respiratory inflammation 10.2%, and back pain 5.9%. On average they visited the free medical check-up 1-2 times; by symptom, epilepsy 25.5 times, heart and vascular disease 9 times, constipation 2.8 times, neurosis 2.38 times, and stomach problems 2.34 times. 7) The medical service most frequently visited by migrant workers was the hospital, the most mentioned reason being good treatment (36.3%). The medical service that satisfied migrant workers most was also the hospital (64.3%), and the reason for satisfaction was likewise good treatment (45.9%). 8) 77.2% of respondents did not spend money on medical care. The average monthly medical cost was 25,100 Won, 3.7% of income. Those who had no medical security were 73.4%; among them, 67.7% got discounts from hospitals or support from their workplace or religious organizations. 9) As for differences in medical utilization according to the characteristics of migrant workers, legal workers and non-Korean speakers used hospitals more frequently. 10) Those most satisfied with hospital services were female workers, Hindus and Buddhists, legal workers, and manufacturing workers. 11) Christians, those who had 3 meals a day, and those who perceived themselves as healthy mostly had no illness. In summary, most migrant workers in Korea are from Asia. They are well educated but work in manufacturing, often illegally. Their average income is under 700,000 Won, which is not enough to cover medical costs. They have no medical security, and medical fees are supported by religious organizations or discounted. Considering these facts, a proper government medical policy should be established.


Recommender Systems using Structural Hole and Collaborative Filtering (구조적 공백과 협업필터링을 이용한 추천시스템)

  • Kim, Mingun;Kim, Kyoung-Jae
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.107-120 / 2014
  • This study proposes a novel recommender system that uses structural hole analysis to reflect qualitative and emotional information in the recommendation process. Although collaborative filtering (CF) is the most popular recommendation algorithm, it has some limitations, including the scalability and sparsity problems. The scalability problem arises when the numbers of users and items become quite large: CF cannot scale up because of the large computation time required to find neighbors in the user-item matrix as the numbers of users and items increase on real-world e-commerce sites. Sparsity is a common problem of most recommender systems because users generally evaluate only a small portion of the items. In addition, the cold-start problem is a special case of the sparsity problem in which users or items newly added to the system have no ratings at all. When users' preference data are sparse, two users or items are unlikely to have common ratings, so CF ends up predicting ratings from a very limited number of similar users. Moreover, it may produce biased recommendations because similarity weights may be estimated from only a small portion of the rating data. In this study, we point out a further limitation of conventional CF: it does not consider qualitative and emotional information about users in the recommendation process, because it utilizes only the preference scores of the user-item matrix. To address this limitation, this study proposes a cluster-indexing CF model that uses structural hole analysis for recommendations. In general, a structural hole is a location that connects two separate actors without any redundant connections in the network. The actor who occupies a structural hole can easily access non-redundant, varied, and fresh information. Therefore, that actor may be an important person in the focal network and may be the representative person of the focal subgroup, so his or her characteristics may represent the general characteristics of the users in that subgroup. In this sense, we can distinguish the friends and strangers of the focal user using structural hole analysis. This study uses structural hole analysis to select structural holes in subgroups as initial seeds for a cluster analysis. First, we gather data about users' preference ratings for items and their social network information; for this we developed a data collection system. Then we perform structural hole analysis and find the structural holes of the social network. Next, we use these structural holes as cluster centroids for the clustering algorithm. Finally, this study makes recommendations using CF within each user's cluster and compares the recommendation performance with that of comparative models. The experiments on the proposed model consist of two parts. The first is the structural hole analysis, for which this study employs UCINET version 6, a software package for the analysis of social network data. The second performs the modified clustering and the CF using the result of the cluster analysis; for this we developed an experimental system using VBA (Visual Basic for Applications) in Microsoft Excel 2007.
For the modified clustering experiment, this study analyzes clustering based on a similarity measure defined as the Pearson correlation between user preference rating vectors. In addition, this study uses the 'all-but-one' approach for the CF experiment. To validate the effectiveness of the proposed model, we apply three comparative types of CF models to the same dataset. The experimental results show that the proposed model outperforms the other comparative models. In particular, the proposed model performs significantly better than the two comparative models that use cluster analysis, according to the statistical significance test. However, the difference between the proposed model and the naive model is not statistically significant.
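A condensed sketch of this pipeline appears below: users with the lowest Burt's constraint are treated as structural-hole occupants and seed the clusters, and ratings are then predicted with a simple user-based CF inside each cluster. The data structures, the use of k-means in place of the paper's cluster-indexing procedure, and the mean-based prediction rule are assumptions for illustration, not the paper's implementation.

```python
# Structural-hole-seeded clustering followed by within-cluster CF (toy example).
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

def cluster_by_structural_holes(friend_graph, ratings, n_clusters=2):
    """Graph nodes are assumed to be user indices 0..n_users-1 matching `ratings` rows."""
    constraint = nx.constraint(friend_graph)                     # low value -> structural hole
    seeds = sorted(constraint, key=constraint.get)[:n_clusters]  # hole occupants as seeds
    km = KMeans(n_clusters=n_clusters, init=ratings[seeds], n_init=1, random_state=0)
    return km.fit_predict(ratings)

def predict_within_cluster(ratings, labels, user, item):
    """Mean rating of `item` among the user's cluster members (very simple CF)."""
    members = np.flatnonzero(labels == labels[user])
    known = ratings[members, item]
    known = known[known > 0]
    return known.mean() if known.size else ratings[ratings > 0].mean()

# Tiny toy data: 6 users x 4 items (0 = unrated) and a small friendship graph.
ratings = np.array([[5, 4, 0, 1], [4, 5, 1, 0], [5, 5, 2, 1],
                    [1, 0, 5, 4], [0, 1, 4, 5], [2, 1, 5, 5]], dtype=float)
G = nx.Graph([(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)])
labels = cluster_by_structural_holes(G, ratings)
print("predicted rating:", predict_within_cluster(ratings, labels, user=0, item=2))
```

Seeding the clusters with low-constraint users is what ties the social network information to the otherwise purely rating-based CF step.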

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, the antecedent search is carried out over the candidates. If the antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected. In this paper, however, we propose that antecedent search be viewed as the problem of assigning antecedent indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus in which gold-standard answers, such as zero anaphors and their possible antecedents, are provided. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer and omitted subject or object cases are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor; this is based on binary classification using a regular SVM. The experiment showed that our system achieves F1 = 68.58%, which means that a state-of-the-art system can be developed with our technique.
It is expected that future work enabling the system to utilize semantic information could lead to a significant performance improvement.
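To make the sequence-labeling view concrete, the toy sketch below labels the candidate noun phrases preceding a zero anaphor as ANT or O with a CRF sequence labeler, used here only as a stand-in for the paper's structural SVM; the candidate phrases, features, and labels are invented for illustration.

```python
# Antecedent search framed as sequence labeling (CRF stand-in for a structural SVM).
import sklearn_crfsuite

def np_features(candidates, i):
    """Feature dict for the i-th noun-phrase candidate preceding a zero anaphor."""
    return {
        "head": candidates[i]["head"],
        "case": candidates[i]["case"],                 # e.g. subject/object marker
        "distance_to_gap": len(candidates) - i,        # distance to the omitted argument
        "matches_title": candidates[i]["matches_title"],
    }

# One training instance per zero anaphor: its preceding candidates and ANT/O labels.
train_candidates = [
    [{"head": "Sejong", "case": "subj", "matches_title": True},
     {"head": "script", "case": "obj", "matches_title": False}],
    [{"head": "palace", "case": "subj", "matches_title": False},
     {"head": "king", "case": "subj", "matches_title": True}],
]
train_labels = [["ANT", "O"], ["O", "ANT"]]

X_train = [[np_features(c, i) for i in range(len(c))] for c in train_candidates]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, train_labels)

test = [{"head": "Sejong", "case": "subj", "matches_title": True},
        {"head": "alphabet", "case": "obj", "matches_title": False}]
print(crf.predict([[np_features(test, i) for i in range(len(test))]]))
```

The key contrast with the binary-classification baseline is that the label of each candidate is predicted jointly with its neighbors in the sequence rather than independently.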