• Title/Summary/Keyword: IT Technique


Predicting the Direction of the Stock Index by Using a Domain-Specific Sentiment Dictionary (주가지수 방향성 예측을 위한 주제지향 감성사전 구축 방안)

  • Yu, Eunji; Kim, Yoosin; Kim, Namgyu; Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.19 no.1 / pp.95-110 / 2013
  • Recently, the amount of unstructured data being generated through a variety of social media has been increasing rapidly, resulting in the increasing need to collect, store, search for, analyze, and visualize this data. This kind of data cannot be handled appropriately by using the traditional methodologies usually used for analyzing structured data, because of its vast volume and unstructured nature. In this situation, many attempts are being made to analyze unstructured data such as text files and log files through various commercial or noncommercial analytical tools. Among the various contemporary issues dealt with in the literature of unstructured text data analysis, the concepts and techniques of opinion mining have been attracting much attention from pioneer researchers and business practitioners. Opinion mining or sentiment analysis refers to a series of processes that analyze participants' opinions, sentiments, evaluations, attitudes, and emotions about selected products, services, organizations, social issues, and so on. In other words, many attempts based on various opinion mining techniques are being made to resolve complicated issues that could not otherwise have been solved by existing traditional approaches. One of the most representative attempts using the opinion mining technique may be the recent research that proposed an intelligent model for predicting the direction of the stock index. This model works mainly on the basis of opinions extracted from an overwhelming number of economic news reports. News content published in various media is a traditional example of unstructured text data. Every day, a large volume of new content is created, digitized, and subsequently distributed to us via online or offline channels. Many studies have revealed that we make better decisions on political, economic, and social issues by analyzing news and other related information. In this sense, we expect to predict the fluctuation of stock markets partly by analyzing the relationship between economic news reports and the pattern of stock prices. So far, in the literature on opinion mining, most studies, including ours, have utilized a sentiment dictionary to elicit sentiment polarity or sentiment values from a large number of documents. A sentiment dictionary consists of pairs of selected words and their sentiment values. Sentiment classifiers refer to the dictionary to formulate the sentiment polarity of words, of sentences in a document, and of the whole document. However, most traditional approaches share a common limitation in that they do not consider the flexibility of sentiment polarity; that is, the sentiment polarity or sentiment value of a word is fixed and cannot be changed in a traditional sentiment dictionary. In the real world, however, the sentiment polarity of a word can vary depending on the time, situation, and purpose of the analysis, and can even be contradictory in nature. This flexibility of sentiment polarity motivated our study. In this paper, we argue that sentiment polarity should be assigned not merely on the basis of the inherent meaning of a word but on the basis of its ad hoc meaning within a particular context. To implement this idea, we present an intelligent investment decision-support model based on opinion mining that performs the scraping and parsing of massive volumes of economic news on the web, tags sentiment words, classifies the sentiment polarity of the news, and finally predicts the direction of the next day's stock index. In addition, we applied a domain-specific sentiment dictionary instead of a general-purpose one to classify each piece of news as either positive or negative. For performance evaluation, we performed intensive experiments and investigated the prediction accuracy of our model. For the experiments to predict the direction of the stock index, we gathered and analyzed 1,072 articles about stock markets published by "M" and "E" media between July 2011 and September 2011.
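
The dictionary-lookup step described above is easy to picture in code. Below is a minimal sketch, assuming a hand-built word-to-polarity map and a simple voting rule over a day's articles; the words, weights, and zero threshold are illustrative stand-ins, not the paper's actual domain-specific dictionary.

```python
# Minimal sketch: score articles with a domain-specific sentiment dictionary,
# then vote over the day's articles to predict the next day's index direction.
# The dictionary entries and the zero threshold are illustrative assumptions.

DOMAIN_SENTIMENT = {            # word -> polarity tuned for the stock domain
    "surge": +1.0, "rally": +1.0, "upgrade": +0.5,
    "plunge": -1.0, "default": -1.0, "downgrade": -0.5,
}

def article_polarity(tokens: list[str]) -> float:
    """Sum dictionary polarities over the article's tagged sentiment words."""
    return sum(DOMAIN_SENTIMENT.get(t.lower(), 0.0) for t in tokens)

def predict_direction(day_articles: list[list[str]]) -> str:
    """Predict the next day's index direction from the day's net polarity."""
    score = sum(article_polarity(a) for a in day_articles)
    return "UP" if score > 0 else "DOWN"

print(predict_direction([["Exports", "surge"], ["Bank", "downgrade"]]))  # UP
```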

Evaluating the Efficiency of Skin Flash Application for Left Breast IMRT (왼쪽 유방암 세기변조방사선 치료시 Skin Flash 적용에 대한 유용성 평가)

  • Lim, Kyoung Dal; Seo, Seok Jin; Lee, Je Hee
    • The Journal of Korean Society for Radiation Therapy / v.30 no.1_2 / pp.49-63 / 2018
  • Purpose : The purpose of this study is to investigate the changes in the treatment plans and to compare skin dose with and without the skin flash. To determine the optimal application of the skin flash, the changes in skin dose of each plan with various skin-flash thicknesses were also measured and analyzed. Methods and Material : An anthropomorphic phantom was CT-scanned for this study. A 2-field hybrid IMRT plan and a 6-field static IMRT plan were generated with the Eclipse (ver. 13.7.16, Varian, USA) RTP system. Additional plans were generated from each IMRT plan by changing the skin-flash thickness to 0.5 cm, 1.0 cm, 1.5 cm, 2.0 cm, and 2.5 cm. MUs and maximum doses were also recorded. The treatment machine was a VitalBeam (Varian Medical Systems, USA) at 6 MV, and the measuring device was a metal-oxide-semiconductor field-effect transistor (MOSFET). Skin doses were measured at the upper (1), middle (2), and lower (3) positions from the center of the left breast of the phantom. Skin doses at additional points, shifted medially and laterally by 0.5 cm, were also measured. Results : The reference values of 2F-hIMRT were 206.7 cGy at point 1, 186.7 cGy at point 2, and 222 cGy at point 3; the reference values of 6F-sIMRT were 192 cGy at point 1, 213 cGy at point 2, and 215 cGy at point 3. Against these reference values, the first measurement point in 2F-hIMRT read 261.3 cGy with the 2.0 cm and 2.5 cm skin flashes, and the highest dose differences were 26.1 %diff and 5.6 %diff, respectively. The third measurement point showed 245.3 cGy and 10.5 %diff with the 2.5 cm skin flash. In 6F-sIMRT, the highest dose difference, 216.3 cGy and 12.7 %diff, was observed when the 2.0 cm skin flash was applied at the first measurement point, and for each measurement point the dose difference was largest at 2.0 cm, not at 2.5 cm. For the medial 0.5 cm shift points without skin flash, the measured values were -75.2 %diff and -70.1 %diff in 2F-hIMRT, and -14.8, -12.5, and -21.0 %diff at the 1st, 2nd, and 3rd measurement points in 6F-sIMRT, respectively. In general, both treatment plans showed increases in total MU, maximum dose, and %diff as the skin-flash thickness increased, with some exceptions. The skin-dose difference with the 0.5 cm skin flash was the lowest, remaining below 20 % under all conditions. Conclusion : Minimizing the skin-flash thickness to 0.5 cm is considered most ideal because it keeps MUs down and lowers maximum doses. In addition, MUs, maximum doses, and skin-dose differences were found not to increase indefinitely as the skin-flash thickness increased. If the error margin caused by the PTV or other factors is less than 1.0 cm, the skin-flash technique is considered to offer many advantages compared with treatment without it.
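
For readers unfamiliar with the %diff figures quoted above, the sketch below assumes the usual percent-difference definition against the no-flash reference dose; the paper's exact definition may differ slightly, since this formula gives 26.4 rather than the reported 26.1 for the first example.

```python
def percent_diff(measured_cgy: float, reference_cgy: float) -> float:
    """Percent difference of a measured skin dose vs. the no-flash reference."""
    return (measured_cgy - reference_cgy) / reference_cgy * 100.0

# Values quoted above: 261.3 cGy measured vs. the 206.7 cGy reference at point 1
print(round(percent_diff(261.3, 206.7), 1))  # 26.4
```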


Location and Construction Characteristics of Imdaejeong Wonlim based on Documentation (기문(記文)을 중심으로 고찰한 임대정원림(臨對亭園林)의 입지 및 조영 특성)

  • Rho, Jae-Hyun; Park, Tae-Hee; Shin, Sang-Sup; Kim, Hyoun-Wuk
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.29 no.4 / pp.14-26 / 2011
  • Imdaejeong Wonlim is located on the edge of Sangsa Village in Sapyeong-ri, Daepyeong-myeon, Hwasun-gun, Jeollanam-do, facing northwest. It was laid out by Sa-ae Min Juhyeon in 1862 on the basis of Gobanwon, built by Nam Eongi in the 16th century, against the backdrop of Mt. Bongjeong and facing Sapyeong Stream. As the water flows from west to east in the shape of a crane, the area is regarded as a propitious site standing for prosperity and happiness. The site shows the distinct features of a wonlim surrounding Imdaejeong in multiple layers, consisting of five districts: the front yard, where a landmark stone engraved with the letters 'Janggujiso of Master Sa-ae' and junipers are harmoniously arranged; the inner garden of the upper pavilion area, ranging from the pavilion to a square pond with a small island in the middle; the Sugyeongwon of the lower pavilion area, consisting of two ponds with a depiction of the three Taoist hermits; the forest of Mt. Bongjeong; and the outer garden, including Sapyeong Stream and farmland. According to the documentation and the results of the on-site investigation, Imdaejeong Wonlim was clearly motivated by the Byeoseo Wonlim tradition, which realized the idea of 'going back to one's hometown after resignation', following the motif of the Janggujiso, a hideout for pursuing the ideology of 'training the mind and fostering innate nature' on a peaceful site surrounded by water and mountains, as well as the motif of the Sesimcheo (洗心處), a place to become unified with the morality of Mother Nature. In addition, it embodies various imaginary landscapes such as Pihangji, Eupcheongdang, and the square pond with an island and the depiction of the three Taoist hermits, based on the notion that 'the further the scent flies away, the fresher it becomes', which originates from the Aelyeonseol (愛蓮說). In terms of techniques of natural landscape treatment, diverse devices are found in Imdaejeong Wonlim: the distant view of Mt. Bongjeong; the pulled-in view intended for the transparent beauty of moonlight; the circling view of natural and cultural sceneries on every side; the borrowed scenery of pastoral rural life adopted as an opposite view; the looked-up view of Sulyundaero; the looked-down view of the pond; the static views from the pavilion and paths; the close view of water spaces such as the stream and ponds; the mushroom-and-umbrella-like view of Imdaejeong; the vista of the pond surrounded by willows; the imaginary view of engraved letters meaning 'widen knowledge by studying objectives'; and the selected view comprising sunrise and sunset at the same time. In the early period of construction, various plants seem to have been planted, albeit different from the present ones, such as Ginkgo biloba, Phyllostachys spp., Salix spp., Pinus densiflora, Abies holophylla, Morus bombycis, Juglans mandschurica, Paulownia coreana, Prunus mume, Nelumbo nucifera, etc. Overall, the planting reflected the dignity of Confucianism or bore aspects of a semantic landscape implying Taoist taste and the phoenix motif wishing for future prosperity. Furthermore, a diversity of planting methods was pursued, such as linear planting along the periphery of the pond, bosquet and circle planting around the pavilion, spot planting using green trees, the solitary planting of a commemorative Paulownia coreana, and the opposite planting presenting Abies holophylla as yin and yang.

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun; Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the building of the predictive models from the perspective of two different analyses. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction time. In order to predict when firms increase capital by issuing new stocks, the prediction time is categorized as one year, two years, or three years later. Therefore, a total of six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method; it builds trees that label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited to high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the others. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build the C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as the input variables of each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for data after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for data before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different industries show different patterns of rights issues. We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained by using other data mining techniques such as neural networks, logistic regression, and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
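
As a minimal sketch of the modeling step, the snippet below uses scikit-learn's CART decision tree as a stand-in for C5.0, which is proprietary and not available in scikit-learn; the file name and column names are hypothetical, and the 60/40 split follows the abstract.

```python
# Sketch: train/test a decision tree on financial analysis indices.
# CART (scikit-learn) stands in for the paper's C5.0; data layout is assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("financial_indices.csv")   # hypothetical TS2000-style export
X = df.drop(columns=["rights_issue"])       # the 84 input indices
y = df["rights_issue"]                      # output: issued / not issued

# 60% of the data for model building, 40% for model test, as in the abstract
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

tree = DecisionTreeClassifier(max_depth=5).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```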

Clinical Experience of Three Dimensional Conformal Radiation Therapy for Non-Small Cell Lung Cancer (비소세포성 폐암에서 3차원 입체조형 방사선 치료 성적)

  • Choi Eun Kyung; Lee Byong Yong; Kang One Chul; Nho Young Ju; Chung Weon Kuu; Ahn Seung Do; Kim Jong Hoon; Chang Hyesook
    • Radiation Oncology Journal / v.16 no.3 / pp.265-274 / 1998
  • Purpose : This prospective study was conducted to assess the value of three-dimensional conformal radiation therapy (3DCRT) for lung cancer and to determine its potential advantage over current treatment approaches. The specific aims of this study were to 1) find the most ideal 3DCRT technique, 2) establish the maximum tolerable dose that can be delivered with 3DCRT, and 3) identify patients at risk for the development of radiation pneumonitis. Materials and Methods : Beginning in Nov. 1994, 95 patients with inoperable non-small cell lung cancer (stage I: 4, stage II: 1, stage IIIa: 14, stage IIIb: 76) were entered onto this 3D conformal trial. Areas of known disease and elective nodal areas were initially treated to 45 Gy, and then a total dose of 65 to 70 Gy was delivered to the gross disease using the 3DCRT technique. Sixty-nine patients received a total dose of 65 Gy and 26 received 70 Gy. Seventy-eight patients (82.1%) also received concurrent MVP chemotherapy. 3DCRT plans were compared with 2D plans to assess the adequacy of dose delivery to the target volume, dose-volume histograms for normal tissue, and normal tissue complication probabilities (NTCP). Results : Most of the plans (78/95) were composed of non-coplanar multiple (4-8) fields. Coplanar segmented conformal therapy was used in 17 patients, choosing the gantry angles that minimized normal lung exposure in each segment. 3DCRT gave the full dose to nearly 100% of the gross disease target volume in all patients. The mean NTCP for the ipsilateral lung with 3DCRT (range: 0.17-0.43) was 68% of the mean NTCP with 2D treatment planning (range: 0.27-0.66). DVH analysis for the heart showed that the irradiated volume of the heart could be significantly reduced by the non-coplanar 3D approach, especially in the case of left lower lobe lesions. Of the 95 patients evaluable for response, 75 (79%) showed a major response, including 25 (26%) complete responses and 50 (53%) partial responses. One- and two-year overall survival rates of stage III patients were 62.6% and 35.2%, respectively. Twenty percent (19/95) of the patients had pneumonitis: eight patients had grade 1 pneumonitis and 11 had grade 2. Comparison of the average lung NTCP showed a significant difference between patients with and without radiation pneumonitis; the average NTCP for patients without complications was 62% of that for patients with complications. Conclusions : This study showed that non-coplanar multiple (4-8) fields may be one of the ideal plan types for 3DCRT for lung cancer. It also suggested that 3DCRT may provide superior delivery of high-dose radiation with reduced risk to normal tissue and that NTCP can be used as a guideline for dose escalation.
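
The abstract does not say which NTCP model was used; as an illustration only, the sketch below assumes the common Lyman (LKB) formulation for a uniformly irradiated organ, with placeholder whole-lung parameters rather than the study's values.

```python
# Illustrative LKB-style NTCP: NTCP = Phi((D - TD50) / (m * TD50)).
# TD50 and m below are placeholder whole-lung values, not the study's data.
from scipy.stats import norm

def lkb_ntcp(dose_gy: float, td50_gy: float = 24.5, m: float = 0.18) -> float:
    """Normal-tissue complication probability for a uniform organ dose."""
    t = (dose_gy - td50_gy) / (m * td50_gy)
    return float(norm.cdf(t))

print(round(lkb_ntcp(20.0), 2))  # NTCP for an illustrative 20 Gy mean lung dose
```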


Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.55-79 / 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which can easily produce the initial population of the GA. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, the iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the reverse logistics network. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (the opening costs of collection centers 1, 2, and 3 being 10.5, 12.1, and 8.9, respectively) and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; that GA approach has no local search technique such as the IHCM of the proposed HGA approach. CPU time, optimal solution, and optimal setting are used as measures of performance. Two types of RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of RLNCC were programmed in Visual Basic 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are: total number of generations 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs were made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two types of RLNCC, the experimental results show that the HGA performs significantly better than the GA in terms of the optimal solution, though the GA is slightly quicker in terms of CPU time. Finally, it is shown that the proposed HGA approach is more efficient than the conventional GA approach for the two types of RLNCC, since the former has a local search process in addition to the GA search process, while the latter has the GA search process alone. In a future study, much larger RLNCCs will be tested to verify the robustness of our approach.
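
A minimal sketch of the hybrid idea follows: a bit-string GA for the open/close decisions, refined by a hill-climbing pass each generation. The cost function is a toy stand-in for the paper's MIP objective, and crossover is omitted for brevity; all numbers except the three opening costs quoted above are made up.

```python
# Toy hybrid GA: bit-string open/close decisions + IHCM-style local search.
import random

FIXED_COST = [10.5, 12.1, 8.9, 11.3]   # opening costs; first three from the text

def cost(bits):
    """Toy objective: fixed costs of opened centers, penalized unless exactly one opens."""
    open_cost = sum(f for f, b in zip(FIXED_COST, bits) if b)
    return open_cost + (100.0 if sum(bits) != 1 else 0.0)

def hill_climb(bits):
    """Local search: accept any single-bit flip that lowers the cost."""
    best = bits[:]
    for i in range(len(best)):
        cand = best[:]
        cand[i] ^= 1
        if cost(cand) < cost(best):
            best = cand
    return best

random.seed(0)
pop = [[random.randint(0, 1) for _ in FIXED_COST] for _ in range(20)]
for _ in range(100):                     # generations
    pop.sort(key=cost)                   # elitist selection: best half survives
    elite = [hill_climb(b) for b in pop[:10]]
    children = [b[:] for b in elite]
    for b in children:                   # random one-bit mutation
        b[random.randrange(len(b))] ^= 1
    pop = elite + children
best = min(pop, key=cost)
print(best, cost(best))                  # expect the 8.9-cost center opened alone
```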

Geochemical Equilibria and Kinetics of the Formation of Brown-Colored Suspended/Precipitated Matter in Groundwater: Suggestion to Proper Pumping and Turbidity Treatment Methods (지하수내 갈색 부유/침전 물질의 생성 반응에 관한 평형 및 반응속도론적 연구: 적정 양수 기법 및 탁도 제거 방안에 대한 제안)

  • 채기탁;윤성택;염승준;김남진;민중혁
    • Journal of the Korean Society of Groundwater Environment / v.7 no.3 / pp.103-115 / 2000
  • The formation of brown-colored precipitates is one of the serious problems frequently encountered in the development and supply of groundwater in Korea, because it makes the water exceed the drinking water standard in terms of color, taste, turbidity, and dissolved iron concentration, and often results in scaling problems within the water supply system. In groundwaters from the Pajoo area, brown precipitates typically form within a few hours after pumping-out. In this paper we examine the formation of the brown precipitates using equilibrium thermodynamic and kinetic approaches, in order to understand the origin and geochemical pathway of the generation of turbidity in groundwater. The results of this study are used to suggest not only a proper pumping technique to minimize the formation of precipitates but also an optimal design of water treatment methods to improve the water quality. The bed-rock groundwater in the Pajoo area belongs to the Ca-$HCO_3$ type, which evolved through water/rock (gneiss) interaction. Based on SEM-EDS and XRD analyses, the precipitates are identified as amorphous, Fe-bearing oxides or hydroxides. Multi-step filtration with pore sizes of 6, 4, 1, 0.45, and 0.2 $\mu$m shows that the precipitates mostly fall in the colloidal size range (1 to 0.45 $\mu$m) but are concentrated (about 81%) in the range of 1 to 6 $\mu$m in terms of mass (weight) distribution. Large amounts of dissolved iron possibly originated from the dissolution of clinochlore in cataclasite, which contains high amounts of Fe (up to 3 wt.%). The calculation of saturation indices (using the computer code PHREEQC), as well as the examination of pH-Eh stability relations, also indicates that the final precipitates are Fe-oxy-hydroxides formed by the change of water chemistry (mainly oxidation) due to the exposure to oxygen during the pumping-out of Fe(II)-bearing, reduced groundwater. After pumping-out, the groundwater shows progressive decreases of pH, DO, and alkalinity with elapsed time, whereas turbidity increases and then decreases with time. The decrease of dissolved Fe concentration as a function of elapsed time after pumping-out is expressed by the regression equation Fe(II) = 10.1 exp(-0.0009t). The oxidation reaction due to the influx of free oxygen during the pumping and storage of groundwater results in the formation of the brown precipitates, which depends on time, $P_{O_2}$, and pH. In order to obtain drinkable water quality, therefore, the precipitates should be removed by filtering after stepwise storage and aeration in tanks of sufficient volume for a sufficient time. The particle size distribution data also suggest that stepwise filtration would be cost-effective. To minimize scaling within wells, continued (if possible) pumping within the optimum pumping rate is recommended, because this technique is most effective for minimizing the mixing between deep Fe(II)-rich water and shallow $O_2$-rich water. The simultaneous pumping of shallow $O_2$-rich water in different wells is also recommended.
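
The regression quoted above is easy to evaluate directly; note that the abstract does not state the units, so the time and concentration units below are assumptions.

```python
# Evaluate the fitted decay Fe(II) = 10.1 * exp(-0.0009 t) at a few times.
# Units assumed: t in minutes after pumping-out, Fe(II) in mg/L.
import math

def fe2_conc(t: float) -> float:
    return 10.1 * math.exp(-0.0009 * t)

for t in (0, 60, 600, 3600):
    print(f"t = {t}: Fe(II) = {fe2_conc(t):.2f}")
```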


A Structural Relationship among Job Requirements, Job Resources and Job Burnout, and Organizational Effectiveness of Private Security Guards (민간경비원의 직무요구 직무자원과 소진, 조직유효성의 구조적 관계)

  • Kim, Sung-Cheol; Kim, Young-Hyun
    • Korean Security Journal / no.48 / pp.9-33 / 2016
  • The purpose of the present study was to examine the cause-and-effect relationships between job requirements and job resources, with job burnout as a mediating variable, and the effects of these variables on organizational effectiveness. The population of the present study was private security guards employed by 13 private security companies in the Seoul and Gyeonggi-do areas, and a survey was conducted on 500 security guards selected using a purposive sampling technique. Out of 460 questionnaires distributed, 429 responses, excluding 31 outliers or insincere responses, were used for data analysis. The data were coded and entered into SPSS 18.0 and AMOS 18.0 for analysis. Descriptive analyses were performed to identify the sociodemographic characteristics of the respondents. Exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were used to test the validity of the measurement tool, and Cronbach's alpha coefficients were calculated to test its reliability. To determine the significance of relationships among variables, Pearson's correlation analysis was performed. Covariance structure analysis (CSA) was performed to test the relationships among the latent factors of a model for job requirements, job resources, job burnout, and organizational effectiveness of the private security guards, and the fitness of the model was assessed by goodness-of-fit indices ($\chi^2$, df, p, RMR, GFI, CFI, TLI, RMSEA). The level of significance was set at .05, and the following results were obtained. First, even though the effect of job requirements on job burnout was not statistically significant, it was positive overall, suggesting that the higher the members' perception of job requirements, the higher their perception of job burnout. Second, the influence of job resources on job burnout was negative, suggesting that the higher the perception of job resources, the lower the perception of job burnout. Third, even though the influence of job requirements on organizational effectiveness was statistically nonsignificant, it was negative overall, suggesting that the higher the perception of job requirements, the lower the perception of organizational effectiveness. Fourth, job resources had a positive influence on organizational effectiveness: the higher the perception of job resources, the higher the perception of organizational effectiveness. Fifth, even though the influence of job burnout on organizational effectiveness was statistically nonsignificant, it had partial negative influences on sublevels of organizational effectiveness, which may suggest that the higher the members' perception of job burnout, the lower the organizational effectiveness. Sixth, in the analysis of the mediating role in the relationship between job requirements and organizational effectiveness, job burnout played a partial mediating role. These results suggest that by managing job requirements to reduce job burnout, organizational effectiveness, which leads to job satisfaction, organizational commitment, and lower turnover intention, can be maximized. Seventh, in the analysis of the mediating role in the relationships among job requirements, job resources, and organizational effectiveness, job burnout assumed a partial mediating role. These results suggest that organizational effectiveness can be maximized either by lowering job requirements or by managing burnout through the reorganization of job resources.
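
As an illustration of the reliability step mentioned above, the sketch below computes Cronbach's alpha from a made-up item-score matrix; the study itself obtained these coefficients in SPSS 18.0.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows = respondents, columns = scale items."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]])  # made-up data
print(round(cronbach_alpha(scores), 3))
```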


Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul; Shin, Jongmin; Yang, Dongmin
    • Journal of Internet Computing and Services / v.14 no.5 / pp.1-10 / 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags. Active tags have their own power source and can execute operations on their own, whereas passive tags are small and low-cost; passive tags are therefore more suitable for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied to a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, security) should be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. We consider ALOHA-based protocols as a probabilistic method and tree-based protocols as a deterministic one. In ALOHA-based protocols, time is divided into multiple slots; tags randomly select their slots and transmit their IDs in them. However, ALOHA-based protocols cannot guarantee that all tags are identified, because they are probabilistic. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, a reader sends a query and tags respond to it with their own IDs. When a reader sends a query and two or more tags respond, a collision occurs; the reader then makes and sends a new query. Frequent collisions degrade the identification performance, so to identify tags quickly it is necessary to reduce collisions efficiently. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID, and the tags of one company or manufacturer have similar IDs with the same prefix. Unnecessary collisions therefore occur while identifying multiple tags with the Query Tree protocol, resulting in more query-responses and idle time, which significantly increases the identification time. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose an Adaptive M-ary Query Tree protocol that improves the identification performance using m-bit recognition, collision information of tag IDs, and a prediction technique. We compare our proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
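
The sketch below illustrates the binary Query Tree baseline the abstract builds on: the reader splits the query prefix by one bit after every collision. The proposed Adaptive M-ary scheme instead extends the prefix by several bits at once using the collision information; only the baseline is shown here, with illustrative 4-bit IDs instead of 96-bit EPCs.

```python
# Baseline binary Query Tree identification (reader-side simulation).
def query_tree(tag_ids: set[str]) -> list[str]:
    identified, queries, rounds = [], [""], 0
    while queries:
        prefix = queries.pop()
        rounds += 1                          # one query-response round
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:
            identified.append(matching[0])   # single response: tag identified
        elif len(matching) > 1:              # collision: split the prefix
            queries += [prefix + "0", prefix + "1"]
        # len == 0: idle round, nothing to do
    print("query-response rounds:", rounds)
    return identified

print(query_tree({"0010", "0011", "1100"}))  # similar prefixes force collisions
```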

A Study on the Application of Outlier Analysis for Fraud Detection: Focused on Transactions of Auction Exception Agricultural Products (부정 탐지를 위한 이상치 분석 활용방안 연구 : 농수산 상장예외품목 거래를 대상으로)

  • Kim, Dongsung; Kim, Kitae; Kim, Jongwoo; Park, Steve
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.93-108 / 2014
  • To support business decision making, interest in and efforts to analyze and use transaction data from different perspectives are increasing. Such efforts are not limited to customer management or marketing; they are also used for monitoring and detecting fraudulent transactions. Fraudulent transactions evolve into various patterns by taking advantage of information technology. To keep up with this evolution, many efforts are being made on fraud detection methods and advanced application systems that improve the accuracy and ease of fraud detection. As a case of fraud detection, this study aims to provide effective fraud detection methods for auction-exception agricultural products in the largest Korean agricultural wholesale market. The auction-exception products policy exists to complement auction-based trades in the agricultural wholesale market. That is, most trades in agricultural products are performed by auction; however, specific products are designated as auction-exception products when the total volume of the product is relatively small, the number of wholesalers is small, or it is difficult for wholesalers to purchase the product. However, the auction-exception products policy creates several problems for the fairness and transparency of transactions, which calls for fraud detection. In this study, to generate fraud detection rules, real transaction data on agricultural products traded in the market from 2008 to 2010 are analyzed, amounting to more than 1 million transactions and 1 billion US dollars in transaction volume. Agricultural transaction data has unique characteristics such as frequent changes in supply volumes and turbulent time-dependent changes in price. Since this was the first attempt to identify fraudulent transactions in this domain, there was no training data set for supervised learning, so the fraud detection rules are generated using an outlier detection approach. We assume that outlier transactions are more likely to be fraudulent than normal transactions. Outlier transactions are identified by comparing the daily, weekly, and quarterly average unit prices of product items. Quarterly average unit prices of the product items of specific wholesalers are also used to identify outlier transactions. The reliability of the generated fraud detection rules was confirmed by domain experts. To determine whether a transaction is fraudulent or not, the normal distribution and the normalized Z-value concept are applied. That is, the unit price of a transaction is transformed into a Z-value to calculate its occurrence probability when we approximate the distribution of unit prices by a normal distribution. The modified Z-value of the unit price is used rather than the original Z-value, because in the case of auction-exception agricultural products, Z-values are influenced by the outlier fraud transactions themselves, given the small number of wholesalers. The modified Z-values are called Self-Eliminated Z-scores because they are calculated excluding the unit price of the specific transaction that is being checked. To show the usefulness of the proposed approach, a prototype fraud transaction detection system was developed using Delphi. The system consists of five main menus and related submenus. The first functionality of the system is to import transaction databases; the next important functions are to set up the fraud detection parameters. By changing the fraud detection parameters, system users can control the number of potential fraud transactions. Execution functions provide the fraud detection results found under the given parameters, and the potential fraud transactions can be viewed on screen or exported as files. This study is an initial attempt to identify fraudulent transactions in auction-exception agricultural products, and many research topics remain. First, the scope of the analyzed data was limited by data availability; it is necessary to include more data on transactions, wholesalers, and producers to detect fraudulent transactions more accurately. Next, the scope of fraud detection needs to be extended to fishery products. There are also many possibilities for applying other data mining techniques; for example, a time series approach is a potential technique to apply to the problem. Finally, although outlier transactions are detected based on the unit prices of transactions, it is also possible to derive fraud detection rules based on transaction volumes.
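
The Self-Eliminated Z-score idea described above can be sketched in a few lines: each unit price is standardized against the mean and standard deviation of the other transactions, so that an extreme price cannot mask itself. The prices and the 3-sigma cutoff here are illustrative only.

```python
# Flag outliers with leave-one-out ("self-eliminated") Z-scores.
import statistics

def self_eliminated_z(prices: list[float], i: int) -> float:
    """Z-score of prices[i] against the mean/std of all other prices."""
    others = prices[:i] + prices[i + 1:]
    mu, sd = statistics.mean(others), statistics.stdev(others)
    return (prices[i] - mu) / sd

prices = [102.0, 99.5, 101.2, 100.4, 250.0]   # the last unit price is suspicious
flags = [i for i in range(len(prices)) if abs(self_eliminated_z(prices, i)) > 3]
print(flags)  # -> [4]: index of the potential fraud transaction
```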