• Title/Summary/Keyword: Possibility


NFC-based Smartwork Service Model Design (NFC 기반의 스마트워크 서비스 모델 설계)

  • Park, Arum;Kang, Min Su;Jun, Jungho;Lee, Kyoung Jun
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.157-175, 2013
  • Since the Korean government announced its 'Smartwork promotion strategy' in 2010, Korean firms and government organizations have started to adopt smartwork. However, smartwork has been implemented mainly in large enterprises and government organizations rather than in SMEs (small and medium enterprises). In the USA, both Yahoo! and Best Buy have stopped their flexible work programs because of reported low productivity and job-loafing problems. From the literature on smartwork, we identified obstacles to smartwork adoption and categorized them into three types: institutional, organizational, and technological. The first category, institutional obstacles, includes the difficulty of defining smartwork performance evaluation metrics, the lack of readiness of organizational processes, the limited range of smartwork types and models, the lack of employee participation in the adoption procedure, the high cost of building a smartwork system, and insufficient government support. The second category, organizational obstacles, includes the limitations of organizational hierarchy, misperceptions among employees and employers, difficulty in close collaboration, low productivity with remote coworkers, insufficient understanding of remote working, and lack of training about smartwork. The third category, technological obstacles, includes security concerns about mobile work, the lack of specialized solutions, and the lack of adoption and operation know-how. To overcome both the problems of smartwork observed in practice and the obstacles reported in the literature, we suggest a novel smartwork service model based on NFC (Near Field Communication). This paper proposes an NFC-based Smartwork Service Model composed of an NFC-based smartworker networking service and an NFC-based smartwork space management service. The NFC-based smartworker networking service comprises an NFC-based communication/SNS service and an NFC-based recruiting/job-seeking service.
The NFC-based communication/SNS service model addresses key shortcomings of existing smartwork service models. By connecting to a company's existing legacy systems through NFC tags, it helps overcome low productivity and the difficulty of collaboration and attendance management: managers can obtain employees' work-processing, work-time, and work-space information, while employees can communicate with coworkers in real time and obtain their location information. In short, this service model offers an affordable system cost, location-based information, and the possibility of knowledge accumulation. The NFC-based recruiting/job-seeking service provides new value by linking the NFC tag service with sharing-economy sites. It features easy attachment and removal of the service, efficient space-based work provision, easy search of location-based recruiting and job-seeking information, and system flexibility, combining the advantages of sharing-economy sites with those of NFC. Through cooperation with sharing-economy sites, the model can supply recruiters with candidates seeking not only long-term but also short-term work. Additionally, SMEs (small and medium-sized enterprises) can easily find job seekers by attaching NFC tags to any space where qualified candidates may be located. In short, this service model supports efficient distribution of human resources by providing the locations of jobs and job seekers. The NFC-based smartwork space management service can promote smartwork by linking NFC tags attached to work spaces with existing smartwork systems. It features low cost, indoor and outdoor location information, and customized service. In particular, it can help small companies adopt smartwork because it is lightweight and cost-effective compared to existing smartwork systems.
This paper presents scenarios for the service models, the roles and incentives of the participants, and a comparative analysis. The superiority of the NFC-based smartwork service model is shown by comparing the new service models with existing ones. The service model can expand the range of enterprises and organizations that adopt smartwork, and the range of employees who benefit from it.
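The space management service described above links NFC tags in a workspace to the company's existing systems. A minimal sketch of that tag-scan flow is shown below; every tag ID, space name, and function here is illustrative and assumed, not taken from the paper.

```python
import datetime

# Hypothetical mapping from an NFC tag UID to a registered smartwork space.
TAG_TO_SPACE = {"04:A3:1B:7F": "shared-office-seoul-3F",
                "04:9C:22:D1": "cafe-workspace-12"}

events = []  # in a real deployment this would feed the legacy HR/attendance system

def on_tag_scanned(tag_id, employee_id):
    """Record a workspace check-in when an employee taps a space's NFC tag."""
    space = TAG_TO_SPACE.get(tag_id)
    if space is None:
        return None  # unknown tag: not part of the smartwork space network
    event = {
        "employee": employee_id,
        "space": space,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
    }
    events.append(event)
    return event

event = on_tag_scanned("04:A3:1B:7F", "emp-1042")
```

The point of the sketch is only that a single tag tap yields, at once, the attendance, time, and location information that the model says managers currently lack.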

A Study on the Improvement of Recommendation Accuracy by Using Category Association Rule Mining (카테고리 연관 규칙 마이닝을 활용한 추천 정확도 향상 기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.27-42, 2020
  • Traditional companies with offline stores were unable to secure large display space due to cost. This limitation inevitably meant that only a limited variety of products could be displayed on the shelves, depriving consumers of the opportunity to experience various items. Taking advantage of the virtual space of the Internet, online shopping goes beyond the physical space limitations of offline shopping and can now display numerous products on web pages, satisfying consumers with a wide variety of needs. Paradoxically, however, this can also make it difficult for consumers to compare and evaluate too many alternatives in their purchase decision-making process. As an effort to address this side effect, various kinds of purchase decision support systems have been studied, such as keyword-based item search services and recommender systems. These systems can reduce item search time, prevent consumers from leaving while browsing, and contribute to increased sales for sellers. Among these systems, recommender systems based on association rule mining can effectively detect interrelated products from transaction data such as orders. The associations between products obtained by statistical analysis provide clues for predicting how interested consumers will be in another product. However, since the algorithm is based on transaction counts, products that have not yet sold enough in the early days after launch may be excluded from the recommendation list even though they are highly likely to sell. Such missing items may not get sufficient exposure to consumers to record sufficient sales, and then fall into a vicious cycle of declining sales and omission from the recommendation list.
This situation is an inevitable outcome when recommendations are based on past transaction histories rather than on potential future sales. This study started from the idea that indirectly identifying this potential would help select products worth recommending. In light of the fact that a product's attributes affect consumers' purchasing decisions, this study reflects them in the recommender system: consumers who visit a product page have shown interest in the attributes of that product and would also be interested in other products with the same attributes. On this assumption, the recommender system can select recommended products with a higher expected acceptance rate based on these attributes. Given that a category is one of the main attributes of a product, it can be a good indicator not only of direct associations between two items but also of potential associations that have yet to be revealed. Based on this idea, the study devised a recommender system that reflects associations not only between products but also between categories. Through regression analysis, the two kinds of association were combined into a model that predicts the hit rate of a recommendation. To evaluate the performance of the proposed model, another regression model was developed based only on associations between products. The comparative experiments were designed to resemble the environment in which products are actually recommended in online shopping malls. First, association rules for all possible combinations of antecedent and consequent items were generated from the order data. Then the hit rate for each association rule was predicted from the support and confidence calculated by each model.
The comparative experiments, using order data collected from an online shopping mall, show that recommendation accuracy can be improved by reflecting not only the associations between products but also those between categories when recommending related products. The proposed model showed a 2 to 3 percent improvement in hit rate compared to the existing model. From a practical point of view, it is expected to improve consumers' purchasing satisfaction and increase sellers' sales.
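The pipeline above rests on two standard association-rule statistics, support and confidence, computed once over item pairs and once over category pairs, and then blended. The sketch below illustrates those computations on a made-up order list; the blend weights and all product names are assumptions, and the paper fits its weights by regression rather than fixing them.

```python
from collections import Counter
from itertools import permutations

# Hypothetical order data: each transaction is a list of (item, category) pairs.
orders = [
    [("shampoo", "hair"), ("conditioner", "hair")],
    [("shampoo", "hair"), ("toothpaste", "oral")],
    [("conditioner", "hair"), ("hair_oil", "hair")],
    [("shampoo", "hair"), ("conditioner", "hair"), ("hair_oil", "hair")],
]

def rule_metrics(transactions):
    """Support and confidence for every ordered antecedent -> consequent pair."""
    n = len(transactions)
    single, pair = Counter(), Counter()
    for t in transactions:
        items = set(t)                      # de-duplicate within a transaction
        single.update(items)
        pair.update(permutations(items, 2))  # ordered pairs: (antecedent, consequent)
    return {(a, c): {"support": pair[(a, c)] / n,
                     "confidence": pair[(a, c)] / single[a]}
            for (a, c) in pair}

item_rules = rule_metrics([[i for i, _ in t] for t in orders])
category_rules = rule_metrics([[c for _, c in t] for t in orders])

# Score a candidate recommendation by blending both levels; in the paper the
# combination weights come from a regression on observed hit rates.
def hit_score(item_pair, cat_pair, w=(0.5, 0.5)):
    ic = item_rules.get(item_pair, {"confidence": 0.0})["confidence"]
    cc = category_rules.get(cat_pair, {"confidence": 0.0})["confidence"]
    return w[0] * ic + w[1] * cc

score = hit_score(("shampoo", "conditioner"), ("hair", "hair"))
```

The category-level rules are what let a new, rarely sold item receive a nonzero score: even with no item-level transaction history, its category pair may already have strong support.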

A Review of Personal Radiation Dose per Radiological Technologists Working at General Hospitals (전국 종합병원 방사선사의 개인피폭선량에 대한 고찰)

  • Jung, Hong-Ryang;Lim, Cheong-Hwan;Lee, Man-Koo
    • Journal of radiological science and technology, v.28 no.2, pp.137-144, 2005
  • To determine the personal radiation doses of radiological technologists, a survey was conducted of 623 radiological technologists working at 44 general hospitals in Korea's 16 cities and provinces from 1998 to 2002. A total of 2,624 personal radiation dose records were collected and analyzed by region, year and hospital, with the following results: 1. The average radiation dose per capita over the 5 years was 1.61 mSv. By region, Daegu showed the highest dose (4.74 mSv), followed by Gangwon (4.65 mSv) and Gyeonggi (2.15 mSv); the lowest doses were recorded in Chungbuk (0.91 mSv), Jeju (0.94 mSv) and Busan (0.97 mSv). By year, 2000 showed the highest dose (1.80 mSv), followed by 2002 (1.77 mSv), 1999 (1.55 mSv), 2001 (1.50 mSv) and 1998 (1.36 mSv). 2. In 1998, Gangwon had the highest dose per capita (3.28 mSv), followed by Gwangju (2.51 mSv) and Daejeon (2.25 mSv), while Jeju (0.86 mSv) and Chungbuk (0.85 mSv) remained below 1.0 mSv. In 1999, Gangwon again topped the list with 5.67 mSv, followed by Daegu (4.35 mSv) and Gyeonggi (2.48 mSv); in the same year, doses stayed below 1.0 mSv in Ulsan (0.98 mSv), Gyeongbuk (0.95 mSv) and Jeju (0.91 mSv). 3. In 2000, Gangwon was again at the top with 5.73 mSv. Ulsan, which had recorded less than 1.0 mSv in both 1998 and 1999, rose sharply to 5.20 mSv, while Chungbuk remained below 1.0 mSv at 0.79 mSv. 4. In 2001, Daegu recorded the highest dose of the entire 5-year period (9.05 mSv), followed by Gangwon (4.01 mSv); regions below 1.0 mSv included Gyeongbuk (0.99 mSv) and Jeonbuk (0.92 mSv). In 2002, Gangwon again led the list with 4.65 mSv, while Incheon (0.88 mSv), Jeonbuk (0.96 mSv) and Jeju (0.68 mSv) remained below 1.0 mSv. 5. By hospital, KMH in Daegu showed the highest average dose over the 5 years (6.82 mSv), followed by GAH in Gangwon (5.88 mSv) and CAH in Seoul (3.66 mSv). YSH in Jeonnam had the lowest dose (0.36 mSv), followed by GNH in Gyeongnam (0.39 mSv) and DKH in Chungnam (0.51 mSv). The present study is limited in that it focuses on radiological technologists working at tertiary referral hospitals, which are considered stable in terms of working conditions, while those working at small hospitals were excluded from the survey. In addition, some surveyed hospitals had been established for less than 5 years, and some surveyed technologists had worked at a hospital for less than 5 years. Nor can we exclude the possibility that the differences in personal average radiation dose by region, hospital and year are attributable to differing working conditions and facilities among medical institutions. It therefore seems desirable to develop standardized instruments to measure the working environment objectively, and to devise methods for comparing and analyzing doses by region and hospital more accurately in the future.


Studies on the Estimation of Growth Pattern Cut-up Parts in Four Broiler Strain in Growing Body Weight (육용계에 있어서 계통간 산육능력 및 체중증가에 따른 각 부위별 증가양상 추정에 관한 연구)

  • 양봉국;조병욱
    • Korean Journal of Poultry Science, v.17 no.3, pp.141-156, 1990
  • The experiments were conducted to investigate the possibility of improving the effectiveness of the existing method of estimating edible meat weight in live broiler chickens. A total of 360 birds, five male and five female chicks from each line, were sacrificed at Trial 1 (body weight 900-1,000 g), Trial 2 (1,200-1,400 g), Trial 3 (1,600-1,700 g), and Trial 4 (2,000 g) in order to measure body weight, the edible meat weight of the breast, thighs and drumsticks, and various components of body weight. Each line was reared at the Poultry Breeding Farm, Seoul National University, from July 2 to September 13, 1987. The results of this study are summarized as follows: 1. The average body weights of the lines (H, T, M, A) at 7 weeks of age were 2,150.5±34.9 g, 2,133.0±26.2 g, 1,960.0±23.1 g, and 2,319.3±27.9 g, respectively. The feed-to-body-weight-gain ratios for the lines were 2.55, 2.13, 2.08, and 2.03, respectively, for 0 to 7 weeks of age, and viability was 99.7, 99.7, 100.0, and 100.0%, respectively. It was noted that line A chicks grew significantly heavier than line T, H and M chicks from 0 to 7 weeks of age. The regression coefficients of the growth curves for the lines were bA=1.015, bH=0.265, bM=0.950 and bT=0.242, respectively. 2. Among the body weight components, the feather, abdominal fat, breast, and thigh-and-drumstick weights increased in percentage as the birds grew older, while the neck, head, giblets and inedible viscera decreased. No difference was apparent in the shanks, wings and back. 3. The weight percentages of breast in the edible part at Trial 4 were 19.2, 19.0, 19.9 and 19.0% for the respective lines; those of thighs and drumsticks were 23.1, 23.3, 22.8, and 23.0%. 4. The percentage meat yields from the breast at Trial 4 were 77.2, 78.9, 73.5 and 74.8% in the H, T, M and A lines, respectively; for thighs and drumsticks, the values were 80.3, 78.4, 79.7 and 80.2%. These data indicate that the percentage meat yield increases as the birds grow older. 5. The correlation coefficients between body weight and blood, head, shanks, breast, and thigh-drumstick weights were high. The correlations between abdominal fat (%) and percentage of edible meat were extremely low at all times, but those between abdominal fat (%) and inedible viscera were significantly high.


Radiotherapy in Incompletely Resected Gastric Cancers (불완전 절제된 위암의 방사선 치료)

  • Kim Jong Hoon;Choi Eun Kyung;Cho Jung Gil;Kim Byung Sik;Oh Sung Tae;Kim Dong Kwan;Chang Hyesook
    • Radiation Oncology Journal, v.16 no.1, pp.17-25, 1998
  • Purpose: Although local recurrence rates of stomach cancer after radical surgery have been reported in the range of 30-70%, the role of postoperative adjuvant therapy has not been established. We report the results of radiotherapy in resected stomach cancer with positive surgical margins to elucidate the role of postoperative radiotherapy. Materials and Methods: From June 1991 to August 1996, twenty-five patients with positive surgical margins after radical gastrectomy were treated with postoperative radiotherapy and chemotherapy. The median radiation dose was 55.8 Gy (range 44.6-59.4 Gy). The second cycle of chemotherapy was delivered concurrently with radiation, and a total of six cycles were delivered. Twenty-three patients had adenocarcinoma and the other two had leiomyosarcoma. The numbers of patients with stages IB, II, IIIA, IIIB, and IV were 1, 2, 11, 10 and 1, respectively. Positive margins were at the distal end of the stomach in 17 patients and at the proximal end in 5; the other three patients had positive margins at sites of adjacent organ invasion. The minimum and median follow-up periods were 12 months and 18 months, respectively. Results: Twenty-four of the 25 patients received the prescribed radiation dose. RTOG grade 3 toxicity of the upper GI tract was observed in 3 patients, all of whom lost more than 15% of their pretreatment weight; hematemesis, melena, intestinal obstruction and grade 4 toxicity were not observed. Locoregional failure within the radiation field was observed in 7 patients, and distant metastasis in 10. Sites of locoregional recurrence were the anastomosis/remnant stomach in 3, the tumor bed/duodenal stump in 3, and regional lymph nodes in 1 patient; peritoneal seeding occurred in 6, liver metastases in 2, and distant nodes in 2 patients. The four-year disease-specific survival rate was 40% and disease-free survival was 48%. Median survival was 35 months and median disease-free survival was 26 months. Stage and radiation dose were not significant prognostic factors for locoregional failure. Conclusion: Although all patients in this study had positive surgical margins, the locoregional failure rate was 28% and the 4-year disease-specific survival rate was 40%. Considering the small number of patients and the relatively short follow-up period, it is not certain that postoperative radiotherapy lowered locoregional recurrence, but we found a possible role for postoperative radiotherapy in patients with high risk factors.


The Monitoring on Plasticizers and Heavy Metals in Teabags (침출용 티백 포장재의 안전성에 관한 연구)

  • Eom, Mi-Ok;Kwak, In-Shin;Kang, Kil-Jin;Jeon, Dae-Hoon;Kim, Hyung-Il;Sung, Jun-Hyun;Choi, Hee-Jung;Lee, Young-Ja
    • Journal of Food Hygiene and Safety, v.21 no.4, pp.231-237, 2006
  • Nowadays teabags are used worldwide for various products including green tea, tea, and coffee, since they are convenient to use. When the outer packaging is printed, however, plasticizers used to improve the adhesiveness of the printing ink may migrate to the inner teabag. In this study, to monitor residual levels of plasticizers in teabags, we established a simultaneous analysis method for 9 phthalate and 7 adipate plasticizers using gas chromatography (GC). These compounds were also confirmed by gas chromatography-mass spectrometry (GC-MSD). The recoveries of the plasticizers analyzed by GC ranged from 82.7% to 104.6%, with coefficients of variation of 0.6-2.7%, and the correlation coefficients for each plasticizer were 0.9991-0.9999; the simultaneous analysis method thus showed excellent reproducibility and linearity. The limits of detection (LOD) and quantitation (LOQ) for the individual plasticizers were 0.1-3.5 ppm and 0.3-11.5 ppm, respectively. When 143 commercial teabag products were monitored, none of the plasticizers analyzed was detected in the teabag filters. Migration into 95°C water, as a food simulant, was also examined, and the 16 plasticizers were not detected. In addition, we analyzed the heavy metals lead (Pb), cadmium (Cd), arsenic (As) and aluminum (Al) in teabag filters using ICP/AES. Trace-23 μg of Pb and 0.6-1718 μg of Al per teabag were detected in the sample materials, while Cd and As were below the LOQ (0.05 ppm). The migration of Pb and Al from the teabag filters into 95°C water was up to 11.5 μg and 20.8 μg per teabag, respectively, and Cd and As were not detected in the extract water of any sample. Collectively, these results suggest that there is no safety concern in using teabag filters.
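The abstract reports per-compound LOD and LOQ values without stating how they were derived. A common calibration-based approach (the ICH convention, assumed here, not confirmed by the paper) takes LOD = 3.3·σ/S and LOQ = 10·σ/S, where S is the calibration slope and σ the residual standard deviation of the linear fit. The numbers below are illustrative only.

```python
import numpy as np

# Illustrative calibration data for one plasticizer (not the paper's data).
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0])        # standard concentration, ppm
peak = np.array([12.1, 24.3, 47.8, 121.0, 239.5])  # GC detector response

slope, intercept = np.polyfit(conc, peak, 1)   # linear calibration curve
resid = peak - (slope * conc + intercept)
sigma = resid.std(ddof=2)                      # residual std dev (n - 2 dof)

lod = 3.3 * sigma / slope   # limit of detection, ppm
loq = 10 * sigma / slope    # limit of quantitation, ppm
```

Since LOD and LOQ share the same σ/S factor, LOQ is always about three times the LOD, consistent with the ranges the paper reports.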

Excavation of Kim Jeong-gi and Korean Archeology (창산 김정기의 유적조사와 한국고고학)

  • Lee, Ju-heun
    • Korean Journal of Heritage: History & Science, v.50 no.4, pp.4-19, 2017
  • Kim Jeong-gi (pen-name: Changsan, Mar. 31, 1930 - Aug. 26, 2015) made a major breakthrough in the history of cultural property excavation in Korea: In 1959, he began to develop an interest in cultural heritage after starting work as an employee of the National Museum of Korea. For about thirty years until he retired from the National Research Institute of Cultural Heritage in 1987, he devoted his life to the excavation of our country's historical relics and artifacts and compiled countless data about them. He continued striving to identify the unique value and meaning of our cultural heritage in universities and excavation organizations until he passed away in 2015. Changsan spearheaded all of Korea's monumental archeological excavations and research. He is widely known at home and abroad as a scholar of Korean archeology, particularly in the early years of its existence as an academic discipline. As such, he has had a considerable influence on the development of Korean archeology. Although his multiple activities and roles are meaningful in terms of the country's archaeological history, there are limits to his contributions nevertheless. The Deoksugung Palace period (1955-1972), when the National Museum of Korea was situated in Deoksugung Palace, is considered to be a time of great significance for Korean archeology, as relics with diverse characteristics were researched during this period. Changsan actively participated in archeological surveys of prehistoric shell mounds and dwellings, conducted surveys of historical relics, measured many historical sites, and took charge of photographing and drawing such relics. He put to good use all the excavation techniques that he had learned in Japan, while his countrywide archaeological surveys are highly regarded in terms of academic history as well. 
What particularly sets his perspectives apart in archaeological terms is the fact that he raised the possibility of underwater tombs in ancient times, and also coined the term "Haemi Culture" as part of a theory of local culture aimed at furthering the understanding of Bronze Age cultures in Korea. His insight was remarkable. In 1969, the National Research Institute of Cultural Heritage (NRICH) was founded and Changsan was appointed as its head. Despite the many difficulties he faced in running the institute with limited financial and human resources, he gave everything he had to research and field studies of the brilliant cultural heritage that Korea has preserved for so long. Changsan succeeded in restoring Bulguksa Temple, and followed this up with the successful excavation of the Cheonmachong Tomb and the Hwangnamdaechong Tomb in Gyeongju. He then explored the Hwangnyongsa Temple site, Bunhwangsa Temple, and the Mireuksa Temple site in order to systematically evaluate the Buddhist culture and structures of the Three Kingdoms Period. We can safely say that the large excavation projects that he organized and carried out at that time not only laid the foundations of Korean archeology but also made significant contributions to studies in related fields. Above all, in terms of the developmental process of Korean archeology, the achievements born of his exceptional passion during this period are almost too numerous to mention, but they include his systematization of various excavation methods, cultivation of archaeologists, popularization of archeological excavations, formalization of survey records, and promotion of data disclosure. On the other hand, although this "Excavation King" devoted himself to excavations, kept precise records, and paid keen attention to every detail, he failed to overcome the limitations of his era in the process of defining the nature of cultural remains and interpreting historical sites and structures.
Despite his many roles in Korean archeology, the fact that he left behind a controversy over the identity of the occupant of the Hwangnamdaechong Tomb remains a sore spot in his otherwise perfect reputation.

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.39-55, 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor produces an optimal asset-allocation portfolio for investors using financial-engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms present asset-allocation output to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model: a simple but intuitive portfolio strategy in which assets are allocated to minimize portfolio risk while maximizing expected portfolio return using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point: the implied equilibrium return of each asset is derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect?
Surveys of the performance of stocks recommended by securities analysts show very poor results, so incorrect views combined with implied equilibrium returns may produce very poor portfolio output for Black-Litterman users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock-price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial-basis and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastic %K and price-parity degree of each asset class. The SVM outputs expected stock-price movements and their probabilities, which are used as input variables in the intelligent views model. The stock-price movements are categorized into three phases: down, neutral and up. The expected stock returns form the P matrix and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk-parity model are used, with the value-weighted and equal-weighted market portfolios as benchmark indexes. We collected 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent view model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. In the out-of-sample period, this portfolio outperformed the well-known Markowitz mean-variance optimization portfolio, the risk-parity portfolio and the market portfolios. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value.
The maximum drawdown was -20.8%, also the best (smallest) value, and the Sharpe ratio, which measures return per unit of risk, was the highest at 0.17. Overall, our suggested view model shows the possibility of replacing subjective analysts' views with an objective view model, enabling practitioners to apply Robo-Advisor asset-allocation algorithms in real trading.
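The Black-Litterman combination step described above can be sketched in a few lines of numpy. This is a generic textbook formulation with made-up numbers, not the paper's data or its exact P/Q construction: one relative view (asset 1 outperforms asset 2 by 2%) stands in for the SVM view model's output, and the view-uncertainty matrix Ω is set proportionally to τΣ, a common simplification.

```python
import numpy as np

# Illustrative 3-asset universe (covariances and weights are assumptions).
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])   # asset covariance matrix
w_mkt = np.array([0.5, 0.3, 0.2])        # market-cap weights
delta = 2.5                              # risk-aversion coefficient

pi = delta * Sigma @ w_mkt               # reverse-optimized equilibrium returns

tau = 0.05
P = np.array([[1.0, -1.0, 0.0]])         # one view: asset 1 outperforms asset 2
Q = np.array([0.02])                     # ... by 2% (would come from the view model)
Omega = P @ (tau * Sigma) @ P.T          # view uncertainty, proportional to tau*Sigma

# Black-Litterman posterior expected returns:
# mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 Q]
inv_ts = np.linalg.inv(tau * Sigma)
inv_om = np.linalg.inv(Omega)
mu_bl = np.linalg.inv(inv_ts + P.T @ inv_om @ P) @ (inv_ts @ pi + P.T @ inv_om @ Q)

# The posterior feeds the usual unconstrained mean-variance weights.
w_opt = np.linalg.inv(delta * Sigma) @ mu_bl
```

With no views (P, Q empty), the posterior collapses to the equilibrium returns π and the optimization returns the market portfolio, matching the property stated in the abstract.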

Construction and Application of Intelligent Decision Support System through Defense Ontology - Application example of Air Force Logistics Situation Management System (국방 온톨로지를 통한 지능형 의사결정지원시스템 구축 및 활용 - 공군 군수상황관리체계 적용 사례)

  • Jo, Wongi;Kim, Hak-Jin
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.77-97, 2019
  • The large amount of data that emerges from the hyper-connected environment of the Fourth Industrial Revolution is a major factor distinguishing it from the existing production environment. This environment has the two-sided feature of producing data while using it, and the data so produced generates further value. Because of this massive scale, future information systems need to process more data, in terms of quantity, than existing systems; in terms of quality, they must also be able to process that data accurately, not merely in bulk. In a small information system, a person can understand the system accurately and obtain the necessary information, but in complex systems that are difficult to understand accurately, acquiring the desired information becomes increasingly hard. In other words, more accurate processing of large amounts of data has become a basic requirement for future information systems. This problem of efficient information-system performance can be addressed by building a semantic web, which enables various kinds of information processing by expressing the collected data as an ontology that can be understood not only by people but also by computers. As in most other organizations, IT has been introduced in the military, and most work is now done through information systems. As existing systems come to contain increasingly large amounts of data, efforts are needed to make them easier to use through better data utilization. An ontology-based system forms a large semantic network of data through connections with other systems, offers a wide range of usable databases, and has the advantage of searching more precisely and quickly through the relationships between predefined concepts.
In this paper, we propose a defense ontology as a method for effective data management and decision support. To judge its applicability and effectiveness in an actual system, we reconstructed the existing Air Force logistics situation management system as an ontology-based system. That system was built to strengthen commanders' and practitioners' management and control of the logistics situation by providing real-time information on maintenance and distribution, as the complicated logistics information system with its large amount of data had become difficult to use. However, it merely takes pre-specified information from the existing logistics system and displays it as web pages: nothing beyond the few pre-specified items can be checked, extending it with additional functions is time-consuming, and it is organized by category without a search function. It therefore has the disadvantage of being easy to use only for those who, as with the existing system, already know it well. The ontology-based logistics situation management system is designed to provide intuitive visualization of the complex information in the existing logistics information system through the ontology. To construct it, useful functions such as performance-based logistics support contract management and a component dictionary were additionally identified and included in the ontology. To confirm that the constructed ontology can support decision-making, meaningful analysis functions such as calculating aircraft utilization rates and querying performance-based logistics contracts need to be implemented.
In particular, unlike past ontology studies that built ontology databases from static data, this study represents time-series data whose values change over time, such as the daily state of each aircraft, in the ontology, and confirms that the constructed ontology can compute utilization rates not only in the standard way but also according to various other criteria. In addition, data related to performance-based logistics contracts, introduced as a new maintenance method for aircraft and other munitions, can be queried in various ways, and the performance indexes used in such contracts are easy to calculate through reasoning and built-in functions. We further propose a new performance index that complements the limitations of the currently applied indicators and calculate it through the ontology, confirming the usability of the constructed ontology. Finally, the failure rate and reliability of each component can be calculated from MTBF data for selected failure-prone items based on actual part consumption, and from these the mission reliability and system reliability are derived. To confirm the usability of the constructed ontology-based logistics situation management system, we show through the Technology Acceptance Model (TAM), a representative model for measuring the acceptability of a technology, that the proposed system is more useful and convenient than the existing one.
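The reliability calculations mentioned above can be sketched under the standard constant-failure-rate (exponential) model, where R(t) = exp(-t / MTBF) and series-system reliability is the product of component reliabilities. The component names and MTBF values below are illustrative assumptions, not the system's actual data:

```python
import math

# Illustrative MTBF values in flight hours; not actual system data.
mtbf = {"radar": 500.0, "hydraulic_pump": 800.0, "generator": 1200.0}

def reliability(mtbf_hours, t):
    """R(t) = exp(-t / MTBF): probability of surviving t hours,
    assuming a constant failure rate (exponential model)."""
    return math.exp(-t / mtbf_hours)

mission_time = 10.0  # hours

# Per-component reliability for one mission.
r = {name: reliability(m, mission_time) for name, m in mtbf.items()}

# For components in series, mission/system reliability is the product.
system_r = math.prod(r.values())
print({k: round(v, 4) for k, v in r.items()}, round(system_r, 4))
```

The failure rate is simply 1 / MTBF under this model; more refined analyses (e.g., Weibull models for wear-out) would replace the exponential assumption.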

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. Amid this flood of information, search systems increasingly try to reflect the user's intention in the results rather than treating the information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because it constantly generates new information, and the newer the information, the more valuable it is. Automatic knowledge extraction can be effective in such areas, where the flow of information is vast and new information continuously emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and hard to extract good-quality triples. Second, producing labeled text data by hand becomes more difficult as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their quality. Unlike previous studies, the purpose here is to extract knowledge entities related to individual stock items. 
The presented model applies various but relatively simple data-processing methods to solve the problems of previous research and enhance its effectiveness. The study thus makes three contributions. First, it presents a practical, simple automatic knowledge-extraction method that can be applied in practice. Second, it makes performance evaluation possible through a simple problem definition. Finally, it increases the expressiveness of the extracted knowledge by generating input data on a sentence basis without complex morphological analysis. An empirical analysis and an objective performance-evaluation method are also presented. For the empirical study, we use experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. When a new entity from the testing set appears, its score is calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we measure its predictive power and check whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. 
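The scoring step above can be sketched with the standard neural tensor network form, g(e1, e2) = uᵀ tanh(e1ᵀ W[1:k] e2 + V[e1; e2] + b) (Socher et al., 2013). The dimensions and the randomly initialized (untrained) parameters below are illustrative assumptions; exactly how the paper wires entities into its per-stock score functions may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 4  # one-hot entity dimension, number of tensor slices (illustrative)

# Randomly initialized parameters of one per-stock score function,
# following the standard NTN form; the paper's trained weights are
# of course not reproduced here.
W = rng.normal(scale=0.1, size=(k, d, d))   # bilinear tensor slices
V = rng.normal(scale=0.1, size=(k, 2 * d))  # linear layer
b = np.zeros(k)                             # bias
u = rng.normal(scale=0.1, size=k)           # output weights

def ntn_score(e1, e2):
    """g(e1, e2) = u.T @ tanh(e1.T W[1:k] e2 + V [e1; e2] + b)."""
    bilinear = np.array([e1 @ W[i] @ e2 for i in range(k)])
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

# One-hot entity vectors, matching the paper's top-100 entity encoding.
e_new = np.eye(d)[7]     # a newly observed entity
e_known = np.eye(d)[42]  # a known entity of some stock
print(ntn_score(e_new, e_known))
```

Training would adjust W, V, b, and u per stock so that entities related to that stock score higher than random entity pairs.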
In the empirical study, the presented model achieves a 69.3% hit ratio on the testing set of 2,526 reports, which is meaningfully high given the constraints of the research. Looking at prediction performance by stock, only three stocks, LG Electronics, KiaMtr, and Mando, perform far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology for finding the key entities, or combinations of entities, needed to search for information relevant to a user's investment intention. Graph data is generated using only a named-entity recognition tool and fed to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the presented model as described above. Some limits and points to complement remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
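The prediction and hit-ratio evaluation described above reduce to an argmax over per-stock score functions. A toy sketch, with made-up stock names, entities, and scores standing in for the trained NTN score functions:

```python
# Fixed score tables stand in for trained per-stock NTN score functions;
# all names and numbers are illustrative, not the paper's data.
score_fns = {
    "StockA": lambda e: {"battery": 0.9, "display": 0.2}.get(e, 0.0),
    "StockB": lambda e: {"display": 0.8, "battery": 0.1}.get(e, 0.0),
}

def predict(entity):
    """Predict the stock whose score function gives the highest score."""
    return max(score_fns, key=lambda s: score_fns[s](entity))

# Hit ratio: fraction of test entities whose predicted stock matches
# the stock of the report the entity came from.
test_set = [("battery", "StockA"), ("display", "StockB"), ("display", "StockA")]
hits = sum(predict(e) == label for e, label in test_set)
hit_ratio = hits / len(test_set)
print(predict("battery"), round(hit_ratio, 3))  # → StockA 0.667
```

The paper's 69.3% figure is this hit ratio computed over the 2,526 testing-set reports with the trained score functions.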