• Title/Summary/Keyword: Real number system

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on the chart itself rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is a difficult technique, and it has been computerized far less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques from the field of artificial intelligence (AI), including neural networks. In particular, advances in IT have made it easier to analyze huge volumes of chart data to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so these methods are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but such approaches can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. When these studies find a meaningful pattern, they locate a point that matches it and then measure performance n days later, assuming a purchase at that point in time. Since this approach calculates virtual returns, it can diverge considerably from reality. Whereas existing research tries to find patterns with price-prediction power, this study proposes to define the patterns first and to trade when a pattern with a high probability of success appears. The M & W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Despite reports that some of these patterns have price predictability, no performance results from the actual market have been reported. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be implemented easily in a system, and only the one pattern with the highest success rate in each group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a real situation because it assumes that both the buy and the sell orders were actually executed. We tested three ways of calculating turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low-line zig-zag method, a high price that touches the n-day high line is taken as a peak, and a low price that touches the n-day low line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley (a sketch of this rule follows below). The swing wave method was superior to the other methods in our tests; we interpret this to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still incomplete.
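The swing wave rule lends itself to a direct implementation. Below is a minimal sketch, not the authors' code: the function name, inputs, and the value of n are illustrative assumptions. It marks a bar as a peak or valley by comparing it with the n bars on each side, exactly as described above.

```python
# Hypothetical sketch of the swing wave turning-point rule described above:
# a bar is a peak if its high exceeds the n highs on both sides, and a valley
# if its low is below the n lows on both sides. Not the paper's implementation.
def swing_wave_turning_points(highs, lows, n):
    """Return a list of (index, 'peak' | 'valley') turning points."""
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + n + 1]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + n + 1]
        if highs[i] > max(left_h) and highs[i] > max(right_h):
            points.append((i, "peak"))
        elif lows[i] < min(left_l) and lows[i] < min(right_l):
            points.append((i, "valley"))
    return points

# Example: an M-shaped price series yields peak / valley / peak turning points.
highs = [1, 3, 5, 4, 3, 4, 6, 5, 2]
lows  = [0, 2, 4, 3, 2, 3, 5, 4, 1]
print(swing_wave_turning_points(highs, lows, n=2))  # [(2, 'peak'), (4, 'valley'), (6, 'peak')]
```

Five consecutive turning points extracted this way can then be matched against the ten M & W pattern groups.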
Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also ran the simulation using the Walk-forward Analysis (WFA) method, which separates the optimization section from the application section, so we were able to respond appropriately to market changes (a sketch of the rolling procedure follows below). Because optimizing the variables for each individual stock carries a risk of over-optimization, we optimized at the level of the stock portfolio, and we set the number of constituent stocks to 20 to increase the diversification effect while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. In the results, the small-cap portfolio was the most successful and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but that more volatility is not always better.
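As a minimal sketch of walk-forward analysis under stated assumptions: `optimize` and `evaluate` are hypothetical stand-ins for the GA pattern search and the trading simulation, and the window lengths are invented; the point is only the rolling in-sample/out-of-sample split described above.

```python
# Minimal walk-forward analysis (WFA) skeleton: fit parameters on an in-sample
# window, apply them to the next out-of-sample window, then roll forward.
# optimize() and evaluate() are hypothetical stand-ins, not the paper's code.
def walk_forward(prices, train_len, test_len, optimize, evaluate):
    results = []
    start = 0
    while start + train_len + test_len <= len(prices):
        train = prices[start:start + train_len]
        test = prices[start + train_len:start + train_len + test_len]
        params = optimize(train)          # e.g., GA selects the pattern rules
        results.append(evaluate(test, params))
        start += test_len                 # roll both windows forward
    return results

# Toy usage with trivial stand-ins for the two callbacks.
prices = list(range(100))
print(walk_forward(prices, train_len=30, test_len=10,
                   optimize=lambda train: {"n": 2},
                   evaluate=lambda test, params: sum(test) / len(test)))
```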

A Study of the Health Service Computerization State and the Occupational Nurses's Satisfaction Level on Computerization (산업간호현장의 보건업무 전산화시스템 활용현황과 산업간호사의 전산화 직무만족도 연구)

  • Jung, Hee Young;Park, Hyoung-Sook
    • Korean Journal of Occupational Health Nursing / v.13 no.1 / pp.5-18 / 2004
  • This study aims to investigate the state of health service computerization in the occupational nursing field and occupational nurses' satisfaction with it, and to provide basic data to promote the development of health service computerization systems for the nursing field. A questionnaire was given to 118 occupational nurses belonging to the Busan and Gyeongnam branches of KAOHN (Korean Association of Occupational Health Nurses) over 2 months (from Dec. 1, 2002 to Jan. 31, 2003). A tool by Choi Yong-Heui (2000) was used to measure satisfaction with the health service computerization system. The collected data were analyzed as real numbers and percentages, means and standard deviations, t-tests, and ANOVA using the SPSS WIN 10.0 program. This study is summarized as follows: 1. The average age was 31.99 ± 5.58 years; 54.2% were married; 76.9% had graduated from a junior college. The average service period was 4.48 ± 4.68 years. By service type, 79.7% of participants served in a health care center, with an average service period of 3.22 ± 2.89 years; 35.6% served at workplaces with 1,000 or more workers. 2. Only 20.3% of participants had received computer-use education. 3. The function participants used most was communication/internet, for 3.29 ± 0.85 hours on average. 4. 97.1% of workplaces had computers and peripheral devices: 71.4% Pentium computers, 42.8% hard disk capacities of 20-29 GB, 60.0% 15-inch monitors, 86.2% printers, 18.1% digital cameras, 12.4% LAN, and 9.5% scanners; 80.1% of the workplaces studied had network access. 5. 62.8% of workplaces had not introduced a health service computerization system; the main cause was employers' insufficient recognition (66.6%), and 51.5% of the employers had no introduction plan. 37.2% of participating companies had a health service computerization system, and 56.4% of them had introduced it since the year 2000. For 81.6%, the motivation for introduction was the efficiency of health services. The biggest issue upon introduction was the insufficient understanding of the person in charge (25.6%). In-house development of the system accounted for 56.4%, and 61.5% of the participants had their demands reflected from the first stage of development. The direct effects of computerization were a 25.9% increase in the quickness and continuity of service processing and a 25.9% increase in the usability of statistical processing. 6. 22.0% of the participants had received education on using the computerization system; 69.2% of this was in-house education, 76.9% was taught by nurses who used the system, and 92.3% found the education helpful for practical duties. 7. An analysis of computer use by health service area showed: medicine management in the health management area, 15.9%; work environment measurement management in the work environment area, 32.9%; employment, general, and special examination management in the health management area, 61.1%; various reports management in the administrative area, 64%; health education data preparation management in the educational area, 58.0%; and medicine and expendables management in the equipment management area, 51.6%. An analysis of computerization system use showed: various statistical data management in the health management area, 13.0%; work environment measurement management in the health management area, 34.8%; personal disease management in the health management area, 51.9%; health education data preparation management in the educational area, 54.5%; and equipment management of health care centers in the equipment management area, 52.6%. 8. 31.6% of the participants wanted the health service computerization system to cover health services in general; 42.4% thought that, above all, the active interest and investment of employers were required to build such a system. 9. The participants' satisfaction with the computerization system was 3.51 ± 0.57 points: 3.62 ± 0.68 points for the service change factor, 3.15 ± 0.63 points for the computer program use factor, and 3.45 ± 0.71 points for the continuous computerization use factor. 10. An analysis by general characteristics showed that married participants (p=.022) were more satisfied than unmarried ones. 11. Satisfaction with the computerization system tended to rise with computer-use ability in spreadsheets (F=2.606, p=.048), presentation (F=3.62, p=.012), and communication/internet (F=2.885, p=.032). Based on the results above, I suggest the following: a nationwide, repeated study of occupational nurses is required; computerization in the health service field lags behind other fields, so computerization standards by business type and characteristics should be prepared through employers' active participation and national support, so that the various statistical data generated in occupational fields can be managed systematically and efficiently; and a regular, systematic computer education plan for occupational nurses in charge of health services in the field is urgently required to efficiently manage and improve the health of on-site workers.

Usefulness assessment of secondary shield for the lens exposure dose reduction during radiation treatment of peripheral orbit (안와 주변 방사선 치료 시 수정체 피폭선량 감소를 위한 2차 차폐의 유용성 평가)

  • Kwak, Yong Kuk;Hong, Sun Gi;Ha, Min Yong;Park, Jang Pil;Yoo, Sook Hyun;Cho, Woong
    • The Journal of Korean Society for Radiation Therapy / v.27 no.1 / pp.87-95 / 2015
  • Purpose: This study assesses the usefulness of secondary shielding for reducing the lens exposure dose during radiation treatment of the peripheral orbit. Materials and Methods: We created an IMRT treatment plan similar to a real one in the computerized treatment planning system after CT simulation using a human phantom. For the secondary shield, we used a Pb plate (thickness 3 mm, diameter 25 mm) and a 3 mm tungsten eye-shield block, and we compared the lens dose from the TPS with that measured by OSLD in the simulation. We also irradiated 200 MU (6 MV, SPD (Source to Phantom Distance) = 100 cm, field size 5 × 5 cm) on a 5 cm acrylic phantom using secondary shielding materials under the same conditions, the 3 mm Pb plate and the tungsten eye-shield block, and carried out the same experiment using an 8 cm Pb block to limit the effect of leakage and transmitted radiation from outside the irradiation field. We attached OSLDs 1 cm away from the field at the side of the phantom and applied a 3 mm bolus equivalent to the thickness of the eyelid. Results: Using the human phantom, the lens dose in the IMRT treatment plan was 315.9 cGy and the measured value was 216.7 cGy. After secondary shielding with the 3 mm Pb plate and the tungsten eye-shield block, the lens doses were 234.3 and 224.1 cGy, respectively. In the experiment using the acrylic phantom, the values were 5.24, 5.42, and 5.39 cGy for no block, the 3 mm Pb plate, and the tungsten eye-shield block, respectively. Applying O.S.B out of the field, the values were 1.79, 2.00, and 2.02 cGy for no block, the 3 mm Pb plate, and the tungsten eye-shield block, respectively. Conclusion: When a secondary shielding material is used to protect a critical organ during photon irradiation, a high-atomic-number material (such as a metal) placed near the critical organ can increase the dose, depending on the treatment region and beam direction, because head leakage and collimator & MLC transmitted radiation exist even outside the field. The attempt at secondary shielding to decrease the exposure dose was meaningful, but an untested attempt can have the reverse effect, so a preliminary inspection through QA is necessary.
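As a quick check on the reported numbers, the sketch below (illustrative only; it just re-uses the dose values quoted in the abstract) computes the relative dose change introduced by each shield, and reproduces the abstract's core observation: in these set-ups the added shields increased the measured lens dose rather than reducing it.

```python
# Illustrative arithmetic using the dose values quoted in the abstract (cGy).
def pct_change(baseline, shielded):
    """Relative dose change introduced by a shield, in percent."""
    return 100.0 * (shielded - baseline) / baseline

# Human phantom, measured lens dose: no shield vs. 3 mm Pb vs. tungsten block.
print(pct_change(216.7, 234.3))  # ~ +8.1%  (Pb plate raised the dose)
print(pct_change(216.7, 224.1))  # ~ +3.4%  (tungsten block raised the dose)

# Acrylic phantom, out-of-field OSLD readings: the shields again raise the dose.
print(pct_change(1.79, 2.00))    # ~ +11.7%
print(pct_change(1.79, 2.02))    # ~ +12.8%
```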

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings with statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. One major drawback, however, is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited their application to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories; it is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and searches for a minimum of an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many training samples, since it builds prediction models using only the representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM can be successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not perform as well on multi-class problems as SVM does on binary problems. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction suffers from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another; such data sets often produce a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach to these drawbacks. Ensemble learning improves the performance of classification and prediction algorithms; AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations incorrectly predicted by previous classifiers are chosen more often than those correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, which reinforces the training of the misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can account for geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. In each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine; that is, the cross-validated folds were tested independently for each algorithm. Through these steps we obtained results for the classifiers on each of the 30 experiments. In arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperforms both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher geometric mean-based prediction accuracy than AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of the classifiers over the 30 folds differs significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
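The abstract does not give the exact MGM-Boost update, so the following is only a minimal sketch of the underlying idea: it contrasts arithmetic-mean and geometric-mean accuracy over classes, showing why the geometric mean penalizes a classifier that neglects a minority class (such as a rarely assigned rating grade). All names and the toy data are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): arithmetic- vs. geometric-
# mean accuracy over classes. The geometric mean collapses toward zero when
# any single class (e.g., a minority rating grade) is predicted poorly.
from math import prod

def per_class_accuracy(y_true, y_pred):
    classes = sorted(set(y_true))
    acc = {}
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        acc[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    return acc

def arithmetic_mean_acc(acc):
    return sum(acc.values()) / len(acc)

def geometric_mean_acc(acc):
    return prod(acc.values()) ** (1 / len(acc))

# Toy example: class "C" is a minority grade the classifier mostly misses.
y_true = ["A"] * 5 + ["B"] * 5 + ["C"] * 5
y_pred = ["A"] * 5 + ["B"] * 5 + ["A", "A", "A", "A", "C"]
acc = per_class_accuracy(y_true, y_pred)
print(acc)                       # {'A': 1.0, 'B': 1.0, 'C': 0.2}
print(arithmetic_mean_acc(acc))  # ~0.73
print(geometric_mean_acc(acc))   # ~0.58, minority-class errors weigh more
```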

A Study on Legal and Regulatory Improvement Direction of Aeronautical Obstacle Management System for Aviation Safety (항공안전을 위한 장애물 제한표면 관리시스템의 법·제도적 개선방향에 관한 소고)

  • Park, Dam-Yong
    • The Korean Journal of Air & Space Law and Policy / v.31 no.2 / pp.145-176 / 2016
  • Aviation safety can be secured through regulations and policies in various areas and their thorough execution in the field. Recently, for aviation safety management, Korea has been making efforts to prevent aviation accidents through various measures: selecting and promoting major strategic goals for each sector; establishing the National Aviation Safety Program, including the Second Basic Plan for Aviation Policy; and improving aviation-related legislation. Obstacle limitation surfaces are established and publicly notified to ensure safe take-off and landing as well as aviation safety while aircraft circle around airports. This study reviews the current aeronautical obstacle management system, which is designed to ensure that buildings and structures do not exceed the height of the obstacle limitation surface, and identifies its operating problems based on my field experience. I also propose ways to improve the system in legal and regulatory respects. Nowadays, at the request of residents in the vicinity of airports, discussions and studies on aeronautical reviews are being actively carried out, and a related ordinance and specific procedures will be established soon. Beyond this, however, I would like to propose ways to remedy the shortcomings of the current system caused by the lack of regulations and legislation for obstacle management. Enforcing the obstacle limitation surface requires limits on the construction of new buildings, which genuinely restricts the exercise of property rights by residents living near airports; it is therefore a sensitive issue, with many related civil complaints filed and swift but accurate decision-making required. Under the Aviation Act, airport operators currently handle this task in cooperation with local governments, so the administrative activities of the local governments that have the authority to permit the installation of buildings and structures are critically important. The law requires precise surveying of a vast area and reporting of the outcome to the government every five years. However, many problems can arise, such as changes in the counted number of obstacles due to survey error, or failure to consult with local governments on the exercise of construction permission. Yet there are neither standards for allowable error, nor preventive measures, nor penalties for violating the required procedures, so only follow-up measures can be taken. Moreover, once the construction of a building that violates the obstacle limitation surface is completed, it is practically difficult to take any measures, including removal of the building, because the owner will have followed the legal process by obtaining a permit from the government. To address this problem, I believe a penalty provision for violations of the Aviation Act needs to be added, and the standards of allowable error stipulated in the Building Act should also be applied to precise surveying in the aviation field. On this basis, I propose ways to improve the current system in an effective manner.

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location of the audience, or to improve interaction between exhibits and audience by analyzing people's usage patterns. To provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it obtains the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking services. However, because GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking use very-short-range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have the shortcomings that the audience must carry an additional sensor device and that the system becomes difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment has many obstacles for the network, which degrades system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide natural service to users. Moreover, because the system relies on sensor recognition, every user must carry a device, which limits the number of users who can use the system simultaneously. To make up for these shortcomings, this study suggests a technology that obtains exact user location information through location mapping using Wi-Fi from users' smartphones together with 3D cameras. We applied the signal amplitude of wireless LAN access points to develop a low-cost indoor location tracking system: an AP is cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone can be used directly as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect can discriminate depth and human-body information inside the shooting area, so it is appropriate for extracting the user's body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users: they are connected to the Camera Client, which calculates the mapping information aligned to each cell and obtains the users' exact locations as well as the status and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location service, increases the accuracy of individual discrimination within an area through body-information-based discrimination, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
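The abstract does not spell out the mapping algorithm, so the following is only a rough sketch of the coarse stage under the stated design (Wi-Fi cell ID for coarse position, a per-cell 3D camera for fine position): it assigns a user to the cell whose access point shows the strongest received signal. The AP-to-cell table, names, and the scan input are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of the coarse localization stage described above: assign a
# visitor to the cell whose Wi-Fi access point has the strongest RSSI.
# Fine positioning within the cell is then left to the Kinect-based client.
AP_TO_CELL = {
    "ap-entrance": "cell-1",
    "ap-gallery-a": "cell-2",
    "ap-gallery-b": "cell-3",
}

def locate_cell(rssi_scan):
    """rssi_scan maps AP identifiers to RSSI in dBm (closer to 0 = stronger)."""
    known = {ap: dbm for ap, dbm in rssi_scan.items() if ap in AP_TO_CELL}
    if not known:
        return None
    strongest_ap = max(known, key=known.get)
    return AP_TO_CELL[strongest_ap]

# Example scan taken by a visitor's smartphone.
scan = {"ap-entrance": -71, "ap-gallery-a": -54, "ap-gallery-b": -80}
print(locate_cell(scan))  # -> "cell-2"
```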

The Trend of Cigarette Design and Tobacco Flavor System Development

  • Wu, Jimmy Z.
    • Journal of the Korean Society of Tobacco Science / v.24 no.1 / pp.67-73 / 2002
  • In light of addressing consumer health concerns, coping with the anti-tobacco movement, and promoting new products, the tobacco industry is actively pursuing a new generation of cigarettes with low tar and nicotine deliveries and fewer harmful substances. Low-tar, low-nicotine cigarettes have increased their market share dramatically worldwide: at KT&G, among the multinational tobacco companies, in EU countries, and even in China, where the CNTC sets yearly targets for lowering tar and nicotine deliveries. On the other hand, designing new cigarettes with reduced harmful substances is beginning to gain speed. The "modified Hoffmann list" identifies thirty-plus substances in tobacco leaf and mainstream smoke that are the prime suspects for causing health problems. Various ways and means have been developed to reduce such components, including new tobacco breeds, new curing methods, tobacco leaf treatment before processing, selective filtration systems, and innovative casing systems to reduce free radicals, as well as some non-conventional cigarette products. At the TSRC held this year, the main topic was the reduction of tobacco-specific nitrosamines in tobacco leaf. The new generation of cigarettes is on the horizon, but much help is still needed to produce commercial products with satisfying taste and aroma characters. The flavor industry is not regulated by many governments as to which ingredients may or may not be used in tobacco; however, most cigarette companies self-impose a list of ingredients to guide flavor suppliers in designing flavors. Unfortunately, those lists are getting shorter every year, and health is understandably not the only reason: some cigarette companies are playing it safe to protect themselves from potential lawsuits, while others simply copy their competitors. Moreover, more assistance from casings and flavors is clearly needed to design the new generation of cigarettes, which lack certain flavor components in the tobacco leaf and mainstream smoke. These components either do not exist, or exist at lower levels, in the new forms of cured tobacco leaf, or are filtered out of the mainstream smoke along with the reduced harmful substances. The use of carbon filters and other selective filtration systems poses another tough task for flavor system design: specific flavor components are missing from the smoke analysis data, which brings a notion of "carbon taste" and a "dryness" of mouthfeel. The cigarette industry increasingly demands that flavor suppliers produce flavors as body enhancers, tobacco notes, salivating agents, harshness reducers, and various aromatic notes, provided they are safe to use. Another trend is that water-based flavors, or flavors with reduced ethanol as the solvent, are gaining popularity, and some cigarette companies prefer flavors compounded entirely from natural ingredients, or require all ingredients to be GMO-free. The new generation of cigarettes demands many new ways of thinking, which is vital for the tobacco industry. It reflects the real needs of consumers: cigarettes should be as safe to use as possible while bearing the taste and aroma characters smokers have always enjoyed, and an effective tobacco flavor system is definitely part of the equation. The global trend of the tobacco industry, like that of any other industry, is led by consumer needs, benefits from newly available technology, is affected by the global economy, and is subject to various rules and regulations. Anti-tobacco organizations and the media exceptionally scrutinize the cigarette as a legal commercial product; it is probably the most-studied commercial product in terms of its composition, structure, deliveries, and effects, as well as its developmental trends, so any new trend in cigarette development will stay within these boundaries. This paper tries to point out what the next few years may look like for the tobacco industry and what concerns it, focusing mostly on efforts to produce safer cigarettes, a vital task for the tobacco industry and its affiliated industries such as cigarette papers, filters, flavors, and other materials. The facts and knowledge presented here may be well known to the public; some of the comments and predictions are very much personal opinion, offered for further discussion.

Management and Use of Oral History Archives on Forced Mobilization -Centering on oral history archives collected by the Truth Commission on Forced Mobilization under the Japanese Imperialism Republic of Korea- (강제동원 구술자료의 관리와 활용 -일제강점하강제동원피해진상규명위원회 소장 구술자료를 중심으로-)

  • Kwon, Mi-Hyun
    • The Korean Journal of Archival Studies / no.16 / pp.303-339 / 2007
  • "The damage incurred from forced mobilization under the Japanese Imperialism" means the life, physical, and property damage suffered by those who were forced to lead a life as soldiers, civilians attached to the military, laborers, and comfort women forcibly mobilized by the Japanese Imperialists during the period between the Manchurian Incident and the Pacific War. Up to the present time, every effort to restore the history on such a compulsory mobilization-borne damage has been made by the damaged parties, bereaved families, civil organizations, and academic circles concerned; as a result, on March 5, 2004, Disclosure act of Forced Mobilization under the Japanese Imperialism[part of it was partially revised on May 17, 2007]was officially established and proclaimed. On the basis of this law, the Truth Commission on Forced Mobilization under the Japanese Imperialism Republic of Korea[Compulsory Mobilization Commission hence after] was launched under the jurisdiction of the Prime Minister on November 10, 2004. Since February 1, 2005, this organ has begun its work with the aim of looking into the real aspects of damage incurred from compulsory mobilization under the Japanese Imperialism, by which making the historical truth open to the world. The major business of this organ is to receive the damage report and investigation of the reported damage[examination of the alleged victims and bereaved families, and decision-making], receipt of the application for the fact-finding & fact finding; fact finding and matters impossible to make judgment; correction of a family register subsequent to the damage judgement; collection & analysis of data concerning compulsory mobilization at home and from abroad and writing up of a report; exhumation of the remains, remains saving, their repatriation, and building project for historical records hall and museum & memorial place, etc. The Truth Commission on Compulsory Mobilization has dug out and collected a variety of records to meet the examination of the damage and fact finding business. As is often the case with other history of damage, the records which had already been made open to the public or have been newly dug out usually have their limits to ascertaining of the diverse historical context involved in compulsory mobilization in their quantity or quality. Of course, there may happen a case where the interested parties' story can fill the vacancy of records or has its foundational value more than its related record itself. The Truth Commission on Compulsory mobilization generated a variety of oral history records through oral interviews with the alleged damage-suffered survivors and puts those data to use for examination business, attempting to make use of those data for public use while managing those on a systematic method. The Truth Commission on compulsory mobilization-possessed oral history archives were generated based on a drastic planning from the beginning of their generation, and induced digital medium-based production of those data while bearing the conveniences of their management and usage in mind from the stage of production. In addition, in order to surpass the limits of the oral history archives produced in the process of the investigating process, this organ conducted several special training sessions for the interviewees and let the interviewees leave their real context in time of their oral testimony in an interview journal. 
The Truth Commission on compulsory mobilization isn't equipped with an extra records management system for the management of the collected archives. The digital archives are generated through the management system of the real aspects of damage and electronic approval system, and they plays a role in registering and searching the produced, collected, and contributed records. The oral history archives are registered at the digital archive and preserved together with real records. The collected oral history archives are technically classified at the same time of their registration and given a proper number for registration, classification, and keeping. The Truth Commission on compulsory mobilization has continued its publication of oral history archives collection for the positive use of them and is also planning on producing an image-based matters. The oral history archives collected by this organ are produced, managed and used in as positive a way as possible surpassing the limits produced in the process of investigation business and budgetary deficits as well as the absence of records management system, etc. as the form of time-limit structure. The accumulated oral history archives, if a historical records hall and museum should be built as regulated in Disclosure act of forced mobilization, would be more systematically managed and used for the public users.

A Study on Nursing Service of Chronic Diseases by the First Step and Third Step Medical Treatment (1차 및 3차 진료기관 이용 만성질환자의 간호서비스에 관한 연구)

  • Cho Chong Sook
    • Journal of Korean Public Health Nursing / v.10 no.2 / pp.103-118 / 1996
  • Interest in community health services delivered through visiting nursing care is growing. Korea's health care environment has changed greatly over this period: the juvenile population has decreased, which means that, as in advanced nations, the population structure is aging. The disease pattern has shifted in importance from infectious diseases to chronic diseases as the population has undergone rapid urbanization, with the urban share growing to 70% of the whole population. As nursing services come to be delivered from all directions to the home, managing adult chronic disease, a core problem for today's health care institutions, becomes critically important. This study therefore compares nursing services for chronic diseases between (1) communities and patient characteristics and (2) first-step (primary) and third-step (tertiary) medical treatment institutions, which differ in the quantity of delivery manpower and, especially, qualitatively in the relationship between consumers and providers. The subjects were drawn from a district health center in Seoul, on the basis of the nurse registration records of H college hospital and the public health registration records. Matching by chronic disease, age, and sex, a content analysis of nursing records was conducted for 54 people from each setting, 108 people in total. The results of the study were as follows: 1. General background factors were housing, kind of medical facility, and the number of patients in the family. The primary institutions had more patients than the tertiary institution, and the economic environment of primary-care patients appeared to be worse. 2. The frequency of chronic diseases differed by institution: in primary care, diabetes and high blood pressure patients were in relatively good condition, but many cerebrovascular accident (CVA) patients were bedridden. In addition, except for CVA, families were comparatively large in terms of mobility and health relative to primary care; however, although the potential resources implied by the number and structure of family members are increasing, the share of intact families is decreasing, so the capacity for real resources is considered low. 3. The average amount of nursing service differed between the two groups: 33.72 instances for primary care and 45.70 for tertiary care. However, the difference between primary and tertiary care was limited to issuing medicine within the services; the predominant form of care was indirect nursing care, and the supporting area was related to volunteer service and administrative support. 4. The average amount of the various nursing services increased in the order of diabetes, high blood pressure, and CVA for examination and medical treatment, and likewise for indirect nursing care; in tertiary care the order of chronic diseases was the same, and the other service areas also increased in the order of diabetes, high blood pressure, and CVA. However, no difference appeared in problem-handling nursing care. 5. Regarding demand for visiting nursing care, in primary care the average amount of nursing service was greater for households of one person or fewer than for those of two or more, although the difference was not statistically significant. In tertiary care the figure was 49.81% for households of one person or fewer and 12.83 for two or more; the average nursing service appeared greater where the demand for visiting nursing care was smaller. 6. The correlation between the average amount of nursing care and the frequency of visits was large: the correlation coefficient was 0.96 for primary care but fell to 0.49 for tertiary care. The average amount of nursing care is therefore related to the number of nurse visits, and this relationship is clearly stronger in primary care than in tertiary care.

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies, which in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze information technology in detail, while the latter is unable to identify the relationships between such technologies. To overcome the limitations of both, this study blends the two methods and suggests a keyword network-based analysis methodology. We collected significant technology information from each patent related to Light Emitting Diodes (LED) through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties; the value ranges between 0 and 1, with higher values indicating denser networks and lower values sparser ones. In real-world networks, density varies with the size of the network: increasing the size of a network generally decreases its density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster in densely interconnected modules; it reflects the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite a large number of nodes. Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely, while the high clustering coefficient shows that the nodes that are connected cluster closely with one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation networks or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attract further new links as the network evolves. Unlike normal distributions, a power-law distribution has no representative scale: one cannot pick a representative or average value, because there is always a considerable probability of finding much larger values. Networks with power-law degree distributions are therefore often called scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of the emergent collective behavior of the actors who form the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution implies that preferential attachment explains the origin of heavy-tailed distributions in a wide range of growing patent keyword networks. Third, we found that among the keywords flowing into a particular field, the vast majority of keywords with new links join existing keywords in the associated community to form the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the suggested methodology enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
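As an illustrative sketch (not the authors' code), the following computes the two network-level measures discussed above, density and the average clustering coefficient, on a toy keyword co-occurrence network using the networkx library; the keyword pairs are invented for the example, not taken from the paper's LED patent data.

```python
# Illustrative sketch of the network measures discussed above, computed on a
# toy keyword co-occurrence network with networkx. The keyword pairs are
# invented; they are not from the paper's LED patent data.
import networkx as nx

# Each edge joins two keywords that co-occur in at least one (toy) patent.
edges = [
    ("LED", "phosphor"), ("LED", "substrate"), ("LED", "heat sink"),
    ("phosphor", "substrate"), ("substrate", "epitaxy"),
    ("heat sink", "package"), ("package", "LED"),
]
G = nx.Graph(edges)

# Density: ties present divided by all possible ties (0 = empty, 1 = complete).
print(nx.density(G))

# Average clustering coefficient: tendency of a node's neighbors to also be
# connected, the measure behind the small-world observation in the abstract.
print(nx.average_clustering(G))

# Degree sequence: in the paper's full network this follows a power law
# (scale-free), consistent with the preferential attachment mechanism.
print(sorted((node, deg) for node, deg in G.degree()))
```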