• Title/Summary/Keyword: Distance Training


A STUDY OF INSERTION DEPTH OF GUTTA PERCHA CONES AFTER SHAPING BY NI-TI ROTARY FILES IN SIMULATED CANALS (레진모형 근관에서 Ni-Ti 파일로 근관성형 후 거타퍼챠콘의 근관내 삽입깊이에 대한 연구)

  • Cho, Hyun-Gu;Hwang, Yun-Chan;Hwang, In-Nam;Oh, Won-Mann
    • Restorative Dentistry and Endodontics
    • /
    • v.32 no.6
    • /
    • pp.550-558
    • /
    • 2007
  • The purpose of this study was to evaluate the insertion depth of several brands of master gutta percha cones after shaping by various Ni-Ti rotary files in simulated canals. Fifty resin simulated J-shaped canals were instrumented with ProFile, ProTaper, and HEROShaper. Simulated canals were prepared with ProFile .04 taper #25 (n=10), ProFile .06 taper #25 (n=10), ProTaper F2 (n=10), HEROShaper .04 taper #25 (n=10), and HEROShaper .06 taper #25 (n=10). Size #25 gutta percha cones with .04 and .06 tapers from three different brands were used: DiaDent, META, and Sure-endo. The gutta percha cones were selected and inserted into the prepared simulated canals, and the distance from the apex of the prepared canal to the gutta percha cone tip was measured with an image analysis program. Within the limited data of this study, the results were as follows. 1. When the simulated root canals were prepared with HEROShaper, the gutta percha cones were closely adapted to the root canal. 2. All brands of gutta percha cones failed to reach the prepared length in canals instrumented with ProFile, whereas the cones extended beyond the prepared length in canals prepared with ProTaper. 3. In canals instrumented with HEROShaper .04 taper #25, the Sure-endo .04 taper master gutta percha cone was well fitted (p < 0.05). 4. In canals instrumented with HEROShaper .06 taper #25, the META .06 taper master gutta percha cone was well fitted (p < 0.05). We conclude that the insertion depth of the master gutta percha cones of all brands does not match the rotary instrument, even when the canal is prepared with the crown-down technique recommended by the manufacturer. Therefore, the master cone should be carefully selected to match the depth of the prepared canal for adequate obturation.

Beauty Shop Workers' Views of Job (미용사의 직무만족도와 직업관)

  • Oh, Ai-Ja;Nam, Chul-Hyun
    • The Journal of Korean Society for School & Community Health Education
    • /
    • v.2 no.1
    • /
    • pp.67-84
    • /
    • 2001
  • This study was conducted to examine beauty shop workers' views of their job. Data were collected from workers in Seoul, Daegu, Pohang, Junjoo, and Kimhae from June 1, 2000 to August 31, 2000. The results of this study are summarized as follows. 1. According to the general characteristics of the subjects, 28.7% of them were female; 94.2% 'specialized in hair'; 46.4% 'below twenty-nine years old'; 47.1% 'married'; 59.7% 'high school graduate'; 33.9% 'worked for below three years'; 28.5% 'monthly income of five hundred thousand to nine hundred ninety thousand won'; 62.3% 'working for above twelve hours a day'; 41.0% 'above five workers'; 40.6% 'working in a city'. 2. 54.8% of the respondents thought that they were in good health. 76.3% of them smoked and 54.8% drank. 62.8% of them did not exercise and 78.7% were under stress. 61.5% responded that they chose the job because of its potential as a professional vocation. 91.0% of them learned their beauty skills at beauty schools. 3. Among the factors which influenced job satisfaction, 'stable job and life security' was highest (43.9%), while 'interest in the job and amount of pay' was lowest (3.2%). 'Personal ability and use of originality' was 19.4% and 'harmonious relationship with fellow workers' was 18.1%. 'Job environment' was 7.1% and 'harmonious relationship with higher workers' was 4.5%. 4. The level of the workers' view of job was 113.8 ± 17.3 points out of 150 points. Out of 75 points, the individual items scored as follows: self-development (22.3 ± 3.8), service for customers (20.1 ± 3.1), vocational mission (15.6 ± 3.1), harmony with others (18.9 ± 3.5), working environment (18.6 ± 3.6), and working condition (14.3 ± 5.1). 5. Among the reasons why they considered leaving the job, 24.0% of them considered it because they could not have free time, while 15.4% considered it because of an undesirable living environment or a long distance from home. 15.0% considered it because they did not receive treatment proportionate to their work, and 12.8% because they were overworked. 6. When they move to new workplaces, they consider such factors as a good working environment (24.1%), a good place to open their own beauty shop (16.7%), a good beauty shop in which to learn beauty skills (15.6%), the chance to have job training (9.5%), and a short distance from home (9.0%). 7. 40.6% of the respondents wanted to leave the job, while 32.3% of them did not. The intention to leave the job differed significantly by age, working period, monthly income, marital status, number of workers, location of the shop, rank, and reason for selecting the job. 8. According to the results of a regression analysis of factors which influenced job satisfaction, it was affected significantly by intention to leave the job, number of workers, health condition, level of stress, and monthly income. The beauty shop workers showed low satisfaction with working environment, working condition, and vocational mission. They considered leaving the job because of lack of free time, overwork, poor working environment, improper treatment, etc. Therefore, related professionals and organizations must devise adequate measures to help them work with pride as creators of beauty.


The Analysis of Future Promising Industries of Busan and Marine Policy in the Era of the Northern Sea Route (북극항로 시대에 대비한 부산지역의 미래성장 유망산업 및 정책 평가에 관한 연구)

  • Ryoo, Dong-Keun;Nam, Hyung-Sik
    • Journal of Korea Port Economic Association
    • /
    • v.30 no.1
    • /
    • pp.175-194
    • /
    • 2014
  • Because the thawing of the Arctic Ocean is slowly accelerating due to global warming, exploring resources in the Arctic Ocean and transporting them via the Northern Sea Route have recently been getting the spotlight. Since the route from Korea to Europe via the Suez Canal could be shortened by about 8,000 km (a decrease of about 38% compared to the original route), saving about 10 sailing days, the new route is expected to bring a huge reduction in logistics costs. Once the Northern Sea Route is commercialized successfully, it would be one of the most important variables affecting the future of Busan port and guiding the economic development of Busan. Therefore, the purpose of this study is to analyze Busan port and the economic growth of the Busan area by identifying promising industries, based on the effect that freight transported via the Northern Sea Route would have on the economy of Busan. For this study, questionnaire surveys and interviews were conducted with 64 experts from the shipping and port industry, relevant government bodies, and academia. The survey findings show that the port logistics industry is a promising business in Busan in terms of its growth and competitiveness. As a short- and medium-term plan, it is necessary to develop feeder network facilities in preparation for commercialization of the Northern Sea Route and to provide professional manpower training for polar regions. The ship supply business would also play an important role. It is identified that revitalization of the shipbuilding and ocean plant industries should be pursued with regard to Arctic business. With regard to the fishery industry, it is found that modernization of fishing vessels and development of fishing equipment for use in polar areas should be carried out.

Region of Interest Extraction and Bilinear Interpolation Application for Preprocessing of Lipreading Systems (입 모양 인식 시스템 전처리를 위한 관심 영역 추출과 이중 선형 보간법 적용)

  • Jae Hyeok Han;Yong Ki Kim;Mi Hye Kim
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.189-198
    • /
    • 2024
  • Lipreading is one of the important parts of speech recognition, and several studies have been conducted to improve the performance of lipreading systems for speech recognition. Recent studies have used methods that modify the model architecture of the lipreading system to improve recognition performance. Unlike previous research that improves recognition performance by modifying the model architecture, we aim to improve recognition performance without any change to the model architecture. To do so, we refer to the cues used in human lipreading and set other regions, such as the chin and cheeks, as regions of interest along with the lip region, which is the existing region of interest of lipreading systems, and compare the recognition rate of each region of interest to propose the best-performing one. In addition, assuming that differences in normalization results caused by the choice of interpolation method during normalization of the region-of-interest size affect recognition performance, we interpolate the same region of interest using nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation, and compare the recognition rate of each interpolation method to propose the best-performing one. Each region of interest was detected by training an object detection neural network, and dynamic time warping templates were generated by normalizing each region of interest, extracting and combining features, and mapping the dimensionality-reduced combined features into a low-dimensional space. The recognition rate was evaluated by comparing the distance between the generated dynamic time warping templates and the data mapped to the low-dimensional space. In the comparison of regions of interest, the region of interest containing only the lip region showed an average recognition rate of 97.36%, which is 3.44% higher than the average recognition rate of 93.92% in the previous study; in the comparison of interpolation methods, bilinear interpolation achieved 97.36%, which is 14.65% higher than nearest neighbor interpolation and 5.55% higher than bicubic interpolation. The code used in this study can be found at https://github.com/haraisi2/Lipreading-Systems.
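  The abstract above compares nearest neighbor, bilinear, and bicubic interpolation when normalizing the detected region of interest. Below is a minimal sketch of that preprocessing step, assuming OpenCV and NumPy; the frame, the ROI box, and the target size are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: normalizing a detected region of interest to a fixed size
# with the three interpolation methods compared in the abstract.
# Assumes OpenCV (cv2) and NumPy; ROI box and target size are placeholders.
import cv2
import numpy as np

def normalize_roi(frame: np.ndarray, box: tuple, size=(96, 96), method=cv2.INTER_LINEAR):
    """Crop the ROI given by box = (x, y, w, h) and resize it to a fixed size."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    return cv2.resize(roi, size, interpolation=method)

# The three interpolation methods mentioned in the abstract.
methods = {
    "nearest": cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,
    "bicubic": cv2.INTER_CUBIC,
}

frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in video frame
lip_box = (300, 260, 120, 80)  # hypothetical detector output (x, y, w, h)

normalized = {name: normalize_roi(frame, lip_box, method=m) for name, m in methods.items()}
for name, img in normalized.items():
    print(name, img.shape)
```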

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Baduk (Go) artificial intelligence program by Google DeepMind, won a decisive victory against Lee Sedol. Many people thought that a machine would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper is the telemarketing response data of a bank in Portugal. It has input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network model. However, since not all network design alternatives can be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons in each hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate the models, since it shows how well a model classifies the class of interest rather than overall accuracy. The details of applying each deep learning technique in the experiment are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but the distance between business data fields usually carries no meaning because the fields are largely independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole record is read at once, and added a hidden layer to make the decision based on the resulting features. For the model with two LSTM layers, the input direction of the second layer is reversed with respect to the first layer in order to reduce the influence of the position of each field. In the case of the dropout technique, we set neurons to be dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout. The experiments yielded several findings. First, models using dropout make slightly more conservative predictions than those without dropout and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNNs performed well not only in the fields where their effectiveness has been proven but also in binary classification problems to which they have rarely been applied. Third, the LSTM algorithm seems unsuitable for binary classification problems because the training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to solve binary classification problems in business.
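  The abstract describes a 1D CNN whose filter spans all input fields, an additional hidden layer, and dropout with probability 0.5 applied to tabular binary classification. A minimal Keras sketch of that kind of setup is given below; the number of fields, layer widths, and data are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch of a 1D-CNN-plus-dropout model for tabular binary classification,
# in the spirit of the setup described in the abstract. Layer sizes and the data
# shape are assumptions, not the study's actual architecture.
import numpy as np
from tensorflow.keras import layers, models

n_fields = 16  # hypothetical number of input fields in the tabular data

model = models.Sequential([
    # Treat each record as a length-n_fields "sequence" with one channel so a
    # convolution whose kernel spans all fields reads the whole record at once.
    layers.Input(shape=(n_fields, 1)),
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu"),
    layers.Flatten(),
    layers.Dropout(0.5),                    # drop hidden units with probability 0.5
    layers.Dense(32, activation="relu"),    # additional hidden layer for the decision
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # binary output (respond / not respond)
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Toy data with the assumed shape; in the paper this would be the bank
# telemarketing records and their binary response labels.
X = np.random.rand(256, n_fields, 1).astype("float32")
y = np.random.randint(0, 2, size=(256, 1))
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
```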

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask the authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not yet benefit from keywords. Although the potential benefit is large, the implementation itself is the obstacle; manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords and, as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to the experimental results of Turney, about 64% to 90% of keywords assigned by authors can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
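  Steps (1) through (5) above amount to ranking candidate keyword sets by their cosine similarity to the target document's term-frequency vector. A minimal sketch of that idea follows, assuming toy keyword sets, hand-picked weights, and a whitespace tokenizer; it is an illustration of the general scheme, not the authors' implementation.

```python
# Minimal sketch of the keyword-assignment step sketched in the abstract:
# each candidate keyword set is a weighted term vector, the target document is a
# term-frequency vector, and keywords come from the most similar sets.
# Keyword sets, weights, and tokenization are placeholders for illustration.
import math
from collections import Counter

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: hypothetical keyword sets with term weights.
keyword_sets = {
    "port logistics": {"port": 0.8, "logistics": 0.7, "shipping": 0.4},
    "keyword generation": {"keyword": 0.9, "extraction": 0.5, "document": 0.4},
}

# Steps 2-3: parse the target document and build its term-frequency vector.
document = "automatic keyword generation for document retrieval and document clustering"
doc_vector = dict(Counter(document.lower().split()))

# Steps 4-5: rank keyword sets by cosine similarity and keep the best ones.
ranked = sorted(keyword_sets.items(), key=lambda kv: cosine(doc_vector, kv[1]), reverse=True)
for name, _ in ranked[:1]:
    print("assigned keyword set:", name)
```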