• Title/Summary/Keyword: Computer programming


An Examination of the Course Syllabi Related to Data Librarian in the ALA-accredited Library and Information Science Degree Programs (ALA인가 문헌정보학 학위 과정의 데이터 사서 양성과 관련된 교과목의 강의계획서 분석)

  • Hyoungjoo Park
    • Journal of Korean Library and Information Science Society
    • /
    • v.54 no.4
    • /
    • pp.307-334
    • /
    • 2023
  • The purpose of this study is to examine the status of data librarian-related course syllabi in the 2023 American Library Association (ALA)-accredited degree programs in Library and Information Science (LIS). The present study examined LIS course syllabi related to data librarianship, including course titles, course objectives, course descriptions, weekly topics, and assignments. ALA-accredited LIS programs offer various courses in data librarianship, such as data management and curation, data analysis and visualization, metadata, information services, research methods, library management, academic libraries, computer programming, and databases. This study collected 184 syllabi from the ALA-accredited LIS programs and selected and analyzed the 127 syllabi related to data librarianship. The study examined 3,045 course titles and 2,559 course descriptions from 61 LIS degree programs overseas, and 1,330 course titles from 37 LIS degree programs in Korea. This study found that LIS degree programs both in Korea and overseas offer various courses for data librarians. The researcher hopes the findings of this study will be used as a starting point to develop or redesign courses related to data librarianship in the information field.

A Study on the Fatigue Strength of the Welded Joints in Steel Structures(II) (강구조물(鋼構造物)의 용접연결부(鎔接連結部)의 피로강도(疲勞强度)에 관한 연구(研究)(II))

  • Park, Je Seon;Chung, Yeong Wha;Chang, Dong Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.6 no.1
    • /
    • pp.1-11
    • /
    • 1986
  • Welded connectors of the cover plates, the transverse stiffeners of plate girders, and the gusset plates of plate girders or box girders were selected as the objects of study. A simplified method of drawing the S-N curves for these welded joints by a computer program, without direct fatigue tests, was established. The plots on the S-N curve using values from practical fatigue tests were compared with the results of the computer-programming method. The results of these studies are as follows. The fatigue life obtained by the calculation method was somewhat shorter than the practical fatigue life from the actual tests. The latter values included both the life $N_c$ to occurrence of the initial crack $a_i$ and the life $N_p$ of propagation to the critical crack, whereas the former values included only the life $N_p$; therefore, these results should be considered justifiable. Since the difference between the two results was not significant, the results of the calculation method are on the conservative side when the safety of the structures is considered. Consequently, the calculation method should be applicable to the fatigue fracture design of structures. For reference, the same fatigue tests were performed with specimens of 3 pieces in each case made of the low-strength steel SS 41. The results were unexpected, showing that the fatigue strength was lower in the case of the low-strength steel. That is, in the case of the cover plate, the fatigue strength became gradually higher than that of the high-strength steel SWS 50 when the maximum testing stress was higher than $14kg/mm^2$. In addition, in the case of the transverse stiffener, the fatigue strength became rapidly higher than that of SWS 50 when the maximum testing stress was lower than $31kg/mm^2$. More such fatigue tests should be performed for more reliable results.
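The abstract does not spell out the calculation behind the programmed S-N curves. As a rough illustration of how a propagation life $N_p$ can be computed without direct fatigue tests, the sketch below numerically integrates Paris' law for crack growth; the crack sizes, material constants C and m, and geometry factor F are illustrative assumptions, not the paper's values.

```python
import math

def propagation_life(delta_sigma, a_i=0.5e-3, a_c=10e-3,
                     C=2.0e-12, m=3.0, F=1.12, steps=10000):
    """Estimate crack-propagation life N_p by integrating Paris' law
    da/dN = C * (dK)^m with dK = F * delta_sigma * sqrt(pi * a).
    All constants here are illustrative, not the paper's values."""
    n_p = 0.0
    da = (a_c - a_i) / steps
    a = a_i
    for _ in range(steps):
        # Stress intensity range at the midpoint of this crack increment (MPa*sqrt(m))
        dK = F * delta_sigma * math.sqrt(math.pi * (a + da / 2))
        n_p += da / (C * dK ** m)  # cycles consumed growing the crack by da
        a += da
    return n_p

# Trace out an S-N curve: life at several stress ranges (MPa)
for s in (80, 120, 160, 200):
    print(f"delta_sigma = {s:3d} MPa  ->  N_p ~ {propagation_life(s):.3e} cycles")
```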


Evaluation of Present Curriculum for Development of Dept. of Radiological Science Curriculum (방사선학과 교육과정 개선을 위한 현 교육과정 평가)

  • Kang, Se-Sik;Kim, Chang-Soo;Choi, Seok-Yoon;Ko, Seong-Jin;Kim, Jung-Hoon
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.5
    • /
    • pp.242-251
    • /
    • 2011
  • A curriculum demands change as times and society evolve. At this point where changes are required, this study analyzes and evaluates current curricula as a preceding stage toward their enhancement and improvement. The research was based on a survey of groups of education experts and of 19 universities with current curricula in radiologic science, and on their references. The study focused on the scope of work of radiologic technologists, the change of college systems, academic research on radiologic science, and the improvement and future of the radiologic science field from the perspective of globalization and the digital era. In terms of work scope, angiography and interventional radiology at 6 to 8 schools, fluoroscopy at 4 schools, ultrasound and its practices at 6 schools, and magnetic resonance imaging at 2 schools were found to be unestablished. The basic medical subjects (human physiology, human anatomy and its practices, and medical terminology) were set up at most schools; however, pathology at 5 schools, image anatomy at 6 schools, and clinical medicine at 11 schools had not yet been opened. Among the basic science and engineering subjects, general biology and its practices at 11 schools, general physics and its practices at 14 schools, and general chemistry and its practices at 8 schools were established, which is about half of the total number of schools. Only 4-5 schools established digital subjects such as health computing, computer programming, and PACS, which are basic major subjects. In order to support academic improvement in radiologic science, digitalized and globalized education, and a basis for future-oriented education in the field, including an expanded scope of work, the curricula opened and run at each school need to be standardized. Therefore, the introduction of a certification system for radiologic science education courses is suggested.

Exploring Pre-Service Earth Science Teachers' Understandings of Computational Thinking (지구과학 예비교사들의 컴퓨팅 사고에 대한 인식 탐색)

  • Young Shin Park;Ki Rak Park
    • Journal of the Korean earth science society
    • /
    • v.45 no.3
    • /
    • pp.260-276
    • /
    • 2024
  • The purpose of this study is to explore whether pre-service teachers majoring in earth science improve their perception of computational thinking through STEAM classes focused on engineering-based wave power plants. The STEAM class involved designing the most efficient wave power plant model. A survey on computational thinking practices, developed from previous research, was administered to 15 earth science pre-service teachers to gauge their understanding of computational thinking. Each group developed an efficient wave power plant model based on the scientific principle of turbine operation using waves. The activities included problem recognition (problem solving), coding (coding and programming), creating a wave power plant model using a 3D printer (designing and creating a model), and evaluating the output to correct errors (debugging). The pre-service teachers showed a high level of recognition of computational thinking practices, particularly in "logical thinking," with the top five practices out of 14 averaging five points each. However, participants lacked a clear understanding of certain computational thinking practices such as abstraction, problem decomposition, and using big data, and their comprehension of these decreased after the STEAM lesson. Although there was a significant reduction in the misconception that computational thinking is "playing online games" (from 4.06 to 0.86), some participants still equated it with "thinking like a computer" and "using a computer to do calculations". The study found slight improvements in "problem solving" (3.73 to 4.33), "pattern recognition" (3.53 to 3.66), and "best tool selection" (4.26 to 4.66). To enhance computational thinking skills, a practice-oriented curriculum should be offered. Additional STEAM classes on diverse topics could lead to a significant improvement in computational thinking practices. Therefore, establishing an educational curriculum for multi-situational learning is essential.

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which can easily generate the initial population of the GA. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the region to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in reverse logistics networks. Of these centers and secondary markets, only one collection center, remanufacturing center, redistribution center, and secondary market should be opened in the network. Some assumptions are considered for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost is incurred by transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (the opening costs of collection centers 1, 2, and 3 being 10.5, 12.1, and 8.9, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In the numerical experiments, the proposed HGA and a conventional competing approach are compared with each other using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; that GA approach lacks a local search technique such as the IHCM used in the proposed HGA approach. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC were programmed in Visual Basic Version 6.0, and the computing environment was an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM on Windows XP. The parameters used in the HGA and GA approaches are a total number of generations of 10,000, population size 20, crossover rate 0.5, mutation rate 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs were made to eliminate the randomness of the searches of the HGA and GA approaches.
With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two types of the RLNCC, the experimental results show that the HGA has significantly better performance in terms of the optimal solution than the GA, though the GA is slightly quicker than the HGA in terms of CPU time. Finally, the proposed HGA approach proved more efficient than the conventional GA approach on both types of the RLNCC, since the former has a local search process in addition to the GA search process, while the latter has the GA search process alone. In a future study, much larger RLNCC instances will be tested to assess the robustness of our approach.
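The abstract names the HGA's ingredients (two-point crossover, random mutation, elitist selection in the enlarged sampling space, and an iterative hill climbing step inside the GA loop) closely enough to sketch. The minimal Python sketch below follows that structure for a toy "open exactly one facility per stage" problem; the cost data are invented, and an integer gene per stage is used instead of the paper's 0-1 bit string for brevity.

```python
import random

# Illustrative data: 3 candidate facilities per stage, 4 stages
# (collection, remanufacturing, redistribution, secondary market).
FIXED = [[10.5, 12.1, 8.9], [20.0, 18.5, 22.3], [9.7, 11.2, 10.1], [5.0, 6.4, 4.8]]
# Illustrative transport + handling cost of choosing facility j at stage s.
VAR = [[3.1, 2.4, 4.0], [5.2, 6.1, 4.7], [2.9, 3.3, 2.6], [1.8, 1.5, 2.2]]

def cost(chrom):
    # chrom[s] = index of the single facility opened at stage s
    return sum(FIXED[s][j] + VAR[s][j] for s, j in enumerate(chrom))

def two_point_crossover(p1, p2):
    i, j = sorted(random.sample(range(len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:]

def mutate(chrom, rate=0.1):
    return [random.randrange(3) if random.random() < rate else g for g in chrom]

def hill_climb(chrom):
    # Iterative hill climbing: try every single-gene change, keep improvements.
    best, improved = chrom[:], True
    while improved:
        improved = False
        for s in range(len(best)):
            for j in range(3):
                cand = best[:s] + [j] + best[s + 1:]
                if cost(cand) < cost(best):
                    best, improved = cand, True
    return best

def hga(pop_size=20, generations=200):
    pop = [[random.randrange(3) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        offspring = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            child = mutate(two_point_crossover(p1, p2))
            offspring.append(hill_climb(child))  # local search inside the GA loop
        # Elitist selection in the enlarged sampling space (parents + offspring).
        pop = sorted(pop + offspring, key=cost)[:pop_size]
    return pop[0], cost(pop[0])

best, c = hga()
print("opened facility per stage:", best, "total cost:", round(c, 1))
```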

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and also differ from the job classification system of the 'SQF (Sectoral Qualifications Framework)' proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on job posting information from major job sites and the NCS (National Competency Standards). For this purpose, an association analysis between the occupations of major job sites is conducted, and association rules between the SQF and occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. We then identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships between the data, we keep only the job postings published on multiple job sites at the same time and delete the remaining job information. Next, we map the job classification systems across job sites using the association rules derived from the association analysis (see the sketch after this abstract). We complete the mapping between these market classifications, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job postings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously published on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent pattern mining method. Based on the 800 association rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, the job system related to IT consulting, computer systems, networks, and security, consisted of three secondary classifications, five tertiary classifications, and five quaternary classifications. The second primary classification, the job system related to databases and system operation, consisted of three secondary classifications, three tertiary classifications, and four quaternary classifications. The third primary classification, web planning, web programming, web design, and games, was composed of four secondary classifications, nine tertiary classifications, and two quaternary classifications. The last primary classification, the job system related to ICT management and computer and communication engineering technology, consisted of three secondary classifications and six tertiary classifications. In particular, the new job classification system has relatively flexible classification depths, unlike other existing classification systems. WORKNET divides jobs into three classification levels, while JOBKOREA and saramin divide jobs into two levels and subdivide them into keyword form. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In the new classification system, some jobs stop at the second classification level while others are subdivided down to the fourth level. This reflects the idea that not all jobs can be broken down into the same number of steps. We also combined association rules derived from the collected market data with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing job classification systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by attempting a mapping between occupations based on data, through association analysis between occupations, rather than on the intuition of a few experts. However, this study has a limitation in that it cannot fully reflect market demand that changes over time, because the data were collected at a single point in time. As market demands change over time, including seasonal factors and the timing of major corporate public recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries given its success in the SW industry.
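As a rough illustration of the frequent-pattern-mining step, the sketch below derives pairwise association rules with support and confidence thresholds, in the spirit of Apriori, from a handful of hypothetical multi-site postings; the job labels and thresholds are invented, and only pair rules are mined for brevity.

```python
from itertools import combinations
from collections import Counter

# Hypothetical postings: each is the set of job-category labels that the
# same posting received on different job sites.
postings = [
    {"web programming", "backend developer"},
    {"web programming", "backend developer", "java developer"},
    {"db administration", "dba"},
    {"web programming", "java developer"},
    {"db administration", "dba", "system operation"},
    {"backend developer", "java developer"},
]

MIN_SUP, MIN_CONF = 2, 0.6  # absolute support count, confidence threshold

item_count = Counter(label for p in postings for label in p)
pair_count = Counter(frozenset(c) for p in postings
                     for c in combinations(sorted(p), 2))

# Keep frequent pairs (Apriori pruning: both items must themselves be frequent),
# then emit rules in both directions that pass the confidence threshold.
for pair, sup in pair_count.items():
    a, b = tuple(pair)
    if sup < MIN_SUP or item_count[a] < MIN_SUP or item_count[b] < MIN_SUP:
        continue
    for x, y in ((a, b), (b, a)):
        conf = sup / item_count[x]
        if conf >= MIN_CONF:
            print(f"{x} -> {y}  (support={sup}, confidence={conf:.2f})")
```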

Perceptions of Information Technology Competencies among Gifted and Non-gifted High School Students (영재와 평재 고등학생의 IT 역량에 대한 인식)

  • Shin, Min;Ahn, Doehee
    • Journal of Gifted/Talented Education
    • /
    • v.25 no.2
    • /
    • pp.339-358
    • /
    • 2015
  • This study examined perceptions of information technology (IT) competencies among gifted and non-gifted students (i.e., information science high school students and technical high school students). Of the 370 high school students surveyed from 3 high schools (a gifted academy, an information science high school, and a technical high school) in three metropolitan cities in Korea, 351 students completed and returned the questionnaires, yielding a total response rate of 94.86%. High school students recognized IT professional competence as most important when recruiting IT employees, and they considered practice-oriented education the most important need for improving their IT skills. In addition, gifted academy students and information science high school students rated basic software skills as the most important sub-factor of IT core competencies, whereas technical high school students responded that network and security capabilities were the most important. Finally, the most appropriate training courses for enhancing IT competencies were perceived differently among gifted and non-gifted students. Gifted academy students responded that an 'algorithms' course was most needed for enhancing IT competencies, whereas information science high school students responded that 'data structures' and 'computer architecture' were most needed. Technical high school students responded that a 'programming language' course was most needed. Results are discussed in relation to IT corporate and school settings.

User Centered Interface Design of Web-based Attention Testing Tools: Inhibition of Return(IOR) and Graphic UI (웹 기반 주의력 검사의 사용자 인터페이스 설계: 회귀억제 과제와 그래픽 UI를 중심으로)

  • Kwahk, Ji-Eun;Kwak, Ho-Wan
    • Korean Journal of Cognitive Science
    • /
    • v.19 no.4
    • /
    • pp.331-367
    • /
    • 2008
  • This study aims to validate a web-based neuropsychological testing tool developed by Kwak (2007) and to suggest solutions to potential problems that can deteriorate its validity. When it targets a wider range of subjects, a web-based neuropsychological testing tool is challenged by high drop-out rates, lack of motivation, lack of interactivity with the experimenter, fear of computers, etc. As a possible solution to these threats, this study redesigns the user interface of a web-based attention testing tool through three phases of study. In Study 1, an extensive analysis of Kwak's (2007) attention testing tool was conducted to identify potential usability problems. The Heuristic Walkthrough (HW) method was used by three usability experts to review various design features. As a result, many problems were found throughout the tool. The findings showed that the design of the instructions, user information survey forms, task screens, results screens, etc. did not conform to the needs of users and their tasks. In Study 2, 11 guidelines for the design of web-based attention testing tools were established based on the findings from Study 1. The guidelines were used to optimize the design and organization of the tool so that it fits user and task needs. The resulting new design alternative was then implemented as a working prototype using the Java programming language. In Study 3, a comparative study was conducted to demonstrate the superiority of the new design of the attention testing tool (the graphic style tool) over the existing design (the text style tool). A total of 60 subjects participated in user testing sessions where their error frequency, error patterns, and subjective satisfaction were measured through performance observation and questionnaires. Through the task performance measurement, a number of user errors of various types were observed in the existing text style tool. The questionnaire results also supported the new graphic style tool: users rated it higher than the existing text style tool in terms of overall satisfaction, screen design, terms and system information, ease of learning, and system performance.


Social Tagging-based Recommendation Platform for Patented Technology Transfer (특허의 기술이전 활성화를 위한 소셜 태깅기반 지적재산권 추천플랫폼)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.53-77
    • /
    • 2015
  • Korea has witnessed an increasing number of domestic patent applications, but a majority of them are not utilized to their maximum potential and end up becoming obsolete. According to the 2012 National Congress' Inspection of Administration, about 73% of patents possessed by universities and public-funded research institutions failed to lead to the creation of social value and remain latent. One of the main problems is that patent creators, such as individual researchers, universities, or research institutions, lack the ability to commercialize their patents into viable businesses with the enterprises that need them. On the enterprise side, it is hard to find appropriate patents through keyword searches alone. This study proposes a patent recommendation system that can identify and recommend, in an easier and more efficient manner, intellectual property rights appropriate to a user's fields of interest among a rapidly accumulating number of patent assets. The proposed system extracts core content and technology sectors from the existing pool of patents and combines them with secondary social knowledge derived from tag information created by users in order to find the best patents to recommend. That is, in an early stage where no tag information has accumulated, the recommendation is done by utilizing content characteristics, which are identified through an analysis of keywords contained in attributes such as 'Title of Invention' and 'Claim' among the various patent attributes. To do this, the suggested system extracts only nouns from patents and assigns a weight to each noun according to its importance across all patents by performing a TF-IDF analysis. It then finds patents whose weights are similar to those of the patents preferred by a user. In this paper, this similarity is called the 'Domain Similarity'. Next, the suggested system extracts technology-sector characteristics from patent documents by analyzing the international technology classification code (International Patent Classification, IPC). Every patent has more than one IPC code, and each user can attach more than one tag to the patents they like; thus, each user has a set of IPC codes drawn from their tagged patents. The suggested system uses this IPC set to analyze the technology preference of each user and find well-fitted patents. To do this, it calculates a 'Technology Similarity' between the user's set of IPC codes and the IPC codes contained in all other patents. After that, when the tag information of multiple users has accumulated, the system expands the recommendations by considering other users' social tag information relating to the patents tagged by the concerned user. The similarity between the tag information of the patents preferred by a user and that of other patents is called the 'Social Similarity' in this paper. Lastly, a 'Total Similarity' is calculated by adding these three different similarities, and the patents with the highest 'Total Similarity' are recommended to each user. The suggested system was applied to a total of 1,638 Korean patents obtained from the Korea Industrial Property Rights Information Service (KIPRIS) run by the Korea Intellectual Property Office. However, since this original dataset does not include tag information, we created virtual tag information and used it to construct a semi-virtual dataset.
The proposed recommendation algorithm was implemented in the Java programming language, and a prototype graphical user interface was also designed for this study. As the proposed system has no dependent variables and uses virtual data, it is impossible to verify the recommendation system with a statistical method. Therefore, the study uses a scenario test method to verify the operational feasibility and recommendation effectiveness of the system. The results of this study are expected to improve the chances of matching promising patents with the businesses best suited to them. Users' experiential knowledge can thus be accumulated, managed, and utilized on top of the as-is patent system, which currently manages only standardized patent information.
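The abstract names the three similarity components and their sum but not the exact formulas. A minimal sketch of that pipeline, assuming cosine similarity over TF-IDF vectors for the Domain Similarity and Jaccard overlap for the Technology and Social Similarities (both assumptions), might look as follows; the patents, IPC codes, and tags are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-dataset: 'Title of Invention' + 'Claim' text,
# IPC code sets, and social tag sets for three patents.
texts = [
    "method for wireless battery charging of mobile devices",
    "apparatus for fast wireless power transfer to handsets",
    "bio sensor for continuous glucose monitoring",
]
ipc = [{"H02J", "H04B"}, {"H02J"}, {"A61B", "G01N"}]
tags = [{"charging", "mobile"}, {"charging", "power"}, {"healthcare"}]

def jaccard(a, b):
    # Set-overlap stand-in for the paper's unspecified similarity measures.
    return len(a & b) / len(a | b) if a | b else 0.0

# Domain Similarity: TF-IDF term weights over the patent text.
tfidf = TfidfVectorizer().fit_transform(texts)
domain = cosine_similarity(tfidf)

preferred = 0  # index of a patent the user has tagged/preferred
for i in range(1, len(texts)):
    tech = jaccard(ipc[preferred], ipc[i])      # Technology Similarity (IPC sets)
    social = jaccard(tags[preferred], tags[i])  # Social Similarity (tag sets)
    total = domain[preferred, i] + tech + social  # Total = sum of the three
    print(f"patent {i}: domain={domain[preferred, i]:.2f} "
          f"tech={tech:.2f} social={social:.2f} total={total:.2f}")
```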

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. Big data analysis will thus become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually falling and data analysis technology is spreading; as a result, big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, and much attention is focused on using text data. The emergence of new platforms and techniques on the web has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is evaluated as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents. In addition, it suffers a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method can be used for topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost through the ability to analyze documents in each location or place without combining the documents to be analyzed. However, despite many advantages, this method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: in each sub-unit, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced entire document cluster (RGS, Reduced Global Set) that consists of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. In addition, through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling over the entire collection. We also proposed a reasonable method for comparing the results of the two methods.
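The abstract outlines the mechanics (split the global set into local sets, build an RGS from delegate documents, run topic modeling per set, and map local topics to RGS topics) without implementation details. The sketch below is one hedged reading of that procedure using scikit-learn's LDA and a Hungarian assignment over topic-word similarities; the tiny corpus, the choice of each local set's first document as its delegate, and the topic counts are all illustrative assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.optimize import linear_sum_assignment

docs = [
    "stock market price index falls", "market investors trade stock shares",
    "soccer team wins league match", "league match ends with late goal",
    "new phone chip boosts battery", "chip makers ship faster processors",
    "coach praises soccer players", "price inflation hits stock market",
]
K = 2  # topics per model (illustrative)

# Shared vocabulary so topic-word rows are comparable across models.
vec = CountVectorizer().fit(docs)
local_sets = [docs[:4], docs[4:]]            # two local sets
delegates = [s[0] for s in local_sets]       # naive RGS: one delegate per set

def fit_topics(texts):
    lda = LatentDirichletAllocation(n_components=K, random_state=0)
    lda.fit(vec.transform(texts))
    t = lda.components_
    return t / t.sum(axis=1, keepdims=True)  # normalized topic-word rows

global_topics = fit_topics(delegates)        # topics of the reduced global set
for i, local in enumerate(local_sets):
    local_topics = fit_topics(local)
    # Cosine similarity between every (local, global) topic pair.
    sim = (local_topics @ global_topics.T) / (
        np.linalg.norm(local_topics, axis=1)[:, None]
        * np.linalg.norm(global_topics, axis=1)[None, :])
    # One-to-one mapping maximizing total similarity (Hungarian method).
    rows, cols = linear_sum_assignment(-sim)
    for r, c in zip(rows, cols):
        print(f"local set {i}, topic {r} -> global topic {c} (sim={sim[r, c]:.2f})")
```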