• Title/Summary/Keyword: Education of Computer Programming


A Study on "Wittgenstein" Album (비트겐슈타인(Wittgenstein)앨범에 관한 고찰)

  • Kim, Jun-Soo; Cho, Tae-seon
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.3 / pp.374-380 / 2021
  • The band Wittgenstein was a team organized in a relatively band-like form, following Shin Hae-Chul's previous big band "Next." The album, which features Shin Hae-Chul's distinctive lyrics and a specific concept, is similar to the Next albums in that respect; it differs, however, in its sound, which properly fuses sampling-based work with computer music. It is a low-budget home-recording album produced at a total cost of 3 million won. Shin Hae-Chul was in charge of the main vocals and programming, and all of the work was done jointly by the band members; on this album, Shin Hae-Chul focused on teamwork rather than on producing his own music. The low budget could have constrained the music production, but the album deserves to be valued highly as a novel attempt. Musicians constantly face a conflict between the music they love and the music that is popular, yet without creative effort there is no evolution or development in the music industry. It is clear that constant change keeps developing musical ability, which in turn leads to the development of Korean pop music.

Item Analysis of information-related foundation in the Japanese National Center Test for University Admissions (일본 대학입시센터시험 정보관계기초 문항 분석)

  • Hahm, Seung-Yeon
    • 대한공업교육학회지 / v.35 no.2 / pp.182-203 / 2010
  • The purpose of this study was to analyze information-related subjects in the industry sections of the college scholastic ability tests of Korea and Japan. The information-related foundation, data-technology foundation, and programming items in the vocational-education (industry) sections of both tests were compared, and implications for item development in the Korean college scholastic ability test are suggested. Based on the results of the study, the following recommendations were made for a new direction in item development in Korea. First, the information-related foundation section of Japan's National Center Test for University Admissions consists of basic informatics content common to the agricultural, industrial, commercial, and other vocational departments; similarly, it would be worth introducing a 'computer-related foundation' subject consisting of content common to several departments in the Korean test. Second, the contexts of sub-items should be diversified so that they differ from the main-item context, and set-type items with varied contexts should be introduced in the Korean test. Third, the Japanese test uses several item types, such as selecting answers from a multi-answer group; short-answer, completion, and supply-type items should likewise be introduced in the Korean test.


Evaluation of Present Curriculum for Development of Dept. of Radiological Science Curriculum (방사선학과 교육과정 개선을 위한 현 교육과정 평가)

  • Kang, Se-Sik; Kim, Chang-Soo; Choi, Seok-Yoon; Ko, Seong-Jin; Kim, Jung-Hoon
    • The Journal of the Korea Contents Association / v.11 no.5 / pp.242-251 / 2011
  • A curriculum demands change as the times and society evolve. At this point where changes are required, this study analyzes and evaluates the current curricula as a preliminary step toward enhancing and improving them. The research was based on a survey of education expert groups and of the current radiological science curricula of 19 universities, together with related references. The study focused on the scope of work of radiologic technologists, changes in the college system, academic research on radiological science, and the improvement and future of the field from the perspective of globalization and the digital era. In terms of work scope, angiography and interventional radiology were not established at 6 to 8 schools, fluoroscopy at 4 schools, ultrasound and its practice at 6 schools, and magnetic resonance imaging at 2 schools. The basic medical subjects of human physiology, human anatomy and its practice, and medical terminology were offered at most schools; however, pathology was not yet offered at 5 schools, image anatomy at 6 schools, and clinical medicine at 11 schools. Among the basic science and engineering subjects, general biology and its practice were established at 11 schools, general physics and its practice at 14 schools, and general chemistry and its practice at 8 schools, about half of the total number of schools. Only 4-5 schools had established digital subjects such as health computing, computer programming, and PACS, which are basic major subjects. In order to advance radiological science academically, to provide digitalized and globalized education, and to lay a future-oriented foundation for the field, including an expanded scope of work, the curricula opened and run at each school need to be standardized. Accordingly, the introduction of a certification system for radiological science education courses is suggested.

Development of Menu Labeling System (MLS) Using Nutri-API (Nutrition Analysis Application Programming Interface) (영양분석 API를 이용한 메뉴 라벨링 시스템 (MLS) 개발)

  • Hong, Soon-Myung; Cho, Jee-Ye; Park, Yu-Jeong; Kim, Min-Chan; Park, Hye-Kyung; Lee, Eun-Ju; Kim, Jong-Wook; Kwon, Kwang-Il; Kim, Jee-Young
    • Journal of Nutrition and Health / v.43 no.2 / pp.197-206 / 2010
  • Nowadays, people eat outside the home more and more frequently. Menu labeling can help people make more informed decisions about the foods they eat and help them maintain a healthy diet. This study was conducted to develop a Menu Labeling System (MLS) using a Nutri-API (Nutrition Analysis Application Programming Interface). The system offers a convenient user interface and presents menu labeling information in a printable format. It provides useful functions such as entering nutrient information for new foods and menus, a food semantic retrieval service, menu planning with subgroups, nutrient analysis, and printing. It reports nutritive values together with nutrient information and the ratio of the three major energy nutrients. The MLS can analyze nutrients for a whole menu and for each subgroup, display comparisons with Dietary Reference Intakes (DRIs) and % Daily Nutrient Values, and provide six different menu labeling formats with nutrient information. It can therefore be used not only by the general public but also by dietitians and restaurant managers in charge of menu planning, as well as by experts in the field of food and nutrition. The Menu Labeling System (MLS) is expected to be useful for menu planning, nutrition education, nutrition counseling, and professional meal management.
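
As an aside on the energy-ratio figure mentioned in the abstract above: the share of energy from the three major energy nutrients is conventionally computed with energy factors of 4, 4, and 9 kcal per gram. The following is a minimal, hypothetical Python sketch of that calculation only; the paper's actual Nutri-API interface is not described here, and the function and field names are invented.

```python
# Minimal sketch: share of energy from carbohydrate, protein, and fat for a menu item,
# using the standard energy factors of 4, 4, and 9 kcal per gram. Function and field
# names are illustrative only; the paper's Nutri-API interface is not shown here.

ENERGY_FACTORS = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}  # kcal per gram

def energy_nutrient_ratio(grams: dict) -> dict:
    """Return each macronutrient's share of total energy as a percentage."""
    kcal = {n: grams.get(n, 0.0) * f for n, f in ENERGY_FACTORS.items()}
    total = sum(kcal.values())
    if total == 0:
        return {n: 0.0 for n in ENERGY_FACTORS}
    return {n: round(100 * v / total, 1) for n, v in kcal.items()}

# Example menu item: 60 g carbohydrate, 20 g protein, 15 g fat
print(energy_nutrient_ratio({"carbohydrate": 60, "protein": 20, "fat": 15}))
# -> {'carbohydrate': 52.7, 'protein': 17.6, 'fat': 29.7}
```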

Research about CAVE Practical Use Way Through Culture Content's Restoration Process that Utilize CAVE (가상현실시스템(CAVE)을 활용한 문화 Content의 복원 과정을 통한 CAVE활용 방안에 대한 연구)

  • Kim, Tae-Yul; Ryu, Seuc-Ho; Hur, Yung-Ju
    • Journal of Korea Game Society / v.4 no.3 / pp.11-20 / 2004
  • The virtual reality we saw in the movies of the '80s and '90s is drawing near, based on the rapid progress of science and computer technology. The development of various virtual reality systems (such as VRML, HMD, FishTank, Wall type, CAVE type, and so on) and their advancement make possible the embodiment of virtual reality that gives a greater sense of the real. Virtual reality is so immersive that it makes people feel as if they are inside the environment and lets them manipulate things that are hard to experience at first hand in reality. It can be applied to fields such as education, high-level programming, remote control, surface exploration of remote satellites, analysis of exploration data, scientific visualization, and so on. Concrete examples include tank and aeroplane operation training, furniture layout design, surgical practice, games, and so on. In these virtual reality systems, the actions of the human participant and the virtual workspace are connected to hardware that stimulates the five senses appropriately to create a sense of immersion. There is still a long way to go, but with further study and effort it will before long be possible to experience in virtual reality the same sensations as in reality. In this thesis, the basic definition, general concepts, and kinds of virtual reality are discussed. In particular, the highly immersive CAVE type is analyzed, and methods of VR programming and modeling in a virtual reality system are suggested through the restoration process of Kyongbok Palace (as content in its original cultural form), which was produced by KISTI (Korea Institute of Science and Technology Information) in 2003, following the design process in a virtual reality system. Through these processes, the utilization of immersive virtual reality systems is discussed, along with how the CAVE-type virtual reality system can currently be put to use. In closing, the problems exposed in the process of restoring the cultural property are described, and a plan for utilizing the virtual reality system is suggested.


A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon; Kim, Dongsung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than human performance in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze the direction of AI development, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and as a result the technologies and services that utilize them have increased rapidly; this has been identified as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes a great deal to open source software, developed by major global companies, that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which are developed through the online collaboration of many parties. The study searched and collected a list of major AI-related projects created on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year was below 100 until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects then grew rapidly in 2016 (2,559 projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained at the top in all years, implying that related OSS has been developed continuously. Until 2015 the programming languages Python, C++, and Java were among the ten most frequent topics, but after 2016 programming languages other than Python disappeared from the top ten; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, show high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging appeared at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed to apply AI technology in the medical field.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. Otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with only slight changes in the ranks of convolutional neural networks and reinforcement learning. The trend of technology development was examined using both the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. It is also noteworthy that, although deep learning showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics mentioned above. Based on these results, it is possible to identify the fields in which AI technologies are being actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
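
For readers unfamiliar with the measures used above, the following is a minimal, hypothetical Python sketch (using networkx) of how topic appearance frequency and degree centrality on a topic co-occurrence network can be computed; the project-topic lists are invented placeholders, not the study's Github data.

```python
# Minimal sketch of the appearance-frequency vs. degree-centrality comparison
# described above, using networkx. The project-topic lists are invented
# placeholders, not the study's Github data.
from collections import Counter
from itertools import combinations

import networkx as nx

projects = [
    ["machine-learning", "deep-learning", "tensorflow", "python"],
    ["machine-learning", "computer-vision", "python"],
    ["natural-language-processing", "deep-learning", "python"],
]

# Appearance frequency: how often each topic is attached to a project.
frequency = Counter(topic for topics in projects for topic in topics)

# Topic network: topics that co-occur on the same project are connected.
graph = nx.Graph()
for topics in projects:
    graph.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(graph)

for topic, count in frequency.most_common():
    print(f"{topic:30s} frequency={count}  degree_centrality={centrality[topic]:.2f}")
```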

Exploring Pre-Service Earth Science Teachers' Understandings of Computational Thinking (지구과학 예비교사들의 컴퓨팅 사고에 대한 인식 탐색)

  • Young Shin Park; Ki Rak Park
    • Journal of the Korean earth science society / v.45 no.3 / pp.260-276 / 2024
  • The purpose of this study is to explore whether pre-service teachers majoring in earth science improve their perception of computational thinking through STEAM classes focused on engineering-based wave power plants. The STEAM class involved designing the most efficient wave power plant model. A survey on computational thinking practices, developed from previous research, was administered to 15 earth science pre-service teachers to gauge their understanding of computational thinking. Each group developed an efficient wave power plant model based on the scientific principle of turbine operation using waves. The activities included problem recognition (problem solving), coding (coding and programming), creating a wave power plant model using a 3D printer (design and create model), and evaluating the output to correct errors (debugging). The pre-service teachers showed a high level of recognition of computational thinking practices, particularly in "logical thinking," with the top five of the 14 practices averaging five points each. However, participants lacked a clear understanding of certain computational thinking practices such as abstraction, problem decomposition, and using big data, and their comprehension of these decreased after the STEAM lesson. Although there was a significant reduction in the misconception that computational thinking is "playing online games" (from 4.06 to 0.86), some participants still equated it with "thinking like a computer" and "using a computer to do calculations". The study found slight improvements in "problem solving" (3.73 to 4.33), "pattern recognition" (3.53 to 3.66), and "best tool selection" (4.26 to 4.66). To enhance computational thinking skills, a practice-oriented curriculum should be offered, and additional STEAM classes on diverse topics could lead to a significant improvement in computational thinking practices. Therefore, establishing an educational curriculum for multi-situational learning is essential.

Perceptions of Information Technology Competencies among Gifted and Non-gifted High School Students (영재와 평재 고등학생의 IT 역량에 대한 인식)

  • Shin, Min; Ahn, Doehee
    • Journal of Gifted/Talented Education / v.25 no.2 / pp.339-358 / 2015
  • This study examined perceptions of information technology (IT) competencies among gifted and non-gifted students (i.e., information science high school students and technical high school students). Of the 370 high school students surveyed from three high schools (a gifted academy, an information science high school, and a technical high school) in three metropolitan cities in Korea, 351 completed and returned the questionnaires, yielding a response rate of 94.86%. The students regarded IT professional competence as the most important factor when recruiting IT employees, and considered practice-oriented education the most needed means of improving their IT skills. In addition, gifted academy students and information science high school students rated basic software skills as the most important sub-factor of IT core competencies, whereas technical high school students rated network and security capabilities as the most needed. Finally, the groups differed in which training courses they saw as most appropriate for enhancing IT competencies: gifted academy students named 'algorithms' as the most needed, information science high school students named 'data structures' and 'computer architecture', and technical high school students named a 'programming language' course. Results are discussed in relation to IT corporate and school settings.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, the growing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing, which means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis is stimulating computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the requesters of the analysis themselves. Along with this, interest in various kinds of unstructured data, and especially in text data, is continually increasing. The emergence of new web-based platforms and techniques brings about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as a very useful technique in that it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and it creates a scalability problem in that processing time grows exponentially with the number of objects analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without combining all of the documents to be analyzed. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each sub-unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of each local topic from the corresponding global topic needs to be measured.
Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegated documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. In an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling over the entire collection, and we propose a reasonable method for comparing the results of the two methods.
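
To make the divide-and-conquer idea concrete, the following is a minimal, hypothetical Python sketch that fits a global LDA model, fits a separate LDA model on each local subset, and maps every local topic to its most similar global topic by cosine similarity over a shared vocabulary. The corpus, the split, and the topic counts are invented; this illustrates the general approach, not the paper's RGS-based procedure.

```python
# Minimal sketch of divide-and-conquer topic modeling: fit a global LDA model,
# fit a separate LDA model on each local subset, and map every local topic to
# its most similar global topic via cosine similarity over a shared vocabulary.
# The corpus, the split, and the topic counts are invented placeholders; this
# is not the paper's RGS-based procedure.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "neural network training on image data",
    "convolutional neural network for image recognition",
    "stock price prediction with time series models",
    "financial time series forecasting methods",
    "topic modeling of news articles",
    "text mining and topic extraction from documents",
    "reinforcement learning for game playing agents",
    "deep reinforcement learning and reward design",
]

# Shared vocabulary so local and global topic-word vectors are directly comparable.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(documents)

global_lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

# Divide: split the collection into local sets and model each one separately.
for local_rows in np.array_split(np.arange(len(documents)), 2):
    local_lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X[local_rows])
    # Map: assign each local topic to the most similar global topic.
    similarity = cosine_similarity(local_lda.components_, global_lda.components_)
    print("local -> global topic mapping:", similarity.argmax(axis=1).tolist())
```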