• Title/Summary/Keyword: Server

Search Result 9,422, Processing Time 0.031 seconds

A Design of Authentication Mechanism for Secure Communication in Smart Factory Environments (스마트 팩토리 환경에서 안전한 통신을 위한 인증 메커니즘 설계)

  • Joong-oh Park
    • Journal of Industrial Convergence
    • /
    • v.22 no.4
    • /
    • pp.1-9
    • /
    • 2024
  • Smart factories are production facilities in which cutting-edge information and communication technologies are fused with manufacturing processes, reflecting rapid advancement and change in the global manufacturing sector. They integrate robotics and automation, the Internet of Things (IoT), and artificial intelligence technologies to maximize production efficiency across diverse manufacturing environments. However, the smart factory environment is exposed to security threats and vulnerabilities through various attack techniques. When security incidents occur in smart factories, they can lead to financial losses, damage to corporate reputation, and even human casualties, so an appropriate security response is essential. This paper therefore proposes a security authentication mechanism for safe communication in the smart factory environment. The components of the proposed mechanism are smart devices, an internal operation management system, an authentication system, and a cloud storage server. The smart device registration process, the authentication procedure, and the anomaly detection and update procedures were designed in detail. The safety of the proposed mechanism was analyzed, and a performance comparison with existing authentication mechanisms confirmed an efficiency improvement of approximately 8%. Finally, the paper presents directions for future research on lightweight protocols and security strategies for applying the proposed technology.
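
The registration and authentication steps described above can be illustrated with a minimal challenge-response sketch. The paper's actual message formats and key-management details are not given here, so the names (`register_device`, `prove_identity`) and the HMAC-based design are illustrative assumptions, not the proposed mechanism itself.

```python
import hmac
import hashlib
import os

def register_device(server_keys, device_id, device_key):
    # Registration: the authentication system stores a per-device secret
    # (an assumed stand-in for the paper's registration process).
    server_keys[device_id] = device_key

def prove_identity(device_key, challenge):
    # The smart device answers a server challenge with an HMAC over it.
    return hmac.new(device_key, challenge, hashlib.sha256).digest()

def verify_device(server_keys, device_id, challenge, response):
    # The authentication system recomputes the HMAC and compares it
    # in constant time to resist timing attacks.
    key = server_keys.get(device_id)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A fresh random challenge per session (e.g. `os.urandom(16)`) prevents replay of a previously observed response.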

A School-tailored High School Integrated Science Q&A Chatbot with Sentence-BERT: Development and One-Year Usage Analysis (인공지능 문장 분류 모델 Sentence-BERT 기반 학교 맞춤형 고등학교 통합과학 질문-답변 챗봇 -개발 및 1년간 사용 분석-)

  • Gyeongmo Min;Junehee Yoo
    • Journal of The Korean Association For Science Education
    • /
    • v.44 no.3
    • /
    • pp.231-248
    • /
    • 2024
  • This study developed a chatbot for first-year high school students, employing open-source software and the Korean Sentence-BERT model for AI-powered document classification. The chatbot utilizes the Sentence-BERT model to find the six most similar Q&A pairs to a student's query and presents them in a carousel format. The initial dataset, built from online resources, was refined and expanded based on student feedback and usability throughout the operational period. By the end of the 2023 academic year, the chatbot integrated a total of 30,819 datasets and recorded 3,457 student interactions. Analysis revealed students' inclination to use the chatbot when prompted by teachers during classes and primarily during self-study sessions after school, with an average of 2.1 to 2.2 inquiries per session, mostly via mobile phones. Text mining identified student input terms encompassing not only science-related queries but also aspects of school life such as assessment scope. Topic modeling using BERTopic, based on Sentence-BERT, categorized 88% of student questions into 35 topics, shedding light on common student interests. A year-end survey confirmed the efficacy of the carousel format and the chatbot's role in addressing curiosities beyond integrated science learning objectives. This study underscores the importance of developing chatbots tailored for student use in public education and highlights their educational potential through long-term usage analysis.
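
The retrieval step (finding the six most similar Q&A pairs) can be sketched as cosine-similarity top-k search over sentence embeddings. The embedding vectors here would come from the Korean Sentence-BERT model; the toy vectors and function names below are assumptions for illustration only.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k_pairs(query_vec, qa_vectors, k=6):
    # Return indices of the k stored Q&A pairs most similar to the
    # student's query, as shown in the chatbot's carousel.
    ranked = sorted(range(len(qa_vectors)),
                    key=lambda i: cosine(query_vec, qa_vectors[i]),
                    reverse=True)
    return ranked[:k]
```

In practice a library such as sentence-transformers would produce the embeddings and batch the similarity computation, but the ranking logic is the same.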

Metadata extraction using AI and advanced metadata research for web services (AI를 활용한 메타데이터 추출 및 웹서비스용 메타데이터 고도화 연구)

  • Sung Hwan Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.2
    • /
    • pp.499-503
    • /
    • 2024
  • Broadcasting programs are provided not only through the broadcaster's own channels but also to various media such as Internet replay, OTT, and IPTV services. In such cases it is very important to provide search keywords that represent the characteristics of the content well. Broadcasters mainly rely on manually entering key keywords during the production and archiving processes. This method is insufficient in quantity for securing core metadata and shows limitations when content is recommended and used in other media services. This study supports securing a large amount of metadata by utilizing closed-caption data pre-archived through the DTV closed-captioning server developed at EBS. First, core metadata was automatically extracted by applying Google's natural-language AI technology. Next, as the core research content, a method is proposed for finding core metadata that reflects priorities and content characteristics. To obtain differentiated metadata weights, importance was classified by applying the TF-IDF calculation method, and the experiment yielded successful weight data. When combined with future string-similarity studies, the string metadata obtained in this study can become the basis for securing sophisticated content-recommendation metadata for content services provided to other media.
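
The TF-IDF weighting step mentioned above can be sketched as follows. This is one common TF-IDF formulation; the paper's exact variant and its Korean-text preprocessing are not specified, so treat this as an assumed illustration, not EBS's implementation.

```python
import math
from collections import Counter

def tf_idf(docs):
    # docs: list of token lists (e.g. caption transcripts after tokenization).
    # Returns, per document, a {term: weight} map where rare, frequent-in-doc
    # terms score highest -- these become candidate core metadata keywords.
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({term: (count / total) * math.log(n / df[term])
                        for term, count in tf.items()})
    return weights
```

A term appearing in every caption file gets weight zero (log 1 = 0), which is exactly why TF-IDF separates program-specific keywords from ubiquitous filler words.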

Intelligent Transportation System (ITS) research optimized for autonomous driving using edge computing (엣지 컴퓨팅을 이용하여 자율주행에 최적화된 지능형 교통 시스템 연구(ITS))

  • Sunghyuck Hong
    • Advanced Industrial Science
    • /
    • v.3 no.1
    • /
    • pp.23-29
    • /
    • 2024
  • In this scholarly investigation, the focus is placed on the transformative potential of edge computing in enhancing Intelligent Transportation Systems (ITS) for the facilitation of autonomous driving. The intrinsic capability of edge computing to process voluminous datasets locally and in a real-time manner is identified as paramount in meeting the exigent requirements of autonomous vehicles, encompassing expedited decision-making processes and the bolstering of safety protocols. This inquiry delves into the synergy between edge computing and extant ITS infrastructures, elucidating the manner in which localized data processing can substantially diminish latency, thereby augmenting the responsiveness of autonomous vehicles. Further, the study scrutinizes the deployment of edge servers, an array of sensors, and Vehicle-to-Everything (V2X) communication technologies, positing these elements as constituents of a robust framework designed to support instantaneous traffic management, collision avoidance mechanisms, and the dynamic optimization of vehicular routes. Moreover, this research addresses the principal challenges encountered in the incorporation of edge computing within ITS, including issues related to security, the integration of data, and the scalability of systems. It proffers insights into viable solutions and delineates directions for future scholarly inquiry.

Evaluation of SUV Which was Estimated Using Mini PACS by PET/CT Scanners (PET/CT 장비 별 mini PACS에서 측정한 표준섭취계수(SUV)의 유용성 평가)

  • Park, Seung-Yong;Ko, Hyun-Soo;Kim, Jung-Sun;Jung, Woo-Young
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.2
    • /
    • pp.47-52
    • /
    • 2011
  • Purpose: Facilities use their own server or a mini PACS system for the storage and analysis of PET/CT data. A mini PACS can store scan data as well as measure SUVs. Therefore, this study was performed to confirm whether the SUV measured on a mini PACS equals that measured on the PET/CT workstation. Materials and Methods: In February 2011, 30 patients who underwent $^{18}F$-FDG whole-body PET/CT scans on the Biograph 16, Biograph 40, and Discovery Ste 8 were enrolled. First, using each workstation, the SUV in the liver and in the aorta at the mediastinum level was measured. Second, the SUV was measured by the same method using the mini PACS. Result: The correlation coefficients of the liver SUV between the PET/CT scanner and the mini PACS for the Biograph 16, Biograph 40, and Discovery Ste 8 were 0.99, 0.98, and 0.64, respectively, and the correlation coefficients of the aortic SUV were 0.98, 0.98, and 0.66; all showed positive correlations. The difference in SUV between the Biograph workstations and the mini PACS was not statistically significant at the 5% level of significance. The difference in SUV between the Discovery Ste 8 workstation and the mini PACS was statistically significant at the 5% level of significance. Conclusion: When patients are scanned on different scanners, if the SUV formula in the mini PACS is corrected for each scanner, the mini PACS can be usefully employed to provide consistent quantitative assessment.
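
The agreement statistic reported above is the Pearson correlation coefficient between workstation and mini PACS SUV measurements. A minimal sketch of the computation follows; the sample values in the test are invented for illustration and are not the study's measurements.

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two paired measurement
    # series (e.g. workstation SUVs vs. mini PACS SUVs).
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Values near 1.0 (as for the Biograph scanners, 0.98-0.99) indicate the two systems rank and scale SUVs almost identically; the weaker 0.64-0.66 on the Discovery Ste 8 is what motivates the per-scanner SUV-formula correction proposed in the conclusion.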


Implementation Strategy of Global Framework for Climate Service through Global Initiatives in AgroMeteorology for Agriculture and Food Security Sector (선도적 농림기상 국제협력을 통한 농업과 식량안보분야 전지구기후 서비스체계 구축 전략)

  • Lee, Byong-Lyol;Rossi, Federica;Motha, Raymond;Stefanski, Robert
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.15 no.2
    • /
    • pp.109-117
    • /
    • 2013
  • The Global Framework for Climate Services (GFCS) will guide the development of climate services that link science-based climate information and predictions with climate-risk management and adaptation to climate change. The GFCS structure is made up of five pillars: Observations/Monitoring (OBS), Research/Modeling/Prediction (RES), the Climate Services Information System (CSIS), and the User Interface Platform (UIP), all supplemented by Capacity Development (CD). Corresponding to each GFCS pillar, the Commission for Agricultural Meteorology (CAgM) has been proposing "Global Initiatives in AgroMeteorology" (GIAM) to facilitate the GFCS implementation scheme from the perspective of agrometeorology: the Global AgroMeteorological Outlook System (GAMOS) for OBS, Global AgroMeteorological Pilot Projects (GAMPP) for RES, the Global Federation of AgroMeteorological Societies (GFAMS) for UIP/RES, the next phase of WAMIS for CSIS/UIP, and Global Centers of Research and Excellence in AgroMeteorology (GCREAM) for CD, through which next-generation experts will be trained in a virtuous cycle of human resource development. The World AgroMeteorological Information Service (WAMIS) is a dedicated web server hosting agrometeorological bulletins and advisories from members. CAgM is about to extend this service into a Grid portal that shares computing resources, information, and human resources with user communities as part of the GFCS. To facilitate the sharing of ICT resources, a specialized or dedicated Data Center or Production Center (DCPC) of the WMO Information System for WAMIS is being implemented by the Korea Meteorological Administration. As an information provider, CAgM will supply land-surface information to support the LDAS (Land Data Assimilation System) of the next-generation Earth System. The International Society for Agricultural Meteorology (INSAM) is an Internet marketplace for agrometeorologists. To strengthen INSAM as the UIP for the research community in agrometeorology, CAgM proposed establishing GFAMS. CAgM will also encourage next-generation agrometeorological experts through GCREAM, including graduate programmes, under the framework of GENRI as the governing hub of GIAM. All global initiatives, such as GFAMS, GAMPP, and GAPON, including WAMIS II, would be coordinated under GENRI, primarily targeting GFCS implementation.

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.1-17
    • /
    • 2017
  • A deep learning framework is software designed to help develop deep learning models. Its important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First of all, the convenience of coding is in the order of CNTK, Tensorflow, and Theano. This criterion is based simply on the lengths of the codes; the learning curve and the ease of coding are not the main concern. According to these criteria, Theano was the most difficult to implement with, while CNTK and Tensorflow were somewhat easier; with Tensorflow, weight variables and biases must be defined explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should note, however, that low-level coding is not always bad: it gives flexibility. With low-level coding such as Theano's, one can implement and test any new deep learning model or search method one can think of. Regarding execution speed, there was no meaningful difference among the frameworks. In our experiment, the execution speeds of Theano and Tensorflow were very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning models, the availability of sufficient examples and references matters as well.
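
The computational-graph idea described above (nodes, edges carrying partial derivatives, chain rule) can be shown with a toy reverse-mode automatic differentiation sketch. This is a minimal illustration of the technique, not how Theano, Tensorflow, or CNTK are implemented internally.

```python
class Node:
    # A node in the computational graph: a forward value plus edges to its
    # parents, each edge annotated with the local partial derivative.
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents      # list of (parent_node, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def backward(out):
    # Topologically order the graph, then apply the chain rule in reverse:
    # each node's gradient is the sum over outgoing edges of
    # (local partial derivative) * (downstream gradient).
    order, seen = [], set()
    def visit(node):
        if id(node) in seen:
            return
        seen.add(id(node))
        for parent, _ in node.parents:
            visit(parent)
        order.append(node)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += local * node.grad
```

For z = x*y + x with x = 3, y = 4, calling `backward(z)` yields x.grad = y + 1 = 5 and y.grad = x = 3, exactly what the chain rule gives by hand.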

Implementation of An Automatic Authentication System Based on Patient's Situations and Its Performance Evaluation (환자상황 기반의 자동인증시스템 구축 및 성능평가)

  • Ham, Gyu-Sung;Joo, Su-Chong
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.25-34
    • /
    • 2020
  • In current medical information systems, an environment is constructed in which biometric data generated by IoT devices or medical equipment connected to a patient can be stored on a medical information server and monitored at the same time. In addition, the patient's biometric data, medical information, and personal information are easily accessible from the medical staff's mobile terminal after simple authentication using only an ID and password. However, this way of accessing medical information needs improvement in terms of protecting the patient's personal information, and a fast authentication system is needed for first aid. In this paper, we implemented an automatic authentication system based on the patient's situation and evaluated its performance. The patient's situation was graded into normal and emergency, and it was determined in real time from the patient's biometric data arriving from the ward. If the patient's situation is an emergency, an emergency message including an emergency code is sent to the medical staff's mobile terminal, which then attempts automatic authentication to access the patient's higher-level medical information. Automatic authentication combines user authentication (ID/PW and the emergency code) with mobile terminal authentication (the medical staff member's role, working hours, and work location). After user authentication, mobile terminal authentication proceeds automatically without additional intervention by the medical staff. After completing all authentications, medical staff receive authorization according to their roles and the patient's situation, and can access the patient's graded medical information and personal information through the mobile terminal. We protected the patient's medical information by limiting the medical staff's access according to the patient's situation, and provided automatic authentication without additional intervention in emergency situations. A performance evaluation was carried out to verify the implemented automatic authentication system.
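
The two-stage check described above (user authentication, then automatic terminal authentication against role, working hours, and work location) can be sketched as a rule check. The field names and the specific rules below are illustrative assumptions; the paper's exact policy and grading of authorization levels are not reproduced here.

```python
from datetime import time

def authenticate(user, terminal, emergency_code, now, location):
    # Stage 1: user authentication (ID/PW result plus emergency code).
    if not (user["password_ok"] and emergency_code == user["expected_code"]):
        return None
    # Stage 2: mobile terminal authentication, performed automatically
    # without further staff intervention.
    on_duty = terminal["shift_start"] <= now <= terminal["shift_end"]
    in_ward = location == terminal["assigned_ward"]
    if not (on_duty and in_ward):
        return None
    # Authorization graded by the staff member's role (assumed rule).
    return "full" if terminal["role"] == "doctor" else "graded"
```

The point of the design is that stage 2 consumes only context the terminal already has (role, schedule, location), so in an emergency no extra input is demanded from the medical staff.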

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yamg;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.1 s.11
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information Systems) has shifted to Real-Time Mobile GIS for LBS (Location-Based Services). To offer LBS efficiently, there must be a Real-Time GIS platform that can handle the dynamic status of moving objects, and a location index that can handle the characteristics of location data. Location data can use the same data types as GIS (e.g., point), but the management of location data is very different. Therefore, in this paper, we studied a Real-Time Mobile GIS that uses the HBR-tree to manage a large volume of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and a Real-Time GIS platform. The HBR-tree proposed in this paper is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are performed within the same hash table in the HBR-tree, so it costs less than other tree-based indexes; and since the HBR-tree uses the same search mechanism as the R-tree, location data can be searched quickly. The Real-Time GIS platform consists of a Real-Time GIS engine extended from a main-memory database system, a middleware that transfers spatial and aspatial data to clients and receives location data from them, and a mobile client that runs on mobile devices. Finally, this paper describes the performance evaluation of the HBR-tree and the Real-Time GIS engine, conducted through practical tests of each.
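
The idea of combining a spatial hash (cheap, frequent location updates) with rectangle-based search (R-tree style) can be shown with a toy grid index. This is an assumed simplification for illustration: the actual HBR-tree nests R-tree structure beneath the hash, which this sketch replaces with flat grid buckets.

```python
from collections import defaultdict

class GridLocationIndex:
    # Toy hash-bucketed location index: updates hash into a grid cell,
    # range queries scan only the cells overlapping the query rectangle.
    def __init__(self, cell=10.0):
        self.cell = cell
        self.buckets = defaultdict(dict)   # (cx, cy) -> {obj_id: (x, y)}
        self.where = {}                    # obj_id -> current cell key

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update(self, obj_id, x, y):
        # Frequent updates from a moving object usually stay inside one
        # bucket, so no tree re-balancing is needed.
        old = self.where.get(obj_id)
        key = self._key(x, y)
        if old is not None and old != key:
            del self.buckets[old][obj_id]
        self.buckets[key][obj_id] = (x, y)
        self.where[obj_id] = key

    def range_query(self, x1, y1, x2, y2):
        # Visit only buckets overlapping the query rectangle, then filter.
        out = []
        for cx in range(int(x1 // self.cell), int(x2 // self.cell) + 1):
            for cy in range(int(y1 // self.cell), int(y2 // self.cell) + 1):
                for obj_id, (x, y) in self.buckets[(cx, cy)].items():
                    if x1 <= x <= x2 and y1 <= y <= y2:
                        out.append(obj_id)
        return sorted(out)
```

The update path touches a single dictionary entry, which is the property the HBR-tree exploits to beat purely tree-based indexes under high update rates.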


Probability-based Pre-fetching Method for Multi-level Abstracted Data in Web GIS (웹 지리정보시스템에서 다단계 추상화 데이터의 확률기반 프리페칭 기법)

  • 황병연;박연원;김유성
    • Spatial Information Research
    • /
    • v.11 no.3
    • /
    • pp.261-274
    • /
    • 2003
  • In Web GISs (Geographic Information Systems), an effective probability-based tile pre-fetching algorithm and a collaborative cache-replacement algorithm can reduce the response time for user requests by transferring, in advance, tiles likely to be used, and by determining, from future access probabilities, which tiles should be removed from a client's restricted cache space. Web GISs maintain multi-level abstracted data for quick response to zoom-in and zoom-out queries. However, the previous pre-fetching algorithm applies only to a two-dimensional pre-fetching space and does not consider the expanded pre-fetching space of multi-level abstracted data. In this thesis, a probability-based pre-fetching algorithm for multi-level abstracted data in Web GISs is proposed. The algorithm expands the previous two-dimensional pre-fetching space into a three-dimensional one, so that tiles at upper or lower levels can also be pre-fetched. We evaluated the effect of the proposed pre-fetching algorithm by simulation. The experimental results show that the response time for user requests improved by 1.8% to 21.6% on average. Consequently, in Web GISs with multi-level abstracted data, the proposed pre-fetching algorithm and the collaborative cache-replacement algorithm can substantially reduce the response time for user requests.
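
The three-dimensional pre-fetching idea above can be sketched by keying tiles on (x, y, zoom level) and ranking candidates by observed transition probability. The frequency-count probability model below is an assumption for illustration; the thesis's exact probability formula and cache-replacement policy are not reproduced.

```python
from collections import Counter, defaultdict

class TilePrefetcher:
    def __init__(self):
        # counts[current_tile][next_tile], learned from client access logs.
        self.counts = defaultdict(Counter)

    def observe(self, history):
        # history: list of (x, y, level) tiles one client requested in order.
        for cur, nxt in zip(history, history[1:]):
            self.counts[cur][nxt] += 1

    def prefetch(self, current, k=2):
        # Pre-fetch the k tiles most frequently requested after `current`;
        # because tiles carry a level coordinate, candidates may sit at the
        # same, an upper, or a lower zoom level (the 3-D pre-fetch space).
        return [tile for tile, _ in self.counts[current].most_common(k)]
```

Panning produces same-level transitions while zoom-in/zoom-out produce cross-level ones, so a single transition table covers both cases without special handling.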
