• Title/Summary/Keyword: computer security


A Study of the Monitoring Model for the Serious Civil Accidents (중대시민재해 모니터링 모델 연구)

  • ChangYeol Lee;GilJoo Park;Twehwan Kim;Jonggil Chae
    • Journal of the Society of Disaster Information
    • /
    • v.19 no.4
    • /
    • pp.834-843
    • /
    • 2023
  • Purpose: Serious Civil Accidents cover public-use facilities, public transportation, and raw materials and their products. Under the Serious Civil Accidents provisions of the Serious Accidents Punishment Act, a safety and health management framework and an execution system must be established. In this study, we design a model for managing and responding to Serious Civil Accidents. Method: First, we review Articles 8 through 11 of the enforcement ordinance of the Serious Accidents Punishment Act. Based on these articles, we design a visual and structural management system that supports the Act. Result: The system applied to Serious Civil Accidents consists of six monitoring modules and four kinds of DB modules. Conclusion: Serious Civil Accidents are managed by private enterprises, local governments, and public institutions. In particular, the owners of restaurants, cafes, and similar businesses often do not know the details of the Act, and local governments oversee many facilities covered by it, so constructing a management framework for the Act is not easy. This study provides a simple management structure for the Act.

Soil Moisture Estimation Using KOMPSAT-3 and KOMPSAT-5 SAR Images and Its Validation: A Case Study of Western Area in Jeju Island (KOMPSAT-3와 KOMPSAT-5 SAR 영상을 이용한 토양수분 산정과 결과 검증: 제주 서부지역 사례 연구)

  • Jihyun Lee;Hayoung Lee;Kwangseob Kim;Kiwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_1
    • /
    • pp.1185-1193
    • /
    • 2023
  • The increasing interest in soil moisture data from satellite imagery for applications in hydrology, meteorology, and agriculture has led to the development of methods to produce variable-resolution soil moisture maps. Research on accurate soil moisture estimation using satellite imagery is essential for remote sensing applications. The purpose of this study is to generate a soil moisture estimation map for a test area using KOMPSAT-3/3A and KOMPSAT-5 SAR imagery and to quantitatively compare the results with soil moisture data from the Soil Moisture Active Passive (SMAP) mission provided by NASA, with a focus on accuracy validation. In addition, the Korean Environmental Geographic Information Service (EGIS) land cover map was used to determine soil moisture, especially in agricultural and forested regions. The selected test area for this study is the western part of Jeju, South Korea, where input data were available for the soil moisture estimation algorithm based on the Water Cloud Model (WCM). Synthetic Aperture Radar (SAR) imagery from KOMPSAT-5 HV and Sentinel-1 VV was used for soil moisture estimation, while vegetation indices were calculated from the surface reflectance of KOMPSAT-3 imagery. Comparison of the derived soil moisture results with SMAP (L-3) and SMAP (L-4) data by differencing showed mean differences of 4.13±3.60 p% and 14.24±2.10 p%, respectively, indicating closer agreement with the SMAP L-3 product. This research suggests the potential for producing highly accurate and precise soil moisture maps using future South Korean satellite imagery and publicly available data sources, as demonstrated in this study.
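The abstract does not give the exact parameterization, but the Water Cloud Model (WCM) it cites is commonly written as a vegetation scattering term plus an attenuated bare-soil term that is linear in soil moisture. A minimal inversion sketch in Python follows; the coefficients A, B, C, D are placeholders that would normally be calibrated against in-situ measurements, not values from this study.

```python
import numpy as np

def wcm_invert(sigma0_db, vwc, theta_deg, A=0.12, B=0.09, C=-15.0, D=30.0):
    """Invert the Water Cloud Model for volumetric soil moisture.

    sigma0_db : observed SAR backscatter (dB)
    vwc       : vegetation descriptor (e.g., VWC or an NDVI-based proxy)
    theta_deg : local incidence angle (degrees)
    A, B      : vegetation scattering/attenuation coefficients (placeholders)
    C, D      : linear bare-soil model sigma0_soil_db = C + D * mv (placeholders)
    """
    theta = np.deg2rad(theta_deg)
    sigma0 = 10.0 ** (sigma0_db / 10.0)              # dB -> linear power
    tau2 = np.exp(-2.0 * B * vwc / np.cos(theta))    # two-way canopy attenuation
    sigma_veg = A * vwc * np.cos(theta) * (1.0 - tau2)
    sigma_soil = (sigma0 - sigma_veg) / tau2         # remove vegetation contribution
    sigma_soil_db = 10.0 * np.log10(np.maximum(sigma_soil, 1e-6))
    mv = (sigma_soil_db - C) / D                     # linear soil-backscatter model
    return np.clip(mv, 0.0, 0.6)                     # volumetric soil moisture (m3/m3)

# Example: one pixel with -10 dB backscatter, moderate vegetation, 35 deg incidence
print(wcm_invert(-10.0, vwc=1.5, theta_deg=35.0))
```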

An improved technique for hiding confidential data in the LSB of image pixels using quadruple encryption techniques (4중 암호화 기법을 사용하여 기밀 데이터를 이미지 픽셀의 LSB에 은닉하는 개선된 기법)

  • Soo-Mok Jung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.1
    • /
    • pp.17-24
    • /
    • 2024
  • In this paper, we propose a highly secure technique for hiding confidential data in image pixels using a quadruple encryption technique. The proposed technique first identifies boundary regions, where image outlines exist, and flat regions, where pixel values change little. At image boundaries, in order to preserve the boundary characteristics, one bit of multiply encrypted confidential data is spatially encrypted once more and hidden in the LSB of each boundary pixel. In pixels that are not on an image boundary but lie in a flat region with little change in pixel value, two bits of multiply encrypted confidential data are hidden in the lower two bits of the pixel using location-based and spatial encryption techniques. When the proposed technique is applied, the image quality of the stego-image reaches up to 49.64 dB, and the amount of hidden confidential data increases by up to 92.2% compared with the existing LSB method. Without the encryption key, the encrypted confidential data hidden in the stego-image cannot be extracted, and even if extracted, it cannot be decrypted, so the security of the hidden confidential data is maintained very strongly. The proposed technique can be used effectively to hide copyright information in general commercial images, such as webtoons, that do not require reversible data hiding techniques.
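The paper's quadruple encryption scheme is not reproduced here, but the underlying LSB substitution step it builds on can be illustrated with a minimal sketch; the XOR "encryption" below is only a stand-in for the paper's actual encryption stages.

```python
import numpy as np

def embed_lsb(pixels: np.ndarray, bits: np.ndarray, key: int = 1) -> np.ndarray:
    """Hide one bit per pixel in the least significant bit.

    pixels : uint8 array of gray-level values
    bits   : 0/1 array, one secret bit per pixel (same length as pixels)
    key    : toy 1-bit XOR key standing in for the paper's multi-stage encryption
    """
    enc = bits ^ key                      # placeholder "encryption" of the payload
    return (pixels & 0xFE) | enc          # clear the LSB, then write the encrypted bit

def extract_lsb(stego: np.ndarray, key: int = 1) -> np.ndarray:
    return (stego & 0x01) ^ key           # read the LSB and undo the toy encryption

cover = np.array([120, 121, 119, 200], dtype=np.uint8)
secret = np.array([1, 0, 1, 1], dtype=np.uint8)
stego = embed_lsb(cover, secret)
assert np.array_equal(extract_lsb(stego), secret)
print(stego)   # pixel values change by at most 1, so the distortion stays small
```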

Application of Geo-Segment Anything Model (SAM) Scheme to Water Body Segmentation: An Experiment Study Using CAS500-1 Images (수체 추출을 위한 Geo-SAM 기법의 응용: 국토위성영상 적용 실험)

  • Hayoung Lee;Kwangseob Kim;Kiwon Lee
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.4
    • /
    • pp.343-350
    • /
    • 2024
  • Since the release of Meta's Segment Anything Model (SAM), a large-scale vision transformer model with rapid image segmentation capabilities, several studies have been conducted to apply this technology in various fields. In this study, we investigated the applicability of SAM to water body detection and extraction using the QGIS Geo-SAM plugin, which enables SAM to be used with satellite imagery. The experimental data consisted of Compact Advanced Satellite 500 (CAS500)-1 images. The results obtained by applying SAM to these data were compared with manually digitized water objects, OpenStreetMap (OSM) data, and water body data from the National Geographic Information Institute (NGII) hydrological digital map. The mean Intersection over Union (mIoU) computed between all features extracted by SAM and these three comparison datasets was 0.7490, 0.5905, and 0.4921, respectively. For features that appeared or were extracted in all datasets, the results were 0.9189, 0.8779, and 0.7715, respectively. Based on the analysis of spatial consistency between the SAM results and the comparison data, SAM showed limitations in detecting small or poorly defined streams but provided meaningful segmentation results for water body classification.
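The mIoU figures above compare binary water masks; for reference, a minimal sketch of how IoU is computed between a SAM-derived mask and a reference mask (both as boolean rasters) might look like this. The toy 4x4 masks are illustrative, not data from the study.

```python
import numpy as np

def iou(pred: np.ndarray, ref: np.ndarray) -> float:
    """Intersection over Union between two boolean water masks."""
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter) / float(union) if union else 1.0

# Toy 4x4 masks: True = water, False = land
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)
ref = np.array([[1, 1, 1, 0],
                [1, 1, 0, 0],
                [0, 0, 0, 0],
                [0, 0, 0, 0]], dtype=bool)
print(iou(pred, ref))  # 4 / 5 = 0.8; mIoU averages this over all compared features
```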

The identification of optimal data range for the discrimination between won and lost

  • Han, Doryung;Choi, Hyongjun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.7
    • /
    • pp.103-111
    • /
    • 2020
  • Performance indicators have often been investigated and developed to identify the foundational elements and factors for enhancing performance in sports. To identify valid performance indicators, it is important that the indicators used within a performance analysis system discriminate between winning and losing performances within a match (Hughes and Bartlett, 2002). However, the performance indicators proposed in research on basketball performance have not been used for real-time analysis and feedback within a coaching context. Such real-time support for the coach and players has been described in research on other sports (Choi et al., 2004; O'Donoghue, 2001; Palmer et al., 1997). Within the process of real-time feedback, identifying the performance indicators that distinguish winning from losing performances should be the first stage in developing a real-time analysis system. Therefore, this study investigated the differences between winning and losing teams in terms of a set of performance indicators gathered during the analysis of 10 English National Basketball League matches. Winning and losing teams were compared using whole-match data (N=10) as well as individual quarters (N=40). A series of Wilcoxon signed-rank tests was used to identify the performance indicators that discriminate between winning and losing performers within whole matches and individual quarters. The tests found that 3-point shots made (p<0.05) and assists (p<0.05) differed significantly between winning and losing teams within matches. Within quarters, 2-point shots made (p<0.05), 2-point shots attempted (p<0.05), the percentage of 2-point shots scored (p<0.05), 3-point shots made (p<0.05), defensive rebounds (p<0.05), and assists (p<0.05) differed significantly between winning and losing performances. The analysis task should be based on relevant performance indicators that explain the current performance to performance analysts and coaches. In a real-time analysis and feedback scenario, this has the additional benefit of supporting decisions based on immediate performance within the most recent quarter. Consequently, the real-time analysis system would use performance indicators with construct validity to support the decisions of the coach.
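As a minimal illustration of the statistical procedure described (the numbers below are made up, not the study's data), a paired Wilcoxon signed-rank test on an indicator such as assists per match could be run as:

```python
from scipy.stats import wilcoxon

# Hypothetical assists per match for the winning vs. losing team in 10 matches
winners = [22, 18, 25, 20, 19, 24, 21, 23, 17, 26]
losers  = [15, 16, 19, 14, 18, 17, 13, 20, 12, 19]

# Paired (same-match) comparison, as in the study's winner-vs-loser design
stat, p_value = wilcoxon(winners, losers)
print(f"W = {stat}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Assists discriminate between winning and losing performances")
```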

Design and Implementation of Medical Information System using QR Code (QR 코드를 이용한 의료정보 시스템 설계 및 구현)

  • Lee, Sung-Gwon;Jeong, Chang-Won;Joo, Su-Chong
    • Journal of Internet Computing and Services
    • /
    • v.16 no.2
    • /
    • pp.109-115
    • /
    • 2015
  • New medical device technologies for bio-signal and medical information, developed in various forms, have been increasing. Information-gathering techniques and the growing number of bio-signal devices are now used as primary sources of information for medical services in everyday life. Although the use of various bio-signals is increasing, security is often not taken into account. Furthermore, the medical image information and bio-signals of a patient in the medical field are generated by separate devices, so they cannot be managed in an integrated way. To solve this problem, in this paper we associate a QR code with the medical image information, including the doctor's findings, and with the bio-signal information. The system implementation environment for medical imaging devices and bio-signal acquisition was configured with a bio-signal measurement device, a smart device, and a PC. For ROI extraction from bio-signals and for receiving image information transferred from the medical equipment or the bio-signal measurement device, the .NET Framework was used to operate a QR server module on the Windows Server 2008 operating system. The main function of the QR server module is to parse the DICOM files generated by the medical imaging device and to extract the identified ROI information for storage and management in the database. Additionally, patient health information such as EMR and OCS records and the extracted ROI information needed for basic and emergency situations are managed via QR codes. QR codes, ROI information, and bio-signal information files are also stored and managed, keyed to the PID (patient identification) used by the bio-signal device, depending on the size of the received bio-signal information. If the received information exceeds the maximum size that can be converted into a QR code, the QR code instead carries URL information through which the bio-signal information can be accessed on the server. Likewise, the .NET Framework is installed to provide the information in QR code form, so the client can view the relevant information on a PC or an Android-based smart device. Finally, the existing medical imaging information, bio-signal information, and patient health information are integrated through the application service in order to provide a medical information service suitable for the medical field.
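The abstract's fallback of encoding a server URL when the payload exceeds QR capacity can be sketched as follows. This uses the common Python qrcode package rather than the paper's .NET implementation, and the size threshold and endpoint URL are illustrative assumptions.

```python
import qrcode

MAX_QR_PAYLOAD = 1000  # illustrative threshold; real QR capacity depends on version/ECC level

def make_medical_qr(pid: str, payload: str):
    """Encode the data directly if it is small; otherwise encode a server URL."""
    if len(payload) <= MAX_QR_PAYLOAD:
        content = payload                                    # direct embedding
    else:
        # Hypothetical endpoint: the QR code points back to the bio-signal server
        content = f"https://qr-server.example/biosignal/{pid}"
    return qrcode.make(content)

img = make_medical_qr("PID-0001", "ROI=liver;finding=normal")
img.save("patient_qr.png")
```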

Improved Original Entry Point Detection Method Based on PinDemonium (PinDemonium 기반 Original Entry Point 탐지 방법 개선)

  • Kim, Gyeong Min;Park, Yong Su
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.7 no.6
    • /
    • pp.155-164
    • /
    • 2018
  • Many malicious programs are compressed or encrypted with various commercial packers to prevent reverse engineering, so malicious code analysts must decompress or decrypt them first. The OEP (Original Entry Point) is the address of the first instruction executed after the encrypted or compressed executable has been restored to its original binary state. Several unpackers, including PinDemonium, execute the packed file, keep track of the addresses until the OEP appears, and search for the OEP among those addresses. However, instead of finding the single exact OEP, unpackers provide a relatively large set of OEP candidates, and sometimes the OEP is missing from the candidates; in other words, existing unpackers have difficulty finding the correct OEP. We have developed a new tool that yields smaller OEP candidate sets by adding two methods based on properties of the OEP. In this paper, we propose two methods that reduce the OEP candidate set by exploiting the fact that the function call sequence and parameters are the same in the packed program and the original program. The first method is based on function calls. Programs written in C/C++ are compiled into binary code, and compiler-specific system functions are added to the compiled program. After examining these functions, we added a method to PinDemonium that detects the unpacking work by matching the patterns of system functions called in the packed and unpacked programs. The second method is based on parameters, which include not only user-entered inputs but also system inputs. We added a method to PinDemonium that finds the OEP using the system parameters of a particular function in stack memory. OEP detection experiments were performed on sample programs packed with 16 commercial packers. Compared with PinDemonium, our tool reduces the OEP candidates by more than 40% on average, excluding two commercial packers that could not be executed due to anti-debugging techniques.
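The paper's Pin-based instrumentation is not shown here, but the core idea of its first method, matching the compiler-inserted system-call sequence of the packed run against that of the original program to narrow down OEP candidates, can be sketched as below. The traces and candidate addresses are hypothetical, not taken from the paper.

```python
# Hypothetical API-call trace: (address, function) pairs recorded during execution.
# In the original (unpacked) binary, the compiler's startup functions appear
# right after the entry point; in the packed binary the same sequence reappears
# only after the unpacking stub has restored the original code.
ORIGINAL_PROLOGUE = ["GetSystemTimeAsFileTime", "GetCurrentProcessId",
                     "__security_init_cookie"]

def filter_oep_candidates(trace, candidates, prologue=ORIGINAL_PROLOGUE):
    """Keep only candidates whose subsequent call sequence matches the prologue."""
    addrs = [addr for addr, _ in trace]
    calls = [fn for _, fn in trace]
    survivors = []
    for cand in candidates:
        if cand not in addrs:
            continue
        i = addrs.index(cand)
        if calls[i:i + len(prologue)] == prologue:   # sequence match after candidate
            survivors.append(cand)
    return survivors

packed_trace = [(0x401000, "VirtualAlloc"), (0x402000, "LoadLibraryA"),
                (0x40A000, "GetSystemTimeAsFileTime"),
                (0x40A010, "GetCurrentProcessId"),
                (0x40A020, "__security_init_cookie")]
survivors = filter_oep_candidates(packed_trace, [0x401000, 0x40A000])
print([hex(a) for a in survivors])  # only 0x40a000 survives the filter
```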

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.109-122
    • /
    • 2016
  • Recently, big data analysis has developed into a field of interest for individuals and non-experts as well as companies and professionals. Accordingly, it is used for marketing and social problem solving by analyzing data that is already open or collected directly. In Korea, various companies and individuals are attempting big data analysis, but they struggle from the initial stage of analysis because of limited data disclosure and collection difficulties. System improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as public open-data services such as the domestic Government 3.0 portal (data.go.kr). Beyond these government efforts, services that share data held by corporations or individuals also exist, but it is hard to find useful data because so little is shared. In addition, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes and basic information of the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed to solve the big data sharing problem. Pre-analysis, a concept proposed in this paper to address data sharing, means providing users in advance with results generated by pre-analyzing the data. Through pre-analysis, the usability of big data improves because users can grasp its properties and characteristics when searching for it. Furthermore, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when the original data is disclosed can be avoided, enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data and to deliver those results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing data requested by a user, it preprocesses the data to a size suited to the current network before transmitting it, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to show a low traffic volume compared with the conventional approach of sharing only raw data across many systems. We describe how to solve the problems that occur when big data is released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is required by the data provider; it performs pre-analysis of the big data to generate a Data Descriptor containing Sample Data, Summary Data, and Raw Data information. It also performs fast and efficient preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is placed on the data user side. It searches for big data through the Data Descriptor produced by the pre-analysis and can quickly locate the data, which can then be requested from the server and downloaded according to its level of disclosure. The Server Agent and Client Agent are separated so that data published by the data provider can be used by the data user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, constructing the detailed modules of the client-server model and presenting the design of each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses new data; by publishing the newly processed data through the Server Agent, the data user takes on the role of a data provider. The data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis on the sample data. In this way, raw data is processed and the processed big data is utilized by users, forming a natural sharing environment. The roles of data provider and data user are not fixed, providing an ideal shared service in which everyone can be both a provider and a user. The client-server model solves the big data sharing problem, offers a free sharing environment for secure big data disclosure, and provides an ideal shared service in which big data can be found easily.
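As a minimal sketch of the pre-analysis step the Server Agent performs (the column handling, sample fraction, paths, and descriptor layout are illustrative assumptions, not the paper's specification), a Spark job that builds a small "Data Descriptor" from a raw dataset could look like:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pre-analysis").getOrCreate()

# Raw data held by the provider (path is a placeholder)
raw = spark.read.csv("hdfs:///shared/raw_dataset.csv", header=True, inferSchema=True)

# Summary Data: per-column statistics (count, mean, stddev, min, max)
summary = raw.describe()

# Sample Data: a small random sample that is safe to disclose
sample = raw.sample(withReplacement=False, fraction=0.01, seed=42)

# A toy Data Descriptor combining schema info, row count, and output locations
descriptor = {
    "schema": [(f.name, f.dataType.simpleString()) for f in raw.schema.fields],
    "row_count": raw.count(),
    "summary_path": "hdfs:///shared/descriptor/summary",
    "sample_path": "hdfs:///shared/descriptor/sample",
}
summary.write.mode("overwrite").parquet(descriptor["summary_path"])
sample.write.mode("overwrite").parquet(descriptor["sample_path"])
print(descriptor)   # the Client Agent would search over descriptors like this one
```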

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and from the 'SQF (Sectoral Qualifications Framework)' job classification system proposed for the SW field. Therefore, a new job classification system is needed that SW companies, SW job seekers, and job sites can all understand. The purpose of this study is to establish a standard job classification system that reflects market demand by analyzing the SQF based on the job postings of major job sites and the NCS (National Competency Standards). To this end, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of the major job sites to the SQF. First, major job sites were selected to obtain information on the job classification system of the SW market. We then identified ways to collect job information from each site and collected the data through open APIs. Focusing on the relationships among the data, only job postings listed on multiple job sites at the same time were kept; all other job information was deleted. Next, the job classification systems of the job sites were mapped using the association rules derived from the association analysis. After completing the mapping between these market systems, we discussed the results with experts, further mapped the SQF, and finally proposed a new job classification system. As a result, more than 30,000 job listings were collected in XML format through the open APIs of 'WORKNET,' 'JOBKOREA,' and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously listed on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method. Based on these 800 rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class, covering IT consulting, computer systems, networks, and security-related jobs, consists of three secondary, five tertiary, and five quaternary classifications. The second primary class, covering databases and system operation jobs, consists of three secondary, three tertiary, and four quaternary classifications. The third primary class, covering web planning, web programming, web design, and games, consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class, covering ICT management and computer and communication engineering technology jobs, consists of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike existing systems: WORKNET divides jobs into three levels, JOBKOREA into two levels with keyword-level subdivisions, and saramin likewise into two levels with keyword-level subdivisions. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs.
In the proposed system, some jobs stop at the second classification level, while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined rules derived from the collected market data and association analysis with experts' opinions. Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike existing systems. This study is meaningful in that it suggests a new job classification system that reflects market demand by mapping occupations on the basis of data through association analysis rather than the intuition of a few experts. However, this study has a limitation: because the data were collected at a single point in time, it cannot fully reflect market demand that changes over time. As market demands change, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the approach is expected to transfer to other industries, building on its success in the SW field.
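As an illustrative sketch of the association-rule step (the postings and job labels below are made up; the study used roughly 900 cross-posted listings and derived about 800 rules), Apriori-based rule mining over job categories assigned by different sites could look like:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Each row is one job posting cross-posted on several sites; each column is a
# site-specific (or SQF) job category attached to that posting. Toy data only.
postings = pd.DataFrame({
    "worknet:security_engineer": [1, 1, 0, 1, 0],
    "jobkorea:infosec":          [1, 1, 0, 1, 0],
    "saramin:network_admin":     [0, 0, 1, 0, 1],
    "sqf:security_operations":   [1, 1, 0, 1, 1],
}).astype(bool)

frequent = apriori(postings, min_support=0.4, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Rules such as {worknet:security_engineer} -> {jobkorea:infosec} suggest that
# the two site-specific categories can be mapped onto one standard job class.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```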

A Study on Fast Iris Detection for Iris Recognition in Mobile Phone (휴대폰에서의 홍채인식을 위한 고속 홍채검출에 관한 연구)

  • Park Hyun-Ae;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.2 s.308
    • /
    • pp.19-29
    • /
    • 2006
  • As the security of personal information becomes more important in mobile phones, iris recognition technology is beginning to be applied to these devices. Conventional iris recognition requires magnified iris images, which has meant using a large camera with zoom and focus lenses; however, given the size and cost constraints of mobile phones, such lenses are difficult to adopt. With rapid development and multimedia convergence trends in mobile phones, more and more companies have built mega-pixel cameras into their handsets, making it possible to capture a magnified iris image without zoom and focus lenses. Although facial images are captured at a distance from the user with a mega-pixel camera, the captured iris region still contains sufficient pixel information for iris recognition. In this case, however, the eye region must first be detected in the facial image for accurate iris recognition. We therefore propose a new fast iris detection method, suitable for mobile phones, based on corneal specular reflection. To detect the specular reflection robustly, we present a theoretical basis for estimating its size and brightness based on eye, camera, and illuminator models. In addition, we use a successive on/off scheme for the illuminator to detect optical/motion blurring and sunlight effects in the input image. Experimental results show that the total processing time for detecting the iris region averages 65 ms on a Samsung SCH-S2300 mobile phone (150 MHz ARM9 CPU). The correct iris detection rate is 99% for indoor images and 98.5% for outdoor images.
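The paper's eye/camera/illuminator model is not reproduced here, but the basic step of locating a small, very bright corneal specular reflection in a grayscale face image can be sketched as follows; the brightness threshold and size bounds are illustrative assumptions rather than the paper's values.

```python
import cv2
import numpy as np

def find_specular_reflection(gray: np.ndarray,
                             intensity_thresh: int = 240,
                             min_area: int = 4, max_area: int = 200):
    """Return centroids of small, very bright blobs (specular reflection candidates).

    gray             : 8-bit grayscale face image
    intensity_thresh : brightness cut-off for the illuminator's corneal glint
    min_area/max_area: size bounds (in pixels) expected for the reflection
    """
    _, bright = cv2.threshold(gray, intensity_thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(bright)
    candidates = []
    for i in range(1, n):                       # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:        # keep blobs of plausible glint size
            candidates.append(tuple(centroids[i]))
    return candidates

# Usage: the candidate nearest the expected eye region would seed the iris search
gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)   # placeholder image path
print(find_specular_reflection(gray))
```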