• Title/Summary/Keyword: logs

Search results: 716 (processing time: 0.028 seconds)

Tracking of cryptocurrency moved through blockchain Bridge (블록체인 브릿지를 통해 이동한 가상자산의 추적 및 검증)

  • Donghyun Ha;Taeshik Shon
    • Journal of Platform Technology / v.11 no.3 / pp.32-44 / 2023
  • A blockchain bridge (hereinafter "bridge") is a service that enables the transfer of assets between blockchains: it accepts virtual assets from users on one blockchain and delivers the same assets to them on another. Users rely on bridges because each blockchain environment is independent, so assets cannot be transferred between blockchains in the usual way. For the same reason, asset movements through bridges are not traceable in the usual way, and if a malicious actor moves funds through a bridge, existing asset-tracking tools are limited in their ability to trace it. This paper therefore proposes a method to obtain bridge-usage information by identifying the structure of a bridge and analyzing the event logs of bridge requests. First, to understand bridge structure, we analyzed bridges operating on Ethereum Virtual Machine (EVM) based blockchains. Based on this analysis, we applied the method to arbitrary bridge events. Furthermore, we built an automated tool that continuously collects and stores bridge-usage information so that it can be used for actual tracking, and validated both the tool and the tracking method against an asset-transfer scenario. By extracting usage information through the tool after using a bridge, we were able to recover the information essential for tracking: the sending blockchain, the receiving blockchain, the receiving wallet address, and the type and quantity of tokens transferred. This shows that the limitations of tracking asset movements across blockchain bridges can be overcome.
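The abstract's core step, reading tracking fields out of a bridge's event logs, can be sketched in a few lines. The event layout below (field order and names such as `dstChainId` and `recipient`) is hypothetical: real bridges each define their own event signatures, and the paper does not publish one. The sketch only shows how the 32-byte words of an EVM log's `data` field map onto tracking information.

```python
# Sketch: extracting tracking fields from a bridge "deposit" event log.
# The event layout (field order, names) is hypothetical -- each real EVM
# bridge contract defines its own event signature.

def parse_bridge_event(log: dict) -> dict:
    """Decode a synthetic EVM event log emitted by a bridge contract.

    Assumes an event like Deposit(address indexed sender, uint256 dstChainId,
    address recipient, address token, uint256 amount): the indexed sender sits
    in topics[1], the rest is ABI-encoded in `data` as 32-byte words.
    """
    data = bytes.fromhex(log["data"].removeprefix("0x"))
    words = [data[i:i + 32] for i in range(0, len(data), 32)]
    return {
        "sender": "0x" + log["topics"][1][-40:],       # indexed address
        "dst_chain_id": int.from_bytes(words[0], "big"),
        "recipient": "0x" + words[1][-20:].hex(),      # strip 12-byte padding
        "token": "0x" + words[2][-20:].hex(),
        "amount": int.from_bytes(words[3], "big"),
    }

# Synthetic log in the shape an EVM node returns from eth_getLogs.
example = {
    "topics": [
        "0x" + "ab" * 32,              # event signature hash (placeholder)
        "0x" + "00" * 12 + "11" * 20,  # indexed sender, left-padded to 32 bytes
    ],
    "data": "0x"
            + (137).to_bytes(32, "big").hex()       # dstChainId = 137
            + "00" * 12 + "22" * 20                 # recipient, padded
            + "00" * 12 + "33" * 20                 # token, padded
            + (10 ** 18).to_bytes(32, "big").hex(), # amount (18 decimals)
}

info = parse_bridge_event(example)
print(info["dst_chain_id"], info["recipient"])  # 137 0x2222...22
```

In practice the raw logs would come from a node RPC (`eth_getLogs`) filtered by the bridge contract's address and event-signature topic; the tool described in the paper would then store these decoded records continuously.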


D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management / 2009.05a / pp.30-31 / 2009
  • Early detection of schedule delays in field construction activities is vital to project management: it provides the opportunity to initiate remedial actions and increases the chance of controlling overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance. Despite its importance, systematic progress monitoring is challenging: (1) current progress monitoring is time-consuming, requiring extensive as-planned and as-built data collection; (2) the excessive amount of manual work invites human error, reduces data quality, and, since only approximate visual inspection is usually performed, makes the collected data subjective; (3) existing methods are non-systematic and may create a time lag between when progress is accomplished and when it is reported; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay decision making. In short, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome these inefficiencies, this research explores the application of unsorted daily progress photograph logs, available on any construction site, together with IFC-based 4D models for progress monitoring. Our approach computes, from the images themselves, the photographers' locations and orientations along with a sparse 3D geometric representation of the as-built scene, and superimposes the reconstructed scene over the as-planned 4D model.
Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be explored interactively. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image-processing technique can be implemented and progress data extracted automatically. The result of the progress comparison between as-planned and as-built performance can then be visualized in the D4AR (4D Augmented Reality) environment using a traffic-light metaphor. In such an environment, project participants can: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies by superimposing 4D as-planned models over progress photographs, make control decisions, and communicate them effectively with project participants. We present preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancement of this new technology in construction across automatic data collection, processing, and communication.


A Case Study on the Improvement of Sociality in Convergence Theater Education : Focusing on a Theatrical Camp Milmi-ri Village School (융합연극교육의 사회성 발달 증진 효과사례 분석 : 연극캠프 밀미리 마을학교를 중심으로)

  • Kim, Jung-Sun;Bae, Jin-Sup
    • Journal of Korea Entertainment Industry Association / v.13 no.6 / pp.221-233 / 2019
  • This study aims to demonstrate the effectiveness of integrated theater education for lower-grade students who lack social skills, following the direction of arts education based on 'drama and educational theater' and 'theater therapy' in the 2015 revised curriculum. To do this, the researcher ran a four-week theater camp, organized by a social, cultural, and arts education institute in Seoul, for twelve lower-grade elementary school students, concluding with a presentation session. The researcher observed the behavioral changes of the participating children to verify the social development effects of the theater education, and after the camp ended obtained final results through in-depth interviews with the participants' parents and teachers. The Social Skills Rating System (SSRS) test, teachers' observation logs, interviews, and the responses of the participating children all indicated that the activity had a strong effect on improving the children's sociality and relationships. The researcher thus argues that theater education is effective in enhancing social skills as creative, character-building, and convergence education in preparation for the Fourth Industrial Revolution, and that it is important for solving communication problems such as the absence of human relationships.

Log Collection Method for Efficient Management of Systems using Heterogeneous Network Devices (이기종 네트워크 장치를 사용하는 시스템의 효율적인 관리를 위한 로그 수집 방법)

  • Jea-Ho Yang;Younggon Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.119-125 / 2023
  • As IT infrastructure operation has advanced, methods for managing systems have become widely adopted, and recent research has focused on improving system management using Syslog. However, utilizing the collected log data presents challenges, as logs arrive in various formats that require expert analysis. This paper proposes a system that uses edge computing to distribute the collection of Syslog data and preprocesses duplicate data before storing it in a central database. The system also constructs a data dictionary to classify and count data in real time, restricting the transmission of already-registered data to the central database. This approach maintains predefined patterns in the data dictionary, controls duplicate data and temporal duplicates, and stores refined data in the central database, thereby securing fundamental data for big data analysis. The proposed algorithms and procedures are demonstrated through simulations and examples using real syslog data, which verify that the necessary information is accurately extracted from the logs and that the classification and storage processes execute successfully. The system can serve as an efficient solution for collecting and managing log data in edge environments and offers potential benefits for technology diffusion.
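The data-dictionary idea in the abstract can be sketched as follows. The patterns, counters, and forwarding policy here are illustrative inventions, not the paper's actual dictionary: the point is only the mechanism of counting known patterns at the edge and forwarding only unregistered lines to the central database.

```python
import re
from collections import Counter

# Edge-side sketch: classify incoming syslog lines against a data dictionary
# of known patterns, tally matches locally, and forward only lines that match
# no registered pattern. Patterns below are illustrative, not the paper's.

DATA_DICTIONARY = {  # pattern name -> compiled regex
    "ssh_fail": re.compile(r"Failed password for \w+ from [\d.]+"),
    "cron_run": re.compile(r"CRON\[\d+\]: .+ CMD"),
}

counts = Counter()   # per-pattern tallies kept at the edge
forward_queue = []   # unrecognized lines destined for the central DB

def ingest(line: str) -> None:
    for name, pat in DATA_DICTIONARY.items():
        if pat.search(line):
            counts[name] += 1    # known pattern: count it, do not forward
            return
    forward_queue.append(line)   # unknown: forward for central analysis

ingest("sshd[812]: Failed password for root from 10.0.0.5 port 22 ssh2")
ingest("sshd[813]: Failed password for admin from 10.0.0.9 port 22 ssh2")
ingest("kernel: Out of memory: Killed process 4242 (java)")
print(counts["ssh_fail"], len(forward_queue))  # 2 1
```

The registered-pattern counts would periodically be flushed as aggregates, so duplicates never reach the central database as raw lines, which is the deduplication effect the abstract describes.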

Implementation of Security Information and Event Management for Realtime Anomaly Detection and Visualization (실시간 이상 행위 탐지 및 시각화 작업을 위한 보안 정보 관리 시스템 구현)

  • Kim, Nam Gyun;Park, Sang Seon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.5 / pp.303-314 / 2018
  • In the past few years, government agencies and corporations have succumbed to stealthy, tailored cyberattacks designed to exploit vulnerabilities, disrupt operations, and steal valuable information. Security Information and Event Management (SIEM) is a useful tool against such attacks, but the SIEM solutions available on the market are expensive and difficult to use. We therefore implemented basic SIEM functions as research and development toward future security solutions, focusing on the collection, aggregation, and analysis of real-time logs from hosts. The tool allows log data to be parsed and searched for forensics. Beyond log management, it performs intrusion detection, prioritizes security events, and informs and alerts the user. We selected the Elastic Stack to process and visualize this security information; it is well suited to finding information in large data sets, identifying correlations, and creating rich visualizations for monitoring. We also incorporated vulnerability-check results into our SIEM. By attacking a host, we obtained real-time user activity for monitoring, alerting, and security auditing based on this security information management.
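One low-level building block of event prioritization in any syslog-fed SIEM is decoding the message priority. A minimal sketch, assuming the standard RFC 5424 encoding (PRI = facility × 8 + severity); the alert threshold is an illustrative choice, and the paper's actual pipeline runs on the Elastic Stack rather than hand-rolled code:

```python
# Decode the RFC 5424 syslog priority value and flag high-severity events.
# Severity 0 (emerg) is worst; the threshold below is illustrative.

SEVERITIES = ["emerg", "alert", "crit", "err", "warning", "notice", "info", "debug"]

def decode_priority(pri: int) -> tuple[int, str]:
    facility, severity = divmod(pri, 8)   # PRI = facility * 8 + severity
    return facility, SEVERITIES[severity]

def should_alert(pri: int, threshold: int = 3) -> bool:
    return pri % 8 <= threshold           # err (3) and worse trigger an alert

fac, sev = decode_priority(34)            # <34> = facility 4 (auth), severity 2
print(fac, sev, should_alert(34))         # 4 crit True
```

Prioritized events like these are what get surfaced in the monitoring dashboards and alert rules the abstract mentions.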

A Design of Timestamp Manipulation Detection Method using Storage Performance in NTFS (NTFS에서 저장장치 성능을 활용한 타임스탬프 변조 탐지 기법 설계)

  • Jong-Hwa Song;Hyun-Seob Lee
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.23-28 / 2023
  • The Windows operating system generates various logs with timestamps. Timestamp tampering is an anti-forensic act in which a suspect manipulates the timestamps of crime-related data to conceal traces, making it difficult for analysts to reconstruct the incident; this can delay investigations or lead to the failure to obtain crucial digital evidence. Various techniques have been developed to detect timestamp tampering, but they are limited when a suspect knows the timestamp patterns and manipulates timestamps skillfully, or alters the system artifacts those techniques rely on. This paper designs a detection method that exploits the fact that, even when a suspect alters a file's timestamp on a storage device, it is difficult to do so with precision finer than millisecond order. The method first verifies the timestamp of a file suspected of tampering to determine its recorded write time. It then compares that time with the sizes of the files recorded as written at that time, taking the performance of the storage device into consideration. Finally, the total volume of files written at a specific time is calculated and compared against the maximum input/output performance of the storage device to detect potential tampering.
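The final consistency check the abstract describes can be sketched directly. The device throughput limit and file records below are made-up numbers for illustration; the idea is simply that a time window whose recorded writes exceed what the device can physically sustain must contain at least one forged timestamp.

```python
# Sketch of the throughput consistency check: sum the sizes of files whose
# recorded write times fall in the same second and flag any second whose
# total exceeds the device's maximum write rate. All numbers are made up.

MAX_WRITE_BPS = 500 * 1024 * 1024   # assumed device limit: 500 MiB/s

def suspicious_seconds(files, max_bps=MAX_WRITE_BPS):
    """files: list of (write_timestamp_in_seconds, size_in_bytes) records."""
    written = {}
    for ts, size in files:
        written[ts] = written.get(ts, 0) + size
    # More bytes recorded in one second than the device can write implies
    # that at least one timestamp in that second was back-dated or forged.
    return [ts for ts, total in written.items() if total > max_bps]

files = [
    (1000, 200 * 1024 * 1024),   # plausible on its own
    (1001, 400 * 1024 * 1024),   # a forged timestamp lands in second 1001...
    (1001, 300 * 1024 * 1024),   # ...pushing it past the 500 MiB/s limit
]
print(suspicious_seconds(files))  # [1001]
```

A real implementation would bin at millisecond granularity (matching the precision argument in the abstract) and read file sizes and timestamps from the NTFS MFT rather than a prepared list.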

Determining Food Nutrition Information Preference Through Big Data Log Analysis (빅데이터 로그분석을 통한 식품영양정보 선호도 분석)

  • Hana Song;Hae-Jeung Lee;Hunjoo Lee
    • Journal of Food Hygiene and Safety / v.38 no.5 / pp.402-408 / 2023
  • Consumer interest in food nutrition continues to grow; however, research on consumer preferences related to nutrition remains limited. In this study, big data analysis was conducted on keyword logs collected from the national information service, the Korean Food Composition Database (K-FCDB), to determine consumer preferences for foods of nutritional interest. The data collection period ran from January 2020 to December 2022, covering a total of 2,243,168 food-name keywords searched by K-FCDB users. Food names were preprocessed by merging them into representative food names, and their search frequency was analyzed for the entire period and by season using R. In the whole-period frequency analysis, steamed rice, chicken, and egg were the most frequently searched foods. The seasonal analysis revealed that in spring and summer, broth-free and cold dishes were searched for frequently, whereas in fall and winter, broth-based and warm dishes were more popular. Foods sold by restaurants as seasonal items, such as Naengmyeon and Kongguksu, also showed seasonal variation in search frequency. These results provide insight into consumer interest in the nutritional information of commonly consumed foods and, given their indirect relevance to consumer trends, are expected to serve as fundamental data for seasonal marketing strategies in the restaurant industry.
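The study performed its frequency analysis in R; the same season-bucketed counting can be sketched in Python. The log records below are invented examples, and the month-to-season mapping is the usual Northern Hemisphere convention, not something the paper specifies:

```python
from collections import Counter

# Illustrative sketch (the study itself used R): count keyword frequency per
# season from (month, food_name) search-log records. Records are made up.

def season(month: int) -> str:
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(month, "fall")

logs = [(7, "Naengmyeon"), (7, "Naengmyeon"), (8, "Kongguksu"),
        (1, "Naengmyeon"), (12, "Tteokguk"), (1, "Tteokguk")]

# One (season, food) counter covers both the whole-period and seasonal views.
by_season = Counter((season(m), food) for m, food in logs)
print(by_season[("summer", "Naengmyeon")], by_season[("winter", "Tteokguk")])  # 2 2
```

The real pipeline would first merge raw keywords into representative food names (e.g. spelling variants of the same dish) before counting, as the abstract notes.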

Study on the Limitation of AVO Responses Shown in the Seismic Data from East-sea Gas Reservoir (동해 가스전 탄성파 자료에서 나타나는 AVO 반응의 한계점에 대한 고찰)

  • Shin, Seung-Il;Byun, Joong-Moo;Choi, Hyung-Wook;Kim, Kun-Deuk;Ko, Seung-Won;Seo, Young-Tak;Cha, Young-Ho
    • Geophysics and Geophysical Exploration / v.11 no.3 / pp.242-249 / 2008
  • Recently, AVO analysis has been widely used in oil exploration, together with seismic subsurface sections, as a direct indicator of the presence of gas. For deep reservoirs such as the gas reservoirs in the East Sea, it is often difficult to observe AVO responses in CMP gathers even when bright spots appear in the stacked section. Because a reservoir becomes more consolidated with depth, the P-wave velocity does not decrease significantly when the pore fluid is replaced by gas, so the difference in Poisson's ratio between the reservoir and the layer above it, a key factor in the AVO response, does not increase significantly. In this study, we analyzed the effect of the Poisson's ratio difference on the AVO response for a variety of Poisson's ratios in the upper and lower layers. The results show that as the difference decreases, the change in reflection amplitude with incidence angle decreases and the AVO response becomes insignificant. To examine the limitations of AVO responses in the East Sea gas reservoir, a velocity model was built by simulating the Gorae V structure using seismic data and well logs. Comparing the AVO responses observed in the synthetic data with theoretical AVO responses calculated from the material properties shows that the change in reflection amplitude with increasing incidence angle is very small when the Poisson's ratio difference between the upper and lower layers is small. In addition, the characteristics of the AVO response were concealed by noise and by amplitude distortion introduced during preprocessing. To overcome these limitations in AVO analysis of deep-reservoir data, reflection amplitudes must be acquired precisely in the data acquisition stage and amplitude-preserving processing tools must be used in the data processing stage.
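The abstract's argument, that a small Poisson's ratio contrast mutes the AVO response, is visible in Shuey's (1985) two-term approximation of the Zoeppritz equations, quoted here from the standard AVO literature rather than from the paper itself:

```latex
% Shuey two-term approximation: reflection coefficient vs. incidence angle
R(\theta) \;\approx\; R_0 + G\,\sin^2\theta,
\qquad
G \;=\; A_0 R_0 + \frac{\Delta\sigma}{(1-\bar{\sigma})^2}
```

Here $R_0$ is the normal-incidence reflection coefficient, $\Delta\sigma = \sigma_2 - \sigma_1$ is the Poisson's ratio contrast across the interface, $\bar{\sigma}$ is its average, and $A_0$ depends on the velocity and density contrasts. As $\Delta\sigma \to 0$, the gradient $G$ shrinks and $R(\theta)$ stays nearly flat with angle, which is exactly the insignificant AVO response the abstract reports for consolidated deep reservoirs.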

Cultural characteristics on collected strains of Lentinula edodes and correlation with mycelial browning (표고 수집균주의 재배적 특성 및 갈변과의 상관관계)

  • Kim, Young-Ho;Jhune, Chang-Sung;Park, Soo-Cheol;You, Chang-Hyun;Sung, Jae-Mo;Kong, Won-Sik
    • Journal of Mushroom / v.9 no.4 / pp.145-154 / 2011
  • Shiitake mushroom (Lentinula edodes) is usually cultivated on oak logs, but log cultivation is becoming difficult as oak logs grow scarce, and it has the drawback of a long cultivation period; sawdust cultivation has therefore been increasing. In sawdust cultivation it is important to induce mycelial browning on the substrate surface: the browned surface plays a role like the natural bark of an oak log, protecting against pests and suppressing water evaporation from the substrate. Because the browning period is so long, sawdust cultivation of shiitake has not spread well among mushroom farms, and methods for rapid mycelial browning are much needed. In this article we examine the cultural characteristics of collected strains and their correlation with mycelial browning. Mycelial growth differed according to medium and strain, and the optimal temperature for mycelial growth was 20-25°C. Browning patterns of mycelium under 200 lux appear usable as a key to differentiate strains for sawdust cultivation. The browning period was 30-40 days on agar media and 70-100 days in sawdust bag cultivation. Considering productivity and the other characteristics, ASI 3046 was the best strain for bag cultivation. No significant correlation was found between mycelial growth and browning, but mycelial growth on PDA and on sawdust were correlated, and the browning periods on PDA and on sawdust showed a strong relationship. These results suggest that browning behavior depends not on the medium but on the strain's own properties, so fast-browning strains can be selected on agar media to save time.

A Multimodal Profile Ensemble Approach to Development of Recommender Systems Using Big Data (빅데이터 기반 추천시스템 구현을 위한 다중 프로파일 앙상블 기법)

  • Kim, Minjeong;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.93-110 / 2015
  • A recommender system recommends products to the customers who are likely to be interested in them. Based on automated information-filtering technology, various recommender systems have been developed. Collaborative filtering (CF), one of the most successful recommendation algorithms, has been applied in many domains, such as recommending Web pages, books, movies, music, and products. However, CF has a critical shortcoming: it finds neighbors whose preferences resemble those of the target customer and recommends the products those neighbors have liked most, so it works properly only when there is a sufficient number of ratings on common products. When customer ratings are scarce, neighborhood formation becomes inaccurate, resulting in poor recommendations. To improve the performance of CF-based recommender systems, most related studies have focused on developing novel algorithms under the assumption of a single profile created from users' item ratings, purchase transactions, or Web access logs. With the advent of big data, companies have come to collect more data and to use a greater variety of large-scale information, and many recognize the importance of utilizing big data because it improves their competitiveness and creates new value. In particular, utilizing personal big data in recommender systems is on the rise, because personal big data enables more accurate identification of users' preferences and behaviors. The proposed recommendation methodology is as follows. First, multimodal user profiles are created from personal big data in order to grasp users' preferences and behavior from various viewpoints; we derive five user profiles based on rating, site preference, demographic, Internet usage, and text-topic information.
Next, the similarity between users is calculated from the profiles, and neighbors are found from the results. One of three ensemble approaches is applied to calculate the similarity, using respectively the similarity of the combined profile, the average of the per-profile similarities, and the weighted average of the per-profile similarities. Finally, the products that the neighbors prefer most are recommended to the target user. For the experiments, we used demographic data and a very large volume of Web log transactions for 5,000 panel users of a company specialized in analyzing Web site rankings. R was used to implement the proposed recommender system, and SAS E-Miner to conduct the topic analysis via keyword search. To evaluate recommendation performance, we used 60% of the data for training and 40% for testing, and 5-fold cross-validation was conducted to enhance the reliability of the experiments. The widely used F1 metric, which gives equal weight to recall and precision, was employed for evaluation. The proposed methodology achieved a significant improvement over the single-profile CF algorithm; in particular, the ensemble using the weighted average similarity showed the highest performance, with an improvement in F1 of 16.9 percent, versus 8.1 percent for the ensemble using the average of the per-profile similarities. From these results, we conclude that the multimodal profile ensemble approach is a viable solution to the problems encountered when customer ratings are scarce. This study is significant in suggesting what information can be used to create profiles in a big data environment and how to combine and utilize it effectively.
However, the methodology should be studied further before real-world application: the proposed method should be applied to different recommendation algorithms to compare recommendation accuracy and identify which combination of them performs best.
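The best-performing ensemble in the abstract, the weighted average of per-profile similarities, reduces to a few lines. The profile names mirror the five profiles listed in the abstract, but the similarity values and weights below are illustrative inventions, not the paper's learned parameters:

```python
# Sketch of the weighted-average similarity ensemble: per-profile user
# similarities are combined with weights before neighbor selection.
# Similarity values and weights are illustrative, not from the paper.

def weighted_average_similarity(per_profile_sim: dict, weights: dict) -> float:
    """per_profile_sim: profile name -> similarity between two users in [0, 1]."""
    total_w = sum(weights[p] for p in per_profile_sim)
    return sum(per_profile_sim[p] * weights[p] for p in per_profile_sim) / total_w

# The five profiles named in the abstract, with made-up pairwise similarities.
sims = {"rating": 0.8, "site_preference": 0.5, "demographic": 0.9,
        "internet_usage": 0.4, "text_topic": 0.6}
weights = {"rating": 3.0, "site_preference": 1.0, "demographic": 1.0,
           "internet_usage": 1.0, "text_topic": 2.0}

s = weighted_average_similarity(sims, weights)
print(round(s, 3))  # 0.675
```

Setting all weights equal recovers the paper's second ensemble (plain average of per-profile similarities); the weighted variant lets informative profiles such as ratings dominate neighbor selection, which is consistent with it scoring highest in the reported F1 results.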