• Title/Summary/Keyword: power system monitoring


Analysis of the operation status and opinion on the improvement of fishing vessel structure in coastal improved stow net fishery by the questionnaire survey (설문조사를 통한 연안개량안강망어업의 조업 실태 및 어선 구조 개선에 관한 의견 분석)

  • CHANG, Ho-Young;KIM, Min-Son;HWANG, Bo-Kyu;OH, Jong Chul
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.57 no.4
    • /
    • pp.316-333
    • /
    • 2021
In order to obtain basic data for improving the fishing system and fishing vessel structure in the coastal improved stow net fishery, a questionnaire survey and on-site hearings were conducted from May 10 to June 11, 2019, and opinions on the operation status and on improvements to fishing vessel structure were analyzed. The questionnaire consisted of ten questions on the operation status of the coastal improved stow net fishery and six questions on the improvement of fishing vessel structure, and the responses to each question were analyzed by region, captain's age, captain's career, and age of fishing vessel. The analysis of opinions on operation status showed that casting the net took 32.8 to 33.0 minutes on average, while hauling the net took 41.0 to 42.2 minutes, 10 to 12 minutes longer than casting. The operations most in need of improvement (first priority) were 'hauling net operation,' 'readjustment and storage of fishing gear,' and 'fish handling,' and the hardest factors in fishery management were, in order, 'reduction of catch,' 'labor shortage,' and 'rising labor costs.' The institutional improvement most needed in the coastal improved stow net fishery was the 'use of fine mesh nets.' Most respondents to the question on hiring foreign crews answered that they were 'either hiring or willing to hire foreign crews,' and the average number of foreign crews employed was 2.3 to 2.4 persons. The main reason for hiring (or considering hiring) foreign crews was 'high labor costs.' The degree of communication with foreign crews during fishing operations was rated 'moderate' or 'difficult to direct work.' The most serious problem in hiring foreign crews (first priority) was 'illegal departure.'
According to the survey results on structural improvement of coastal improved stow net fishing vessels, satisfaction with the vessel structure related to fishing operations was somewhat low, averaging 3.3 points on a five-point scale. The most inconvenient structure of the vessels currently in service (first priority), the space most needed in the construction of a new fishing vessel (first priority), and the space considered most important in new construction (first priority) was the 'fish warehouse.' The most preferred equipment for new fishing vessels was 'engine operation monitoring' and 'navigation safety devices.' The average size (tonnage class), horsepower, and overall length of a fishing vessel for profitable and safe operation were 13.8 to 14.0 tonnes, 808.3 to 819.5 H.P., and 23.4 to 23.5 meters, respectively. These results on the operation status of the coastal improved stow net fishery and the requirements for improving fishing vessel structure are expected to serve as basic reference data when building or improving fishing vessels.

Earthquake Monitoring : Future Strategy (지진관측 : 미래 발전 전략)

  • Chi, Heon-Cheol;Park, Jung-Ho;Kim, Geun-Young;Shin, Jin-Soo;Shin, In-Cheul;Lim, In-Seub;Jeong, Byung-Sun;Sheen, Dong-Hoon
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.3
    • /
    • pp.268-276
    • /
    • 2010
The Earthquake Hazard Mitigation Law entered into force in March 2009. Under the law, the obligation to monitor the effect of earthquakes on facilities was extended to many organizations, such as gas companies and local governments. Based on estimates by the National Emergency Management Agency (NEMA), the number of free-surface acceleration stations is expected to grow to more than 400. The advent of the internet protocol and simplified operation have allowed quick and easy installation of seismic stations. In addition, the dynamic range of seismic instruments has continuously improved, enough to evaluate damage intensity and to issue alarms directly for earthquake hazard mitigation. For direct visualization of damage intensity and affected area, Real Time Intensity COlor Mapping (RTICOM) is explained in detail; RTICOM is used to retrieve the essential quantity for damage evaluation, Peak Ground Acceleration (PGA). Destructive earthquake damage is usually due to surface waves, which arrive just after the S wave; the peak amplitude of the surface wave can be pre-estimated from the amplitude and frequency content of the first-arrival P wave. An Earthquake Early Warning (EEW) system is conventionally defined as one that estimates local magnitude from the P wave. The status of EEW is reviewed, and its application to the Odesan earthquake is illustrated with ShakeMap. In terms of rapidity, the earthquake announcements of the Korea Meteorological Administration (KMA) could be dramatically improved by the adoption of EEW. To realize hazard mitigation, EEW should be applied to critical local facilities such as nuclear power plants and fragile semiconductor plants. Distributed EEW is introduced with an application example from the Uljin earthquake. For both nationwide and locally distributed EEW applications, all relevant information needs to be shared in real time.
The plan for extending the Korea Integrated Seismic System (KISS) is briefly explained with a view to future cooperation in data sharing and utilization.
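As a minimal illustration of the quantity an RTICOM-style display color-codes, the sketch below extracts Peak Ground Acceleration (PGA) per station from acceleration traces. The station names and synthetic traces are hypothetical; a real pipeline would also apply instrument response removal, filtering, and calibration before picking the peak.

```python
import numpy as np

def peak_ground_acceleration(accel):
    """Peak absolute amplitude of an acceleration trace (same units as input)."""
    return float(np.max(np.abs(accel)))

def pga_map(stations):
    """Map each station id to its PGA, the quantity an intensity map displays."""
    return {name: peak_ground_acceleration(trace) for name, trace in stations.items()}

# Hypothetical traces: 1 s of 100 Hz data per station.
t = np.linspace(0.0, 1.0, 100)
stations = {
    "STA1": 0.05 * np.sin(2 * np.pi * 5 * t),   # weak shaking
    "STA2": -0.20 * np.sin(2 * np.pi * 3 * t),  # stronger shaking
}
print(pga_map(stations))
```

In practice each station streams its trace continuously, and the per-station PGA values are interpolated over the map to visualize damage intensity and area.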

A Study on the establishment of IoT management process in terms of business according to Paradigm Shift (패러다임 전환에 의한 기업 측면의 IoT 경영 프로세스 구축방안 연구)

  • Jeong, Min-Eui;Yu, Song-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.151-171
    • /
    • 2015
This study examined the concept of the Internet of Things (IoT), the major issues and trends in the domestic and international IoT market, and the advent of the IoT era, which has brought about a 'paradigm shift,' and proposed an appropriate response strategy from the enterprise perspective. Global competition has begun in the IoT market, so for businesses to be competitive and responsive, efforts by government as well as by the companies themselves are needed. In particular, a faster and more efficient strategy is required to cope with the dynamic environment. In other words, the study proposes a management strategy that can respond to the competitive IoT era at its tipping point through the lens of the paradigm shift. We forecast the emerging paradigm shift through a comparative analysis of past management paradigms and the IoT management paradigm, as follows: I) knowledge & learning oriented management, II) technology & innovation oriented management, III) demand driven management, IV) global collaboration management. The knowledge & learning oriented management paradigm is expected to become a new management paradigm owing to the development of IT and information processing technology; alongside the rapid development of IT infrastructure and of data processing and storage, knowledge sharing and learning have become more important. The current hardware-oriented management paradigm will shift to a software-oriented one; in particular, the software and platform market, a key component of the IoT ecosystem, is expected to be led by technology & innovation oriented management. In 2011, Gartner announced the concept of "Demand-Driven Value Networks (DDVN)," which emphasizes the value of the network as a whole; the demand driven management paradigm therefore creates demand through advanced processes, rather than processes that simply respond to demand.
The global collaboration management paradigm creates value through fusion between technologies, between countries, and between industries. In particular, cooperation between large enterprises, which have financial resources and brand power, and venture companies with creative ideas and technologies will generate positive synergies, building a win-win environment for large and small companies alike. To cope with the paradigm shift and establish an enterprise management strategy, this study utilized the 'RTE cyclone model' proposed by Gartner. The RTE concept consists of three stages: Lead, Operate, and Manage. The Lead stage utilizes capital to strengthen business competitiveness; its goal is to link external stimuli to strategy development and to execute the company's business strategy through capital and investment activities in response to environmental changes. The Manage stage responds appropriately to threats and internalizes the goals of the enterprise. The Operate stage takes action to increase the efficiency of services across the enterprise, achieving process integration and simplification with real-time data capture. The RTE (Real Time Enterprise) concept thus has practical value as a management strategy. Applying it in this study, we propose an 'IoT-RTE cyclone model' that emphasizes enterprise agility and rests on real-time monitoring, analysis, and action through IT and IoT technology. The IoT-RTE cyclone model integrates the business processes of each sector of the enterprise and supports its overall service, so it can be used as an effective response strategy for the enterprise. In particular, as the IoT-RTE cyclone model responds to external events, wasteful elements are removed with each repetition of the process, allowing the process to operate more efficiently and with greater agility.
Because it supports the overall service of the enterprise, this model can serve as an effective response strategy in the rapidly changing IoT era. When the model leverages a collaborative system among enterprises, breakthrough cost savings can be expected through improved competitiveness, shorter global lead times, and minimized duplication.

Monitoring soybean growth using L, C, and X-bands automatic radar scatterometer measurement system (L, C, X-밴드 레이더 산란계 자동측정시스템을 이용한 콩 생육 모니터링)

  • Kim, Yi-Hyun;Hong, Suk-Young;Lee, Hoon-Yol;Lee, Jae-Eun
    • Korean Journal of Remote Sensing
    • /
    • v.27 no.2
    • /
    • pp.191-201
    • /
    • 2011
Soybean is widely grown for its edible beans, which have numerous uses. Microwave remote sensing has great potential over conventional remote sensing in the visible and infrared spectra because of its all-weather, day-and-night imaging capability. In this investigation, a ground-based polarimetric scatterometer operating at multiple frequencies was used to continuously monitor the condition of a soybean field. Polarimetric backscatter data at L-, C-, and X-bands were acquired every 10 minutes throughout the various soybean growth stages. The polarimetric scatterometer consists of a vector network analyzer, a microwave switch, radio frequency cables, a power unit, and a personal computer; the components were installed inside an air-conditioned shelter to maintain constant temperature and humidity during the data acquisition period. The backscattering coefficients were calculated from the measured data at an incidence angle of 40° and full polarization (HH, VV, HV, VH) by applying the radar equation. Soybean growth data such as leaf area index (LAI), plant height, fresh and dry weight, vegetation water content, and pod weight were measured periodically throughout the growing season. We measured the temporal variation of the backscattering coefficients of the soybean crop at L-, C-, and X-bands during the growth period. In all three bands, VV-polarized backscattering coefficients were higher than HH-polarized coefficients until mid-June; thereafter, HH-polarized coefficients were higher than the VV- and HV-polarized coefficients. However, the cross-over stage (HH > VV) differed by frequency: DOY 200 for L-band and DOY 210 for both C- and X-bands. The temporal trend of the backscattering coefficients in all bands agreed with the soybean growth data such as LAI, dry weight, and plant height; i.e., they increased until about DOY 271 and decreased afterward.
We plotted the relationship between the backscattering coefficients in the three bands and the soybean growth parameters. The growth parameters were highly correlated with HH-polarization at L-band (r over 0.92).
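The band-to-growth comparison described above can be sketched as follows. The sigma-zero and LAI series here are illustrative, not the paper's data; the dB conversion uses the standard 10·log10 convention for backscattering coefficients.

```python
import numpy as np

def to_db(sigma0_linear):
    """Convert a linear-power backscattering coefficient to decibels."""
    return 10.0 * np.log10(sigma0_linear)

# Hypothetical seasonal series: HH-pol sigma^0 (linear power) and measured LAI,
# both sampled at the same dates through the growing season.
sigma0_hh = np.array([0.01, 0.02, 0.05, 0.09, 0.12, 0.10])
lai       = np.array([0.3,  1.1,  2.5,  4.0,  5.2,  4.6])

# Pearson correlation between the dB backscatter series and the growth parameter.
r = np.corrcoef(to_db(sigma0_hh), lai)[0, 1]
print(f"r = {r:.2f}")
```

Repeating this for each band and polarization against each growth parameter (LAI, dry weight, plant height) reproduces the kind of comparison behind the reported r > 0.92 for HH at L-band.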

How effective has the Wairau River erodible embankment been in removing sediment from the Lower Wairau River?

  • Kyle, Christensen
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.237-237
    • /
    • 2015
The district of Marlborough has had more than its share of river management projects over the past 150 years, each one uniquely affecting the geomorphology and flood hazard of the Wairau Plains. A major early project was to block the Opawa distributary channel at Conders Bend. The Opawa distributary channel took a third or more of the Wairau River's floodwaters and was a growing threat to Blenheim. Blocking the Opawa required the Wairau and Lower Wairau rivers to carry greater flood flows more often; consequently, the Lower Wairau River was breaking out of its stopbanks approximately every seven years. The idea of diverting flood waters at Tuamarina by cutting a direct diversion to the sea through the beach ridges was conceptualised around the 1920s; however, limits on resources and machinery meant that excavating this diversion did not become feasible until the 1960s. In 1964 a 10 m wide pilot channel was cut from the sea to Tuamarina with an initial capacity of 700 m³/s. It was expected that floods would eventually scour this 'Wairau Diversion' out to its design channel width of 150 m. This took many more years than initially thought, but after approximately 50 years, with a little mechanical assistance, the Wairau Diversion reached an adequate capacity. Using the power of the river to erode the channel out to its design width and depth was a brilliant idea that saved many thousands of dollars in construction costs, and it is somewhat ironic that the very same concept is now being used to deal with the aggradation problem the Wairau Diversion has caused. The Diversion did provide some flood relief to the lower reaches of the river, but as the Diversion channel eroded and enlarged, the Lower Wairau River aggraded and lost capacity because it could no longer pass its sediment load with the reduced flood flows.
It is estimated that approximately 2,000,000 m³ of sediment was deposited on the bed of the Lower Wairau River between the Diversion's introduction in 1964 and 2010, raising the Lower Wairau's bed upwards of 1.5 m in some locations. A numerical morphological model (MIKE-11 ST) was used to assess a number of options, which led to the decision and resource consent to construct an erodible (fuse plug) bank at the head of the Wairau Diversion to divert more frequent scouring flows (400 m³/s and above) down the Lower Wairau River. Full control gates were ruled out on the grounds of expense. Initial construction of the erodible bank followed in late 2009, with the bank's level at the fuse location set to overtop and begin washing out at a combined Wairau flow of 1,400 m³/s, which avoids berm flooding in the Lower Wairau. In the three years since the erodible bank was first constructed, the Wairau River has sustained 14 events with recorded flows at Tuamarina above 1,000 m³/s, three of them in excess of 2,500 m³/s. These freshes and floods have washed out the erodible bank eight times, with a combined rebuild expenditure of $80,000. Marlborough District Council's Rivers & Drainage Department maintains a regular monitoring programme for the bed of the Lower Wairau River, which consists of recurrently surveying a series of standard cross sections and estimating the mean bed level (MBL) at each section as well as the overall MBL change over time. A survey was carried out just prior to the installation of the erodible bank, and another earlier this year. The results from this latest survey show that, for the first time since construction of the Wairau Diversion, the Lower Wairau River is enlarging. It is estimated that the entire bed of the Lower Wairau has eroded down by an average of 60 mm since the introduction of the erodible bank, which equates to a total volume of 260,000 m³.
At a cost of $0.30/m³ this represents excellent value compared to mechanical dredging, which would likely cost in excess of $10/m³. This confirms that the idea of using the river to enlarge the channel is again working for the Wairau River system, and that in time nature's "excavator" will provide a channel capacity that continues to meet design requirements.
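The figures quoted above can be checked with simple arithmetic. This is a back-of-envelope sketch using only the numbers in the abstract; the implied bed area is inferred, not stated in the source.

```python
# Reported figures from the survey described above.
mbl_drop_m = 0.060           # average bed lowering since the erodible bank (60 mm)
eroded_volume_m3 = 260_000   # total eroded volume reported

# Inferred (not stated): the bed area consistent with those two numbers.
implied_bed_area_m2 = eroded_volume_m3 / mbl_drop_m

# Cost comparison at the per-cubic-metre rates quoted in the abstract.
cost_erodible_bank = 0.30 * eroded_volume_m3   # ~$0.30/m3 via bank rebuilds
cost_dredging      = 10.0 * eroded_volume_m3   # >$10/m3 mechanical dredging

print(implied_bed_area_m2, cost_erodible_bank, cost_dredging)
```

Note that $0.30/m³ times 260,000 m³ gives about $78,000, consistent with the $80,000 combined rebuild expenditure reported for the eight washouts.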


Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
The data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. Failures in IT facilities in particular occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies predicting failure in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The cause of failures occurring inside a server, on the other hand, is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures rarely occur in isolation: a failure may cause failures in other servers, or be triggered by failures elsewhere. In other words, while existing studies analyzed failures on the assumption that a single server does not affect other servers, this study assumes that failures propagate between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes was defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed together within those sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the level of impact of multiple failures differs for each server; this algorithm increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data was treated once as a single-server state and once as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy for the complex-failure case by optimizing the threshold for each server.
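The five-minute co-occurrence rule described above can be sketched as follows; the event-log format, device names, and timestamps are hypothetical, chosen only to illustrate the grouping.

```python
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)  # failures within 5 minutes count as simultaneous

# Hypothetical failure log: (device, failure timestamp).
events = [
    ("server-A", datetime(2020, 1, 1, 10, 0)),
    ("server-B", datetime(2020, 1, 1, 10, 3)),   # within 5 min of server-A
    ("db-node",  datetime(2020, 1, 1, 11, 30)),  # isolated failure
]

def simultaneous_groups(events):
    """Group chronologically ordered failures whose gaps are within WINDOW."""
    events = sorted(events, key=lambda e: e[1])
    groups, current = [], [events[0]]
    for prev, cur in zip(events, events[1:]):
        if cur[1] - prev[1] <= WINDOW:
            current.append(cur)      # extend the simultaneous-failure group
        else:
            groups.append(current)   # gap too large: close the group
            current = [cur]
    groups.append(current)
    return groups

print(simultaneous_groups(events))
```

Each resulting group is one candidate "complex failure" sequence; counting how often device pairs appear in the same group is one way to pick the frequently co-failing devices mentioned above.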
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed. This result supports the hypothesis that failures propagate between servers, and confirms that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. It is expected that such failures can be prevented in advance using these results.
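The attention-weighting idea, giving more weight to servers with greater impact on a failure, can be sketched in plain NumPy. The shapes and random values here are purely illustrative: in the actual model the per-server hidden states come from LSTMs and the query vector is learned during training.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.normal(size=(5, 8))   # 5 servers x 8-dim hidden states (e.g. LSTM outputs)
w = rng.normal(size=(8,))     # attention query vector (learned in practice)

scores = h @ w                                   # relevance score per server
alpha = np.exp(scores) / np.exp(scores).sum()    # softmax attention weights
context = alpha @ h                              # impact-weighted summary vector

print(alpha)   # larger weight = server contributes more to the prediction
```

The `context` vector is what a downstream classifier would consume; servers whose hidden states align with the query receive larger `alpha` and thus dominate the prediction, which is the weighting behavior described above.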