• Title/Summary/Keyword: large-scale systems


A Survey on Egg Laying Performance and Distribution Status of Animal Welfare Certified Farms for Laying Hens (산란계 동물복지 인증 농가의 사육 및 유통 현황 조사)

  • Hong, Eui-Chul;Kang, Hwan-Ku;Park, Ki-Tae;Jeon, Jin-Joo;Kim, Hyun-Soo;Kim, Chan-Ho;Kim, Sang-Ho
    • Korean Journal of Poultry Science / v.46 no.2 / pp.55-63 / 2019
  • This study was conducted to evaluate animal welfare approved farms under three housing systems (open, windowless, and free-range). The survey covered 25 animal welfare approved farms, and 10 farms were surveyed for distribution status. The main breed on all animal welfare approved laying-hen farms was the Hy-Line Brown. In the case of open houses, laying hens were raised in traditional and panel houses simultaneously; however, the ratio of panel houses was 58.3%, higher than that of traditional houses. All windowless houses were made of panels, and more than 15,000 laying hens were housed in a single windowless house. Free-range houses were maintained on a small scale of less than 12,000 birds. Fifty-six percent of the surveyed farms kept hens at a stocking density of 7~8 birds/m². In terms of male-to-female ratios, most farms maintained 1 male to 15 females, although some farms kept 17 or 20 females per male. The daily dietary allowance was 110~170 g, and 32% of the surveyed farms provided more than 150 g/day of feed, showing the importance of forage feed. The age at first egg was 123, 122, and 120 days, and the peak production rate was 91.8%, 94.9%, and 86.5% in open, windowless, and free-range houses, respectively. The average egg production rate was 74.0%, 84.6%, and 72.7% in open, windowless, and free-range houses, respectively; thus, there was no correlation between feed intake and hen-housed eggs. Welfare-certified eggs were distributed mainly through direct transactions with consumers or through contract production. The ratio of direct transactions with large-scale marts and eco-friendly specialty stores was higher for welfare-approved eggs than for conventional eggs. The rate of contract sales of eggs was high in both the barn and free-range systems, and the percentage of farms selling by courier was also high. Excluding courier services, the price of eggs in the barn system rose to more than 30 won/egg in the second half of 2017 (after the avian influenza (AI) outbreak), and the price of eggs in the free-range system rose to more than 50 won/egg over the same period. In the case of courier sales, the same price of 500 won was maintained before and after AI. In conclusion, the results of this study can be used as basic data for improving the animal welfare certification system for laying hens in Korea.

Keyword Network Analysis for Technology Forecasting (기술예측을 위한 특허 키워드 네트워크 분석)

  • Choi, Jin-Ho;Kim, Hee-Su;Im, Nam-Gyu
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.227-240 / 2011
  • New concepts and ideas often result from extensive recombination of existing concepts or ideas. Both researchers and developers build on existing concepts and ideas in published papers or registered patents to develop new theories and technologies that in turn serve as a basis for further development. As the importance of patents increases, so does that of patent analysis. Patent analysis is largely divided into network-based and keyword-based analyses. The former lacks the ability to analyze technology information in detail, while the latter is unable to identify the relationships between technologies. To overcome the limitations of both approaches, this study blends the two methods and proposes a keyword-network-based analysis methodology. We collected significant technology information related to Light Emitting Diodes (LED) from each patent through text mining, built a keyword network, and then executed a community network analysis on the collected data. The results of the analysis are as follows. First, the patent keyword network showed very low density and an exceptionally high clustering coefficient. Technically, density is obtained by dividing the number of ties in a network by the number of all possible ties. The value ranges between 0 and 1, with higher values indicating denser networks and lower values indicating sparser networks. In real-world networks, the density varies with the size of the network; increasing the size of a network generally leads to a decrease in density. The clustering coefficient is a network-level measure that illustrates the tendency of nodes to cluster into densely interconnected modules. It captures the small-world property, in which a network can be highly clustered while still having a small average distance between nodes despite the large number of nodes. Therefore, the low density of the patent keyword network means that its nodes are connected only sparsely, while the high clustering coefficient shows that connected nodes tend to be closely linked to one another. Second, the cumulative degree distribution of the patent keyword network, like other knowledge networks such as citation or collaboration networks, followed a clear power-law distribution. A well-known mechanism behind this pattern is preferential attachment, whereby a node with more links is likely to attain further new links as the network evolves. Unlike normal distributions, the power-law distribution has no representative scale: one cannot pick a representative or average value because there is always a considerable probability of finding much larger values. Networks with power-law distributions are therefore often referred to as scale-free networks. The presence of a heavy-tailed, scale-free distribution is the fundamental signature of emergent collective behavior among the actors who contribute to forming the network. In our context, the more frequently a patent keyword is used, the more often it is selected by researchers and associated with other keywords or concepts to constitute and convey new patents or technologies. The evidence of a power-law distribution thus suggests that preferential attachment underlies the heavy-tailed distributions observed across the growing patent keyword network.
Third, we found that among keywords flowing into a particular field, the vast majority of keywords with new links join existing keywords in the associated community when forming the concept of a new patent. This finding held for both the short-term (4-year) and long-term (10-year) analyses. Furthermore, the keyword combination information derived from the proposed methodology enables one to forecast which concepts will combine to form a new patent dimension and to refer to those concepts when developing a new patent.
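
The abstract above reports network-level measures (density, clustering coefficient, power-law degree distribution) for a patent keyword co-occurrence network. The sketch below shows how such measures could be computed, assuming Python with networkx and a few hypothetical per-patent keyword sets; the paper does not name its tools.

```python
# Illustrative sketch (not the paper's code): build a patent keyword
# co-occurrence network and compute the network-level measures
# discussed in the abstract, using networkx.
import itertools
from collections import Counter
import networkx as nx

# Hypothetical input: one set of extracted keywords per LED patent.
patent_keywords = [
    {"led", "phosphor", "substrate"},
    {"led", "heat sink", "substrate"},
    {"phosphor", "wavelength", "led"},
]

G = nx.Graph()
for keywords in patent_keywords:
    # Keywords appearing in the same patent are linked pairwise.
    for a, b in itertools.combinations(sorted(keywords), 2):
        G.add_edge(a, b)

# Density: ties present divided by all possible ties (ranges 0..1).
print("density:", nx.density(G))
# Clustering coefficient: tendency of nodes to form tight modules.
print("avg clustering:", nx.average_clustering(G))

# Degree distribution, whose cumulative form the paper reports
# to follow a power law (scale-free network).
degree_counts = Counter(dict(G.degree()).values())
print("degree distribution:", dict(degree_counts))
```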

Modern Paper Quality Control

  • Olavi Komppa
    • Proceedings of the Korea Technical Association of the Pulp and Paper Industry Conference / 2000.06a / pp.16-23 / 2000
  • The increasing functional needs of top-quality printing papers and packaging paperboards, and especially the rapid developments in electronic printing processes and various computer printers during the past few years, set new targets and requirements for modern paper quality. Most of these paper grades today have a relatively high filler content, are moderately or heavily calendered, and have many coating layers for the best appearance and performance. In practice, this means that many of the traditional quality assurance methods, mostly designed to measure papers made of pure, native pulp only, cannot reliably (or at all) be used to analyze or rank the quality of modern papers. Hence, the introduction of new measurement techniques is necessary to assure and further develop paper quality today and in the future. Paper formation, i.e. the small-scale (millimeter-scale) variation of basis weight, is the most important quality parameter in papermaking because it influences practically all other quality properties of paper. The ideal paper would be completely uniform, so that the basis weight of each small point (area) measured would be the same. In practice, of course, this is not possible because there are always relatively large local variations in the paper. These small-scale basis weight variations are the major reason for many other quality problems, including calender blackening, uneven coating results, uneven printing results, etc. The traditionally used visual inspection or optical measurement of the paper does not give a reliable understanding of the material variations in the paper, because in a modern papermaking process the optical behavior of paper is strongly affected by the use of, e.g., fillers, dyes, or coating colors. Furthermore, the opacity (optical density) of the paper changes at different process stages such as wet pressing and calendering. The greatest advantage of using the beta transmission method to measure paper formation is that it can be very reliably calibrated to measure the true basis weight variation of all kinds of paper and board, independently of sample basis weight or paper grade. This makes it possible to measure, compare, and judge papers made of different raw materials or colors, and even to measure heavily calendered, coated, or printed papers. Scientific research on paper physics has shown that the orientation of the fibers in the top layer (paper surface) of the sheet plays the key role in paper curling and cockling, causing the typical practical problems (paper jams) with modern fax and copy machines, electronic printing, etc. On the other hand, the fiber orientation at the surface and middle layers of the sheet controls the bending stiffness of paperboard. Therefore, a reliable measurement of paper surface fiber orientation gives us a magnificent tool to investigate and predict paper curling and cockling tendency, and provides the necessary information to fine-tune the manufacturing process for optimum quality. Many papers, especially heavily calendered and coated grades, strongly resist liquid and gas penetration, being beyond the measurement range of traditional instruments or resulting in inconveniently long measuring times per sample. The increased surface hardness and the use of filler minerals and mechanical pulp make a reliable, non-leaking sample contact to the measurement head a challenge of its own.
Paper surface coating creates, as expected, a layer with completely different permeability characteristics compared to the other layers of the sheet. The latest developments in sensor technologies have made it possible to reliably measure gas flow under well-controlled conditions, allowing investigation of the gas penetration of open structures, such as cigarette paper, tissue, or sack paper, and, in the low-permeability range, analysis of even fully greaseproof papers, silicone papers, heavily coated papers and boards, or even the detection of defects in barrier coatings. Even nitrogen or helium may be used as the gas, giving completely new possibilities to rank products or to find correlations to critical process or converting parameters. All modern paper machines include many on-line measuring instruments which are used to provide the necessary information for automatic process control systems. Hence, the reliability of the information obtained from the different sensors is vital for good optimization and process stability. If any of these on-line sensors does not operate exactly as planned (having even a small measurement error or malfunction), the process control will set the machine to operate away from the optimum, resulting in loss of profit or eventual problems in quality or runnability. To assure optimum operation of the paper machines, a novel quality assurance policy for the on-line measurements has been developed, including control procedures utilizing traceable, accredited standards for the best reliability and performance.
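
As a small illustration of the formation measurement discussed above, the following sketch computes the coefficient of variation of basis weight over a millimetre-scale grid; the synthetic data and the choice of CoV as the formation index are assumptions for illustration, not taken from the paper.

```python
# Illustrative sketch (not from the paper): quantify paper formation as the
# small-scale (millimetre-scale) variation of basis weight, e.g. as measured
# by a beta-transmission gauge. The coefficient of variation is used here as
# a simple formation index.
import numpy as np

# Hypothetical basis-weight map in g/m^2, one value per ~1 mm^2 measurement spot.
basis_weight = np.random.normal(loc=80.0, scale=3.0, size=(200, 200))

mean_bw = basis_weight.mean()
std_bw = basis_weight.std()
formation_cov = 100.0 * std_bw / mean_bw  # coefficient of variation, %

print(f"mean basis weight: {mean_bw:.1f} g/m^2")
print(f"formation (CoV): {formation_cov:.2f} %")
```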

A Study on Risk Factor Identification by Specialty Construction Industry Sector through Construction Accident Cases : Focused on the Insurance Data of Specialty Construction Worker (건설재해사례 분석에 의한 전문건설업종별 위험요인 탐색 : 전문건설업 근로자 공제자료를 중심으로)

  • Lee, Young Jai;Kang, Seong Kyung;Yu, Hwan
    • Journal of Korea Society of Industrial Information Systems / v.24 no.1 / pp.45-63 / 2019
  • The number of domestic construction companies is growing every year, while construction workers' exposure to disaster risk is increasing due to technological advancements and the popularity of high-rise buildings. In particular, the industry faces more fatalities and severe large-scale accidents because of its characteristics, including an influx of foreign workers with different languages and cultures, a large number of aged workers, outsourcing, work at height, and heavy-machinery construction. The construction industry is labor-intensive, work must be completed within a given timeline, and the working environment is unusual, with many night shifts. In addition, when a fixed construction budget is not secured, there is less investment in safety management, resulting in poor risk management at the construction site. Given that the construction industry has a higher accident rate and fatality rate, a risky and unusual working environment, and a diverse labor pool ranging from foreign to aged workers, preemptive safety management through risk factor identification is a mandatory requirement for the construction industry and its sites. This study analyzes about 8,500 construction accident cases that occurred over the past 10 years and identifies risk factors by specialty construction industry sector to provide a systematic basis for risk management. Based on an interrelation analysis between accident types, work types, original cause materials, and assailing materials, there is a correlation between each analysis factor and the industry type. In particular, work tasks are strongly correlated with industry type. Reinforced concrete work and earthwork account for the most frequent accidents, and they are high not only in accident frequency but also in risk across the occurrence categories.
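
A minimal sketch of the kind of interrelation (cross-tabulation) analysis described above, assuming pandas and purely hypothetical accident records rather than the study's insurance data:

```python
# Illustrative sketch (not the paper's analysis): a simple cross-tabulation
# between work type and accident type, of the kind that could be run over
# accident records. The data below are hypothetical.
import pandas as pd

accidents = pd.DataFrame({
    "work_type": ["reinforced concrete", "earthwork", "reinforced concrete",
                  "painting", "earthwork", "reinforced concrete"],
    "accident_type": ["fall", "struck by equipment", "fall",
                      "fall", "collapse", "struck by object"],
})

# Frequency table: which accident types dominate each work type.
crosstab = pd.crosstab(accidents["work_type"], accidents["accident_type"])
print(crosstab)

# Row-normalised shares highlight the risk profile of each work type.
print(crosstab.div(crosstab.sum(axis=1), axis=0).round(2))
```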

Development of Intelligent ATP System Using Genetic Algorithm (유전 알고리듬을 적용한 지능형 ATP 시스템 개발)

  • Kim, Tai-Young
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.131-145 / 2010
  • The framework for making coordinated decisions for large-scale facilities has become an important issue in supply chain (SC) management research. The competitive business environment requires companies to continuously search for ways to achieve high efficiency and lower operational costs. In the areas of production/distribution planning, many researchers and practitioners have developed and evaluated deterministic models to coordinate important and interrelated logistic decisions such as capacity management, inventory allocation, and vehicle routing. They initially investigated the various processes of the SC separately and later became more interested in problems encompassing the whole SC system. The accurate quotation of ATP (Available-To-Promise) plays a very important role in enhancing customer satisfaction and maximizing the fill rate. The complexity of an intelligent manufacturing system, which includes all the linkages among procurement, production, and distribution, makes the accurate quotation of ATP quite a difficult job. In addition, various alternative models for an ATP system with time lags have been developed and evaluated, and in most cases these models have assumed that the time lags are integer multiples of a unit time grid. However, integer time lags are very rare in practice, and models developed using integer time lags therefore only approximate real systems. The differences caused by this approximation frequently result in significant accuracy degradation. To introduce the ATP model with time lags, we first introduce the dynamic production function. Hackman and Leachman's dynamic production function initiated research directly related to the topic of this paper. They proposed a modeling framework for systems with non-integer time lags and showed how to apply the framework to a variety of systems, including continuous time series, manufacturing resource planning, and the critical path method. Their formulation requires no additional variables or constraints and is capable of representing real-world systems more accurately. Previously, to cope with non-integer time lags, researchers usually modeled the system either by rounding lags to the nearest integers or by subdividing the time grid so that the lags become integer multiples of the grid. Each approach has a critical weakness: the first underestimates lead times, potentially leading to infeasibilities, or overestimates them, potentially resulting in excessive work-in-process; the second drastically inflates the problem size. We consider an optimized ATP system with non-integer time lags in supply chain management. We focus on a system in which a worldwide headquarters, distribution centers, and manufacturing facilities are globally networked. We develop a mixed integer programming (MIP) model for the ATP process, including the definition of the required data flow. The illustrative ATP module shows that the proposed system has a significant effect in SCM. The system we consider is composed of multiple production facilities with multiple products, multiple distribution centers, and multiple customers. For this system, we consider an ATP scheduling and capacity allocation problem. In this study, we proposed a model for the ATP system in SCM using the dynamic production function and considering non-integer time lags.
The model is developed under a framework suitable for non-integer lags and is therefore more accurate than the models we usually encounter. We developed an intelligent ATP system for this model using a genetic algorithm. We focus on a capacitated production planning and capacity allocation problem, develop a mixed integer programming model, and propose an efficient heuristic procedure using an evolutionary system to solve it. This method makes it possible for the population to reach an approximate solution easily. Moreover, we designed and utilized a representation scheme that allows the proposed models to represent real variables. The proposed regeneration procedure, which evaluates each infeasible chromosome, makes the solutions converge to the optimum quickly.
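
A minimal sketch of the evolutionary search idea described above, assuming a toy capacity allocation problem with hypothetical capacities and costs; it is not the paper's MIP model or regeneration procedure, and infeasible chromosomes are simply penalised rather than repaired.

```python
# Illustrative sketch (not the paper's system): a minimal genetic algorithm
# for allocating an order quantity across capacitated facilities, to show the
# kind of evolutionary search the abstract describes. All numbers are
# hypothetical.
import random

CAPACITY = [120, 80, 60]        # per-facility capacity
UNIT_COST = [1.0, 1.4, 0.9]     # per-unit production cost
DEMAND = 200                    # order quantity to promise (ATP)

def fitness(chromosome):
    total = sum(chromosome)
    cost = sum(q * c for q, c in zip(chromosome, UNIT_COST))
    penalty = 50.0 * abs(total - DEMAND)                                   # demand mismatch
    penalty += 50.0 * sum(max(0, q - cap) for q, cap in zip(chromosome, CAPACITY))
    return -(cost + penalty)                                               # higher is better

def random_chromosome():
    return [random.randint(0, cap) for cap in CAPACITY]

def crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.2):
    return [random.randint(0, cap) if random.random() < rate else q
            for q, cap in zip(chromosome, CAPACITY)]

population = [random_chromosome() for _ in range(40)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    population = parents + children

best = max(population, key=fitness)
print("allocation:", best, "total:", sum(best))
```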

Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)

  • Seo, Yian;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.1-19 / 2018
  • A large amount of data is now available for the research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed by deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. It has a layer structure well suited to image classification, comprising convolutional layers for generating feature maps, pooling layers for reducing their dimensionality, and fully connected layers for classifying the extracted features. However, most classification models have been trained using online product images, which are taken under controlled conditions, such as images of the apparel itself or of a professional model wearing it. Such images may not be effective for training a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset, which captures mobility. This allows the classification model to be trained with far more variable data and enhances its adaptation to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs is composed of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on a large-scale dataset, the ImageNet dataset, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved great accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. For the runway image dataset, we could not find any previously published, publicly available dataset, so we collected one from Google Image Search, obtaining 2,426 images of 32 major fashion brands including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieved an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images capturing all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying Transfer Learning and using the checkpoint and parameters provided by TensorFlow Slim, we could reduce the time spent training the classification model to about 6 minutes per experiment. This model can be used in many business applications where the query image can be a runway image, product image, or street fashion image. To be specific, runway query images can be used in a mobile application service during fashion week to facilitate brand search, street style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
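
A minimal sketch of the pre-train/fine-tune pattern described above. The paper fine-tuned a GoogLeNet checkpoint via TensorFlow Slim; here tf.keras with ImageNet-pretrained InceptionV3 is used as a stand-in, and the dataset path and 32-class layout are assumptions.

```python
# Illustrative sketch, not the paper's implementation: transfer learning with a
# pre-trained backbone (InceptionV3 as a stand-in for GoogLeNet) and a new
# classifier head fine-tuned on a runway image dataset.
import tensorflow as tf

NUM_BRANDS = 32  # number of fashion brand classes in the runway dataset

base = tf.keras.applications.InceptionV3(
    weights="imagenet",          # pre-training stage: ImageNet weights
    include_top=False,
    input_shape=(299, 299, 3),
)
base.trainable = False           # freeze the convolutional feature extractor

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(scale=1.0 / 127.5, offset=-1.0),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_BRANDS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Fine-tuning stage: train the new classifier head on runway images
# organised as runway_images/<brand_name>/*.jpg (hypothetical layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=5)
```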

Fiber Optic Sensors for Smart Monitoring (스마트 모니터링용 광섬유센서)

  • Kim, Ki-Soo
    • Journal of the Earthquake Engineering Society of Korea / v.10 no.6 s.52 / pp.137-145 / 2006
  • Recently, interest in the structural monitoring of civil infrastructure has increased. In particular, as civil infrastructures such as bridges, tunnels, and buildings become larger in scale, it is necessary to monitor and maintain their safety, which requires smart systems that can provide long-term monitoring over the service life of the structures. In this paper, we investigated the possibilities of applying fiber optic sensors to various structures. We investigated the use of fiber optic Bragg grating sensors in a joint structure. The sensors showed a good response to the structural behavior of the joint, whereas electric gauges lack the sensitivity, durability, and long-term stability needed for continuous monitoring. We also applied fiber optic structural monitoring to a concrete beam structure repaired with composites. Peel-out effects were detected with the optical fiber Bragg grating sensors, and a strain difference between the main structure and the repair carbon sheets was observed when the two separated. A real field test was performed to verify the behavior of fiber Bragg grating sensors attached to the containment structure of the Uljin nuclear power plant in Korea, as part of a structural integrity test demonstrating the structural response of non-prototype primary containment structures. The optical fiber Bragg grating sensor smart system, a probable means for long-term assessment, is applicable to monitoring structural members in various civil infrastructures.
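
The abstract does not give the sensor equations, but the standard fiber Bragg grating relation between wavelength shift and strain illustrates how such measurements are read out; the photo-elastic coefficient below is a typical assumed value, and temperature effects are ignored.

```python
# Illustrative sketch (standard FBG relation, not from the paper): convert a
# Bragg wavelength shift into strain, ignoring temperature cross-sensitivity.
# p_e ~ 0.22 is a typical effective photo-elastic coefficient for silica fibre
# and is an assumption here.
P_E = 0.22                 # effective photo-elastic coefficient (typical)
LAMBDA_B = 1550.0e-9       # nominal Bragg wavelength, m

def strain_from_shift(delta_lambda_m: float) -> float:
    """Strain from Bragg wavelength shift: eps = dL / (L * (1 - p_e))."""
    return delta_lambda_m / (LAMBDA_B * (1.0 - P_E))

# Example: a 1.2 pm shift corresponds to roughly 1 microstrain.
shift = 1.2e-12
print(f"strain: {strain_from_shift(shift) * 1e6:.2f} microstrain")
```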

Reverse Osmosis Treatment of Swine Wastewater with Various Pretreatment Systems (축산 폐수의 전처리 방법과 역삼투압 처리)

  • Park, Soon Ju;Kim, Moon Il;Kim, Do Yun;Chang, Ho Nam;Chang, Seung Teak
    • Clean Technology / v.9 no.2 / pp.49-55 / 2003
  • The generation of livestock wastewater in Korea amounts to 130,000 m³/day, only 0.43% of the total wastewater volume but 8.6% of the total BOD loading. Furthermore, this wastewater contains large amounts of nitrogen and phosphorus, which are major causes of eutrophication in rivers and lakes. The average volume of livestock wastewater on a single Korean farm is only 2.5 m³/day, which necessitates the development of a simple and economical process for the removal of nitrogen and phosphorus. Introducing a filtration step removes more than 90% of suspended solids, and subsequent application of reverse osmosis removes more than 95% of the total nitrogen and phosphorus in the wastewater. The effluent of this treatment contains less than 200 mg/L of total nitrogen and 1 mg/L of total phosphorus, which is lower than the regulation values of the Ministry of Environment, Korea (260 mg/L of total N and 50 mg/L of total P). Treating 2 m³/day of livestock wastewater was found to be feasible with the application of filtration and reverse osmosis, and the electricity requirement was estimated to be about 30 kWh/month.
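
A small arithmetic sketch of the treatment performance described above, assuming hypothetical influent concentrations and applying the reported >95% removal by reverse osmosis as a lower bound, then checking the result against the quoted regulation values:

```python
# Illustrative sketch (influent values are hypothetical): apply the removal
# efficiency reported in the abstract (>95% of total N and P via reverse
# osmosis after filtration) and compare the resulting effluent with the Korean
# regulation values quoted there (260 mg/L T-N, 50 mg/L T-P).
INFLUENT = {"total_N": 3500.0, "total_P": 450.0}   # mg/L, assumed swine wastewater
RO_REMOVAL = 0.95                                   # minimum fraction removed by the RO stage
LIMITS = {"total_N": 260.0, "total_P": 50.0}        # mg/L, regulation values

for nutrient, c_in in INFLUENT.items():
    c_out = c_in * (1.0 - RO_REMOVAL)
    verdict = "meets" if c_out <= LIMITS[nutrient] else "exceeds"
    print(f"{nutrient}: {c_in:.0f} -> {c_out:.0f} mg/L ({verdict} {LIMITS[nutrient]:.0f} mg/L limit)")
```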


Study of the Construction of a Coastal Disaster Prevention System using Deep Learning (딥러닝을 이용한 연안방재 시스템 구축에 관한 연구)

  • Kim, Yeon-Joong;Kim, Tae-Woo;Yoon, Jong-Sung;Kim, Myong-Kyu
    • Journal of Ocean Engineering and Technology / v.33 no.6 / pp.590-596 / 2019
  • Numerous deaths and substantial property damage have recently occurred all over the world due to frequent, high-intensity disasters driven by abnormal climate, which is caused by problems such as global warming. Such large-scale disasters have become an international issue and have made people aware of the need to implement disaster-prevention measures. Extensive disaster-prevention information has been actively released to the public to support natural disaster reduction measures throughout the world. In Japan, diverse studies on disaster prevention systems that support hazard map development and flood control activities have been conducted vigorously to estimate external forces according to design frequencies as well as expected maximum frequencies in a variety of areas, such as rivers, coasts, and ports, based on the broad disaster prevention data obtained from several huge disasters. However, the current reduction measures alone are not sufficiently effective because the paradigm of disasters has changed. Therefore, to obtain a synergy effect from reduction measures, a study on the establishment of an integrated system is required to improve both the various disaster prevention technologies and the current disaster prevention system. In this study, to develop a similar-typhoon search system and establish a disaster prevention infrastructure, techniques are developed that use artificial intelligence (AI) to forecast typhoons before they strike and to provide primary disaster prevention information according to the direction of the typhoon. The main function of this model is to identify the most similar typhoon among existing typhoons by utilizing major typhoon information, such as course, central pressure, and speed, before the typhoon directly impacts South Korea. The model combines AI and DNN forecasts of typhoons that change from moment to moment in order to efficiently forecast a current typhoon based on similar typhoons in the past. The results of the similar-typhoon search showed that the quality of prediction was higher with a grid size of one degree rather than two degrees in latitude and longitude.
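
A minimal sketch of a similar-typhoon search of the kind described above, assuming simple feature vectors (gridded track positions, central pressure, translation speed) and hypothetical records; the paper's actual DNN model is not reproduced here.

```python
# Illustrative sketch (not the paper's model): a nearest-neighbour search for
# the most similar past typhoon, using feature vectors built from track
# position snapped to a 1-degree grid, central pressure, and speed.
import numpy as np

def features(track, grid_deg=1.0):
    """track = list of (lat, lon, central_pressure_hPa, speed_kmh)."""
    arr = np.asarray(track, dtype=float)
    latlon = np.round(arr[:, :2] / grid_deg) * grid_deg   # snap positions to the grid
    return np.concatenate([latlon.ravel(), arr[:, 2], arr[:, 3]])

# Hypothetical historical typhoons (3 track points each) and a current typhoon.
history = {
    "TYPHOON_A": [(22, 128, 950, 20), (25, 127, 955, 25), (29, 127, 965, 30)],
    "TYPHOON_B": [(20, 135, 930, 15), (24, 133, 940, 20), (28, 130, 950, 28)],
}
current = [(22, 129, 948, 21), (25, 128, 954, 24), (29, 127, 963, 31)]

query = features(current)
best = min(history, key=lambda name: np.linalg.norm(features(history[name]) - query))
print("most similar past typhoon:", best)
```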

Crashworthiness Study of Sliding Post Using Full Scale Crash Test Data (충돌실험 데이터를 이용한 슬라이딩 지주구조의 감충성능 분석)

  • Jang, Dae-Young;Lee, Sung-Soo;Kim, Kee-Dong;Sung, Jung-Gon
    • KSCE Journal of Civil and Environmental Engineering Research / v.40 no.1 / pp.1-11 / 2020
  • Medium to large post structures installed along the roadside without proper protection can lead to serious vehicle damage and occupant injury on impact. In North America and Europe, splitting systems such as slip bases or breakaway devices are used to reduce impact severity, but these systems carry the risk of a secondary accident when the detached post falls onto traffic or pedestrians. The Sliding Post has been proposed as a way to solve this problem. By studying the results of crash tests of 1.3-ton and 0.9-ton vehicles at 60 km/h and 80 km/h against a Rigidly Fixed Post (RFP) and a Sliding Post (SP), the danger of the conventional RFP and the crashworthiness of the SP have been demonstrated. While collision analysis based only on the acceleration measured at the center of the vehicle assumes that the motion of the post is the same as that of the vehicle, in this paper adding high-speed film data to the vehicle-acceleration analysis made it possible to separate the post motion from the vehicle motion. This gives a better explanation of the movement of the post and the vehicle at each distinct time step and provides a basis for crashworthy post design.
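
A minimal sketch of the acceleration-based part of the analysis described above: integrating a vehicle acceleration trace once for velocity and twice for displacement. The half-sine deceleration pulse below is synthetic, not the paper's crash-test data.

```python
# Illustrative sketch (not the paper's analysis): reconstruct vehicle motion
# from an accelerometer trace by numerical integration.
import numpy as np

dt = 0.001                                   # 1 kHz sampling, s
t = np.arange(0.0, 0.20, dt)                 # 200 ms record
v0 = 60.0 / 3.6                              # initial speed: 60 km/h in m/s

# Hypothetical half-sine deceleration pulse over the first 100 ms.
accel = np.where(t < 0.10, -150.0 * np.sin(np.pi * t / 0.10), 0.0)  # m/s^2

velocity = v0 + np.cumsum(accel) * dt        # first integration: velocity
displacement = np.cumsum(velocity) * dt      # second integration: displacement

print(f"delta-V over the pulse: {velocity[-1] - v0:.1f} m/s")
print(f"displacement during the pulse: {displacement[int(0.10 / dt)]:.2f} m")
```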