• Title/Summary/Keyword: Level Set method


High-level Expression and Characterization of the Human Interleukin-10 in the Milk of Transgenic Mice

  • Zneng, Z. Y.;B. H. Sohn;K. B. Oh;W. J. Shin;Y. M. Han;Lee, K. K.
    • Proceedings of the KSAR Conference
    • /
    • 2003.06a
    • /
    • pp.46-46
    • /
    • 2003
  • Interleukin-10 (IL-10) is a homodimeric protein with a wide spectrum of anti-inflammatory and immune activities. It inhibits cytokine production and the expression of immune surface molecules in various cell types. Transgenic mice carrying the human IL-10 gene under the control of the bovine $\beta$-casein promoter produced human IL-10 in milk during lactation. The transgenic mice were generated using a standard method as described previously. To screen the transgenic mice, PCR was carried out on chromosomal DNA extracted from tail or toe tissues with a primer set. In this study, the stability of germ-line transmission and the expression of the IL-10 gene integrated into the host chromosome were monitored up to generation F15 of a transgenic line. When female mice of generation F9 were crossbred with normal males, mice of generations F9 to F15 showed similar transmission rates (66.0$\pm$20.13%, 61.5$\pm$16.66%, 41.1$\pm$8.40%, 40.7$\pm$20.34%, 61.3$\pm$10.75%, 49.2$\pm$18.82%, and 43.8$\pm$25.91%, respectively), implying that the IL-10 gene can be transmitted stably over many generations in the transgenic mice. For ELISA analysis, IL-10 expression levels were determined with hIL-10 and mIL-10 ELISA kits in accordance with the supplier's protocol. Expression levels of human IL-10 in milk from generation F9 to F13 mice were 3.6$\pm$1.20 mg/ml, 4.2$\pm$0.93 mg/ml, 5.7$\pm$1.46 mg/ml, 6.3$\pm$3.46 mg/ml, and 6.8$\pm$4.52 mg/ml, respectively. These expression levels are higher than that of generation F1 mice (1.6 mg/ml). We conclude that the transgenic mice faithfully passed the transgene on to their progeny and successfully secreted the target protein into their milk through several generations, although there was some fluctuation in transmission frequency and expression level between generations.


Selection of Evaluation Metrics for Grading Autonomous Driving Car Judgment Abilities Based on Driving Simulator (드라이빙 시뮬레이터 기반 자율주행차 판단능력 등급화를 위한 평가지표 선정)

  • Oh, Min Jong;Jin, Eun Ju;Han, Mi Seon;Park, Je Jin
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.63-73
    • /
    • 2024
  • Autonomous vehicles at Levels 3 to 5, currently under global research and development, seek to replace the driver's perception, judgment, and control processes with various sensors integrated into the vehicle. This integration enables artificial intelligence to autonomously perform the majority of driving tasks. However, autonomous vehicles currently obtain temporary driving permits, allowing them to operate on roads if they meet minimum criteria for autonomous judgment abilities set by individual countries. When autonomous vehicles become more widespread in the future, it is anticipated that buyers may not have high confidence in the ability of these vehicles to avoid hazardous situations due to the limitations of temporary driving permits. In this study, we propose a method for grading the judgment abilities of autonomous vehicles based on a driving simulator experiment comparing and evaluating drivers' abilities to avoid hazardous situations. The goal is to derive evaluation criteria that allow for grading based on specific scenarios and to propose a framework for grading autonomous vehicles. Thirty adults (25 males and 5 females) participated in the driving simulator experiment. The analysis of the experimental results involved K-means cluster analysis and independent sample t-tests, confirming the possibility of classifying the judgment abilities of autonomous vehicles and the statistical significance of such classifications. Enhancing confidence in the risk-avoidance capabilities of autonomous vehicles in future hazardous situations could be a significant contribution of this research.
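The abstract's analysis pipeline (K-means clustering of participants' hazard-avoidance performance, followed by an independent-samples t-test between the resulting grade groups) can be sketched roughly as below. All scores are synthetic and the two-group setup is an illustrative assumption, not the paper's data.

```python
# Sketch of the grading analysis: cluster simulated driver hazard-avoidance
# scores into two grade groups with a simple 1-D K-means (Lloyd's algorithm),
# then compare the groups with Welch's t statistic. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical avoidance scores for 30 participants (two latent ability groups).
scores = np.concatenate([rng.normal(70, 5, 15), rng.normal(85, 5, 15)])

# 1-D K-means with k=2: alternate nearest-centroid assignment and centroid update.
centers = np.array([scores.min(), scores.max()])
for _ in range(20):
    labels = np.abs(scores[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([scores[labels == k].mean() for k in (0, 1)])

g0, g1 = scores[labels == 0], scores[labels == 1]

# Welch's t statistic for independent samples with unequal variances.
t = (g0.mean() - g1.mean()) / np.sqrt(
    g0.var(ddof=1) / len(g0) + g1.var(ddof=1) / len(g1)
)
print(len(g0), len(g1), round(abs(t), 2))
```

A large |t| between clusters supports treating them as statistically distinct judgment-ability grades, which is the logic the study's classification rests on.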

ESG Activities and Costs of Debt Capital of Shipping Companies (해운기업의 ESG 활동과 타인자본비용)

  • Soon-Wook Hong
    • Journal of Navigation and Port Research
    • /
    • v.48 no.3
    • /
    • pp.200-205
    • /
    • 2024
  • This paper examines the impact of the ESG activities of domestic shipping companies on the cost of debt. It is known that companies with large information asymmetry tend to have high costs of debt, and corporate ESG activities have been identified as an effective means of reducing information asymmetry. By actively engaging in ESG activities, companies can therefore lower their cost of debt. This study investigates whether these mechanisms, observed in previous studies, also apply to domestic shipping companies. Multiple regression analysis is conducted on KOSPI-listed shipping companies from 2010 to 2022. The cost of debt is set as the dependent variable, while the ESG rating is used as the explanatory variable. The analysis reveals that companies with a high level of ESG activities generally have a lower cost of debt. However, the ESG activities of shipping companies specifically do not appear to have a significant impact on their cost of debt; indeed, the level of ESG activities among domestic shipping companies is not particularly high (Hong, 2024). Despite these findings, domestic shipping companies should still strive for sustainable management to adapt to the rapidly changing business environment and meet the demands of the modern era. ESG management is a representative method for achieving sustainability. Therefore, shipping companies should not only focus on reducing the cost of debt but also on opening up the closed industry culture and communicating with capital market participants for sustainable growth. It is crucial for these companies to listen to the voices of stakeholders and embrace a holistic approach to sustainability.
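The regression design described in this abstract (cost of debt on an ESG rating plus firm-level controls) can be sketched minimally as follows. The data are simulated and the variable names and coefficients are illustrative assumptions, not the paper's specification.

```python
# Minimal OLS sketch of the design: cost of debt regressed on an ESG rating
# score plus controls. All data below are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 200
esg = rng.integers(1, 8, n).astype(float)   # ESG rating score (1 = low .. 7 = high)
size = rng.normal(20, 2, n)                 # control: firm size (log assets), assumed
leverage = rng.uniform(0.2, 0.8, n)         # control: debt ratio, assumed
# Simulated cost of debt: higher ESG -> lower cost, plus noise.
cost_of_debt = 0.08 - 0.003 * esg + 0.01 * leverage + rng.normal(0, 0.005, n)

# OLS via least squares: [intercept, ESG, size, leverage].
X = np.column_stack([np.ones(n), esg, size, leverage])
beta, *_ = np.linalg.lstsq(X, cost_of_debt, rcond=None)
print("ESG coefficient:", beta[1])
```

A negative estimated ESG coefficient corresponds to the paper's general finding that higher ESG activity is associated with a lower cost of debt.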

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors, including the accelerometer, magnetic field sensor, and gyroscope, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status based on a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field sensor, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbour interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed on each x, y, z axis value of the sensor data, and the sequence data were generated according to the sliding-window method. The sequence data then became the input for the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to preserve the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was cross entropy, and the model weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. We applied dropout to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models tailored to the training data to transfer to evaluation data following a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in the data that were not considered at the training stage.
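The preprocessing pipeline this abstract describes (nearest-neighbour time synchronization of sensor streams, per-axis normalization, and sliding-window sequence generation) can be sketched as below. The sampling rates, window length, and step are illustrative assumptions, not the paper's settings.

```python
# Sketch of the preprocessing pipeline: nearest-neighbour time synchronization
# of two sensor streams, per-axis normalization, and sliding-window sequence
# generation for the CNN input. All signals are synthetic.
import numpy as np

def nearest_sync(t_ref, t_other, x_other):
    """Resample x_other (sampled at t_other) onto t_ref by nearest interpolation."""
    idx = np.abs(t_other[None, :] - t_ref[:, None]).argmin(axis=1)
    return x_other[idx]

def normalize(x):
    """Zero-mean, unit-variance normalization per axis (column)."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def sliding_windows(x, length, step):
    """Generate overlapping fixed-length sequences from a multivariate series."""
    return np.stack([x[i:i + length] for i in range(0, len(x) - length + 1, step)])

rng = np.random.default_rng(2)
# Accelerometer at 50 Hz, gyroscope at 40 Hz over 10 s (assumed rates).
t_acc = np.arange(0, 10, 0.02)
t_gyr = np.arange(0, 10, 0.025)
acc = rng.standard_normal((len(t_acc), 3))
gyr = rng.standard_normal((len(t_gyr), 3))

gyr_synced = nearest_sync(t_acc, t_gyr, gyr)        # align to accelerometer clock
features = normalize(np.hstack([acc, gyr_synced]))  # (500, 6), per-axis normalized
windows = sliding_windows(features, length=100, step=50)
print(windows.shape)  # (9, 100, 6)
```

Each window would then feed the CNN front end, whose feature maps the LSTM layers consume, as described in the abstract.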

An Operations Study on a Home Health Nursing Demonstration Program for the Patients Discharged with Chronic Residual Health Care Problems (추후관리가 필요한 만성질환 퇴원환자 가정간호 시범사업 운영 연구)

  • 홍여신;이은옥;이소우;김매자;홍경자;서문자;이영자;박정호;송미순
    • Journal of Korean Academy of Nursing
    • /
    • v.20 no.2
    • /
    • pp.227-248
    • /
    • 1990
  • The study was conceived in relation to a concern over the growing gap between the needs of chronic patients and the availability of care from the current health care system in Korea. Patients with agonizing chronic pain, discomfort, despair and disability are left with helplessly unprepared families, with little help from the acute-care-oriented health care system, after discharge from hospital. There is a great need for the development of an alternative means of quality care that is economically feasible and culturally adaptable to our society. Thus, the study was designed to demonstrate the effectiveness of home health care as an alternative to bridge the existing gap between patients' needs and the current practice of health care. The study specifically purports to test the effects of home care on health expenditure, readmission, job retention, compliance with the health care regimen, general condition, complications, and self-care knowledge and practices. The study was guided by the operations research method advocated by the Primary Health Care Operations Research Institute (PRICOR), which comprises three stages of research: namely, problem analysis, solution development, and solution validation. The first step in the operations research was field preparation to develop the necessary consensus and cooperation. This was done through the formation of a consulting body at the hospital and a steering committee among the researchers. For the problem analysis stage, the Annual Report of Seoul National University Hospital and the patient records of the last five years were reviewed, and selective patient interviews were conducted, to find out the magnitude of chronic health problems and areas of unmet health care needs, and finally to decide on the kinds of health problems to study. On the basis of the problem analysis, the solution development stage was devoted to home care program development as a solution alternative. Assessment tools, teaching guidelines and care protocols were developed and tested for their validity. The final stage was the stage of experimentation and evaluation. Patients with liver diseases, hemiplegic and diabetic conditions were selected as study samples. Discharge evaluation, follow-up home care, and measurement and evaluation were carried out according to the protocols of care and the measurement plan for each patient for a period of 6 months after discharge. The study was carried out from Jan. 1987 to Dec. 1989. The following are the results of the study, presented according to the hypotheses set forth for the study: 1. Total expenditures for the period of study were not reduced for the experimental group; however, since the cost per hospital visit is about 4 times as great as the cost per home visit, the cost-saving effect of home care will become a reality as home care replaces part of the hospital visits. 2. The effect on the rate of readmission and job retention was found to be statistically nonsignificant, though the number of readmissions was lower in the experimental group receiving home care. 3. The effect on compliance with the health care regimen was found to be statistically significant at the 5% level for hepatopathic and diabetic patients. 4. Education on diet, rest and exercise, and medication through home care had an effect on improved liver function test scores, prevention of complications, and self-care knowledge in hepatopathic patients at a statistically significant level. 5. In hemiplegic patients, home care had an effect on increased grasping power at a significant level. However, there was no significant difference between the experimental and control groups in the level of compliance, prevention of complications, or self-care practices. 6. In diabetic patients, there was no difference between the experimental and control groups in scores of laboratory tests, appearance of complications, or self-care knowledge and practices. The above findings indicate that a home care program instituted for as short a term as a 6-month period could not fully demonstrate its effectiveness at a statistically significant level by quantitative analysis; however, what was shown in part in this analysis, and in the continuous consultation sought by those who had been in the experimental group, is that home health care has great potential in retarding or preventing pathological progress, facilitating a rehabilitative and productive life, and improving quality of life by adding comfort, confidence and strength to patients and their families. For further studies of this kind with chronic patients, it is recommended that a sample of newly diagnosed patients be followed up for a longer period of time with more frequent observations to demonstrate a more clear-cut picture of the effectiveness of home care.


Establishment of the Appropriate Risk Standard through the Risk Assessment of Accident Scenario (사고시나리오별 위험도 산정을 통한 적정 위험도 기준 설정)

  • Kim, Kun-Ho;Chun, Young-Woo;Hwang, Yong-Woo;Lee, Ik-Mo;Kwak, In-ho
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.39 no.2
    • /
    • pp.74-81
    • /
    • 2017
  • An off-site consequence analysis is used to calculate the risks when hazardous chemicals being used on-site are released off-site; the factor with the biggest impact is the risk of the accident scenarios. This study seeks to calculate the risks of accident scenarios by applying the OGP and LOPA risk calculation methods to similar facilities, to calculate the risk reduction ratio by examining the IPLs applicable to each incident, and to propose an appropriate risk standard for the different risk calculation methods. Considering all applicable IPLs when estimating the safety improvement of the accident scenarios, the risk by OGP is 8.05E-04 and the risk by LOPA is 1.00E-04; considering only the IPLs actually in place, the risk is 1.34E-02. The risk level calculated for the accident scenarios using LOPA was $10^{-2}$, but since the appropriate risk criteria for accident scenarios in similar foreign studies were $10^{-3}{\sim}10^{-4}$, the risk of a scenario may be judged to be at an unacceptable level. When OGP is applied, the risk is analyzed as being at an acceptable level, but when LOPA is applied, all applicable IPLs must be in place in order to satisfy the acceptable risk level. Compared to OGP, the calculated risk is high when LOPA is applied. Therefore, the acceptable risk level should be set differently for each risk calculation method.
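The core LOPA arithmetic underlying the abstract can be illustrated numerically: the mitigated scenario frequency is the initiating event frequency multiplied by the probability of failure on demand (PFD) of each independent protection layer (IPL), and the result is compared against a tolerable frequency criterion. The frequencies, PFDs, and criterion below are illustrative assumptions, not the paper's values.

```python
# LOPA-style sketch: mitigated frequency = initiating frequency x product of
# IPL PFDs, compared against a tolerable criterion. All numbers are assumed.
init_freq = 1.0e-1                 # initiating event frequency (/yr), assumed
ipl_pfds = [1.0e-2, 1.0e-2]        # two independent protection layers, PFD 0.01 each

mitigated = init_freq
for pfd in ipl_pfds:
    mitigated *= pfd               # each IPL reduces the frequency by its PFD

tolerable = 1.0e-4                 # tolerable frequency criterion (/yr), assumed
print(f"mitigated: {mitigated:.1e} /yr, acceptable: {mitigated < tolerable}")
```

If the mitigated frequency still exceeds the criterion, additional IPLs (or more reliable ones) are required, which mirrors the abstract's point that all applicable IPLs must be in place for the LOPA result to reach the acceptable level.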

Cloud P2P OLAP: Query Processing Method and Index structure for Peer-to-Peer OLAP on Cloud Computing (Cloud P2P OLAP: 클라우드 컴퓨팅 환경에서의 Peer-to-Peer OLAP 질의처리기법 및 인덱스 구조)

  • Joo, Kil-Hong;Kim, Hun-Dong;Lee, Won-Suk
    • Journal of Internet Computing and Services
    • /
    • v.12 no.4
    • /
    • pp.157-172
    • /
    • 2011
  • Recent active studies on distributed OLAP in distributed environments have focused mainly on DHT P2P OLAP and Grid OLAP. However, each approach has its weak points: P2P OLAP has limitations for multidimensional range queries in a cloud computing environment due to the nature of structured P2P, while Grid OLAP disregards adjacency and time series, focusing instead on its own subset lookup algorithm. To overcome these limits, this paper proposes an efficient centrally managed P2P approach for a cloud computing environment. When a multi-level hybrid P2P method is combined with an index load distribution scheme, the performance of multi-dimensional range queries is enhanced. The proposed scheme enables a user's OLAP query results to be reused by other users' volatile cube searches. For this purpose, this paper examines the combination of an aggregation cube hierarchy tree, a quad-tree, and an interval tree as an efficient index structure. As a result, the proposed cloud P2P OLAP scheme can handle the adjacency and time-series factors of an OLAP query. The performance of the proposed scheme is analyzed through a series of experiments to identify its various characteristics.

Establishment of a Safe Blasting Guideline for Pit Slopes in Pasir Coal Mine (파시르탄광의 사면안전을 위한 발파지침 수립 연구)

  • Choi, Byung-Hee;Ryu, Chang-Ha;SunWoo, Coon;Jung, Yong-Bok
    • Tunnel and Underground Space
    • /
    • v.18 no.6
    • /
    • pp.418-426
    • /
    • 2008
  • A surface blasting method with a single free face is currently used in the Pasir Coal Mine in Indonesia. The single free face is usually the ground surface. This kind of blasting method is easy to use but inevitably causes enormous ground vibrations, which, in turn, can affect the stability of the slopes forming the various boundaries of the open pit mine. In this regard, we decided to draw up a specific blasting guideline for the control of ground vibrations to ensure the safety of the pit slopes and waste dumps of the mine. First, we derived a prediction equation for the ground vibration levels that could occur during blasting in the pits. Then, we set the allowable ground vibration levels for the pit slopes and waste dumps at peak particle velocities of 120 mm/s and 60 mm/s, respectively. From the prediction equation and the allowable levels, safe scaled distances were established for field use. The blast design criteria for the pit slopes and waste dumps were $D_S{\geq}5$ and $D_S{\geq}10$, respectively. We also provide several standard blasting patterns for hole depths of $3.3{\sim}8.8$ m.
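A scaled-distance criterion like the one above translates directly into a maximum charge weight per delay. Assuming the common square-root scaled distance $D_S = R/\sqrt{W}$ (with $R$ the distance in m and $W$ the charge per delay in kg), the limit $D_S \geq k$ gives $W \leq (R/k)^2$. The distance below is illustrative, not from the paper.

```python
# Sketch: converting a minimum scaled distance into a maximum charge per delay,
# assuming the square-root scaled distance D_S = R / sqrt(W).
def max_charge_per_delay(distance_m, min_scaled_distance):
    """Largest charge W (kg) satisfying distance / sqrt(W) >= min_scaled_distance."""
    return (distance_m / min_scaled_distance) ** 2

R = 100.0                                  # distance to the protected slope (m), assumed
w_slope = max_charge_per_delay(R, 5.0)     # pit slopes: D_S >= 5
w_dump = max_charge_per_delay(R, 10.0)     # waste dumps: D_S >= 10
print(w_slope, w_dump)  # 400.0 100.0
```

The stricter criterion for waste dumps (the lower allowable PPV of 60 mm/s) thus permits only a quarter of the charge per delay allowed for pit slopes at the same distance.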

Fundamental Study for Compaction Methods by Mechanical Tests (역학적 시험에 의한 다짐방법의 적합성 평가를 위한 기초연구)

  • Seo, Joo-Won;Choi, Jun-Seong;Kim, Jong-Min;Roh, Han-Seong;Kim, Soo-Il
    • International Journal of Highway Engineering
    • /
    • v.5 no.4 s.18
    • /
    • pp.23-35
    • /
    • 2003
  • In this study, a compaction evaluation program based on ASTM criteria is developed by analyzing the results of laboratory tests. Laboratory tests of subgrade soils, including compaction tests, triaxial tests, and resonant column tests, were performed at seven test sites to develop a compaction management methodology. In particular, to determine how soil characteristics change with compactive effort, each soil sample was tested at five levels of compactive effort, and a database was built from the test results. By using a mechanical property, the elastic modulus, the gap between road design and road construction management is narrowed. A regression equation for G/$G_{max}$ is proposed for each strain level of the subgrade soils according to the AASHTO criteria, and the relationship between the fundamental properties of the soil mass and the degree of compaction is derived as well. A field compaction management method based on the elastic modulus obtained from mechanical tests is also proposed.


A New Cache Replacement Policy for Improving Last Level Cache Performance (라스트 레벨 캐쉬 성능 향상을 위한 캐쉬 교체 기법 연구)

  • Do, Cong Thuan;Son, Dong Oh;Kim, Jong Myon;Kim, Cheol Hong
    • Journal of KIISE
    • /
    • v.41 no.11
    • /
    • pp.871-877
    • /
    • 2014
  • Cache replacement algorithms have been developed to reduce miss counts. In modern processors, the performance gap between the processor and main memory has been increasing, giving cache replacement policies a more important role. The Least Recently Used (LRU) policy is one of the most common policies used in modern processors. However, recent research has shown that the performance gap between LRU and the theoretical optimal replacement algorithm (OPT) is large. Although LRU replacement has repeatedly been shown to be adequate, the OPT/LRU performance gap continues to widen as cache associativity grows. In this study, we observed that there is an opportunity to improve cache performance by building on existing LRU mechanisms. We propose a method that enhances the LRU replacement algorithm based on the access proportions among the lines in a cache set during the period between two successive replacement actions, which inform the final replacement decision. Our experimental results reveal that the proposed method reduced the average miss rate of the baseline 512 KB L2 cache by 15 percent compared to conventional LRU. In addition, the performance of a processor using the proposed cache replacement policy improved by 4.7 percent over LRU, on average.
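The LRU baseline this abstract builds on can be sketched as a single set of an n-way set-associative cache that evicts the least recently used line on a miss. This shows plain LRU only; the paper's proposal, which refines victim selection using access proportions between replacements, is not reproduced here.

```python
# Minimal sketch of an n-way set-associative cache set with LRU replacement.
from collections import OrderedDict

class LRUCacheSet:
    def __init__(self, ways):
        self.ways = ways
        self.lines = OrderedDict()   # tag -> None, ordered oldest -> newest

    def access(self, tag):
        """Return True on a hit; on a miss, insert tag, evicting LRU if full."""
        if tag in self.lines:
            self.lines.move_to_end(tag)     # refresh recency on a hit
            return True
        if len(self.lines) >= self.ways:
            self.lines.popitem(last=False)  # evict the least recently used line
        self.lines[tag] = None
        return False

cache = LRUCacheSet(ways=4)
hits = sum(cache.access(t) for t in ["A", "B", "C", "D", "A", "E", "B"])
print(hits)  # 1 (only the second access to "A" hits)
```

In this trace, the miss on "E" evicts "B" (the least recently used line after "A" was refreshed), so the final access to "B" misses again; that is exactly the recency ordering the proposed policy augments with access-proportion information.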