Kareem, Kola Yusuff; Seong, Yeonjeong; Jung, Younghun
Journal of Korean Society of Disaster and Security / v.14 / no.4 / pp.17-27 / 2021
Streamflow prediction is a vital disaster-mitigation approach for effective flood management and water resources planning. Lately, torrential rainfall caused by climate change has been reported to have increased globally, causing enormous losses of infrastructure, property, and lives. This study evaluates the contribution of rainfall to streamflow prediction under normal and peak rainfall scenarios, typical of the recent flood at Piney Resort in Vernon, Hickman County, Tennessee, United States. Daily streamflow, water level, and rainfall data for 20 years (2000-2019) from two USGS gage stations (03602500 upstream and 03599500 downstream) of the Piney River watershed were obtained, preprocessed, and fitted with a long short-term memory (LSTM) model. The TensorFlow and Keras machine learning frameworks were used with Python to predict streamflow values with a sequence size of 14 days, to determine whether the model could have predicted the flooding event of August 21, 2021. Model skill analysis showed that the LSTM model with full data (water level, streamflow, and rainfall) performed better than the naive model, while some rainfall-only models did not, indicating that rainfall alone is insufficient for streamflow prediction. The final LSTM model recorded optimal NSE and RMSE values of 0.68 and 13.84 m3/s and predicted peak flow with the lowest prediction error of 11.6%, indicating that the final model could have predicted the flood of August 24, 2021 given a peak rainfall scenario. Adequate knowledge of rainfall patterns will guide hydrologists and disaster prevention managers in designing efficient early warning systems and policies aimed at mitigating flood risks.
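The 14-day sequence size mentioned above amounts to turning the daily series into sliding windows. A minimal sketch of that preprocessing step, with toy values standing in for the gage data (the function and variable names are illustrative, not from the paper):

```python
# Building 14-day input windows for an LSTM streamflow model.
SEQ_LEN = 14  # the paper's sequence size of 14 days

def make_windows(series, seq_len=SEQ_LEN):
    """Turn a daily series into (input window, next-day target) pairs."""
    X, y = [], []
    for i in range(len(series) - seq_len):
        X.append(series[i:i + seq_len])   # 14 consecutive days of inputs
        y.append(series[i + seq_len])     # the day to predict
    return X, y

# Toy daily streamflow values (m^3/s) standing in for the USGS gage data.
flow = [float(v) for v in range(30)]
X, y = make_windows(flow)
print(len(X), len(X[0]), y[0])  # 16 windows of length 14; first target is day 14
```

Each window would then be fed to the LSTM as one training sample, with the next day's flow as the label.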
This research paper explores the application of FinBERT, a BERT-based model pre-trained on the financial domain, for sentiment analysis in finance, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to effectively utilizing the FinBERT model for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning the FinBERT model, emphasizing the performance of various datasets and hyperparameters on sentiment analysis tasks. Additionally, we verify the reliability of GPT-3 as a suitable annotator by using it for sentiment labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination is a learning rate of 5e-5 and a batch size of 64, which performs consistently well across all datasets. Furthermore, based on the significant performance improvement of the FinBERT model on our general-domain Twitter data compared to our general-domain news data, we also express uncertainty about the model having been further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters within the fine-tuning process of financial sentiment analysis models.
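The hyperparameter search described above can be sketched as a simple grid sweep. This is a scaffold under stated assumptions: the `evaluate` function is a hypothetical stand-in for an actual FinBERT fine-tuning run, rigged here only so the sweep lands on the paper's reported optimum of (5e-5, 64):

```python
# Grid sweep over learning rate and batch size; evaluate() is a placeholder.
from itertools import product

learning_rates = [2e-5, 3e-5, 5e-5]
batch_sizes = [16, 32, 64]

def evaluate(lr, bs):
    # Hypothetical stand-in for fine-tuning FinBERT and returning validation
    # accuracy; it simply favors the paper's reported optimum (5e-5, 64).
    return 1.0 - abs(lr - 5e-5) * 1e4 - abs(bs - 64) / 100

best = max(product(learning_rates, batch_sizes), key=lambda p: evaluate(*p))
print(best)  # (5e-05, 64)
```

In practice `evaluate` would fine-tune the model on each dataset and return a held-out metric; the sweep structure stays the same.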
KIPS Transactions on Software and Data Engineering / v.12 / no.5 / pp.199-206 / 2023
Modern software has grown to encompass huge volumes of source code, which increases the importance and necessity of static analysis for high-quality products. Static analysis of the code must identify its defects and complexity, and visualizing these findings guides developers and stakeholders in understanding problems in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. This approach has the limitation that it takes a lot of time and space to analyze a code base using the information extracted through static analysis: since the tables are not normalized, extra space and time may be spent when the tables (classes, functions, attributes, etc.) are joined to extract information about the code. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for the quality metric indicators inside the code, and a visualization based on the extracted quality indicators. Through this mechanism, we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In the future, we will apply learning to some parts of this process.
Accurate segmentation of kidney tumors is necessary to identify the shape, location, and safety margin of a tumor in abdominal CT images for surgical planning before partial nephrectomy. However, kidney tumor segmentation is a challenging task due to the varying sizes and locations of tumors across patients and their signal-intensity similarity to surrounding organs such as the intestine and spleen. In this paper, we propose a semi-supervised mean teacher network that utilizes both labeled and unlabeled data, using a kidney local guided map encoding local kidney information to segment small tumors occurring at various locations in the kidney, and we analyze performance according to kidney tumor size. The proposed method achieved an F1-score of 75.24% by considering local information of the kidney through the kidney local guided map to localize tumors around the kidney. In particular, under-segmentation of small tumors, which are difficult to segment, was improved, and the method showed a 13.9%p higher F1-score even though it used a smaller amount of labeled data than nnU-Net.
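The core mechanism of a mean teacher network is that the teacher's weights are an exponential moving average (EMA) of the student's, which is what lets unlabeled data contribute through a consistency loss. A minimal sketch of that update with toy weight vectors (the decay value 0.99 is a common choice, not the paper's reported setting):

```python
# Mean-teacher EMA update: teacher weights trail the student's.
def ema_update(teacher, student, alpha=0.99):
    """teacher <- alpha * teacher + (1 - alpha) * student, elementwise."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

teacher = [0.0, 0.0]
student = [1.0, 2.0]   # toy "trained" student weights
for _ in range(3):     # a few training steps
    teacher = ema_update(teacher, student)
# After 3 steps the teacher has moved fraction (1 - 0.99**3) toward the student.
print([round(w, 6) for w in teacher])  # [0.029701, 0.059402]
```

In the full method this smoothed teacher produces pseudo-targets on unlabeled CT slices, and the kidney local guided map constrains where those predictions are trusted.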
Magazine of the Korean Society of Agricultural Engineers / v.16 / no.1 / pp.3293-3301 / 1974
An experimental study was conducted using a laboratory-made model dryer to investigate the effect of the forced-air rate on the drying rate of rough rice deposited in a deep bed. The dryer consisted of 8 cylindrical containers with a grain-holding screen at the bottom, each 30 cm in diameter and 15 cm in height. The containers were stacked vertically and kept air-tight with paper tape during dryer operation. Two separate stacks of containers were operated at the same time to provide two replications. The moisture contents of the grain in each bin after a predetermined period of dryer operation were determined indirectly by measuring the weight of the individual containers. The air rates were maintained at 6 levels: 5, 8, 10, 15, 18, and 20 millimeters of static head of water. Room-air conditions during dryer operation were maintained in the range of 10-15°C and 40-60% relative humidity. The results of the study are summarized as follows: 1. Drying characteristics of the grain in the bottom layers were approximately the same regardless of air-delivery rate, giving an average drying rate of about 0.35 percent per hour over a 40-hour drying period, during which moisture content (w.b.) was reduced from 24 percent to about 10 percent. 2. After about a 40-hour drying period, the mean drying rate increased from 0.163 percent per hour to 0.263 percent per hour as the air-flow rate increased from 5 mm to 20 mm of static head of water. At the same time, the moisture difference between the lower and upper grain layers varied from 12.7 percent at an air rate of 5 mm of water head to 7.5 percent at 20 mm of water head. Thus, the greater the air-flow rate, the greater the overall improvement in drying performance.
Additionally, given the ineffectiveness of drying grain positioned at a depth of 70 cm or above at an air rate of 5 mm of static head of water, it may be suggested for practical application that the height of the grain deposit be kept within limits appropriate to the air rate that can actually be delivered. 3. Drying after a layer-turning operation was continued for about 30 hours to test its effectiveness in reducing moisture differences in the thick layers. As a result of the layer-turning operation, the moisture distribution through the layers narrowed, giving a moisture range of about 7 percent at an air-flow rate of 5 mm head of water, about 3 percent at 10 mm, about 2 percent at 15 mm, and less than 1 percent at 20 mm. In addition, given the desirable result that the drying rate was rapid in the lower layers and slow in the upper layers, layer turning may be very effective in natural-air drying with a deep-layer grain deposit, especially when the forced-air rate is kept low. 4. Even though a high air-delivery rate is very desirable for deep-layer natural-air drying of rough rice, the required air-delivery rate may not be attainable because of the limited power sources available on farms. As a guideline for practical application, the power required to perform the drying at each specified air rate was analyzed for different sizes of drying bin and is given in Table 5. If a farmer selects a motor of 1 or 1.5 H.P. and an air-delivery rate of 8-10 mm of head, a grain-bin diameter of about 2.4 m may be suggested; a power tiller or other prime mover of moderate size may be recommended when the grain-bin diameter is about 5.0 m or more for a grain deposit of about 120 cm.
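The bottom-layer drying rate reported in item 1 is simple arithmetic on the stated figures, and checking it makes the unit ("percentage points of moisture per hour") explicit:

```python
# Check of the reported bottom-layer drying rate: moisture fell from
# 24% to about 10% (wet basis) over a 40-hour run.
initial_mc, final_mc, hours = 24.0, 10.0, 40.0
rate = (initial_mc - final_mc) / hours   # percentage points per hour
print(rate)  # 0.35, matching the reported ~0.35 percent per hour
```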
Hwang, Taik Gun; Kim, Soon Deok; Yoo, Se Hwa; Shin, Yoo Chul
Tuberculosis and Respiratory Diseases / v.56 / no.5 / pp.485-494 / 2004
Background : To assess the effects of mDOT implementation on sputum smear conversion in AFB (acid-fast bacilli) positive pulmonary tuberculosis patients, modified directly observed treatment (mDOT) was started on October 8th, 2001 at a health center in Seoul. mDOT was defined as weekly interviewing and supervision of a patient by a supervisor (doctor, nurse, or lay health worker). The sputum smear conversion of the mDOT group was compared with that of a self-medication (self) group. Methods : This study included 52 AFB-positive pulmonary tuberculosis patients registered at a health center in Seoul between October 8th, 2001 and April 23rd, 2002. 24 and 28 patients were enrolled in the mDOT and self-medication groups, respectively. Paired (1:1) individual matching by gender, extent of disease, relapse, and age was performed between the two groups, resulting in 20 matched pairs. This prospective study was planned as an unblinded, non-randomized, quasi-experimental pilot project. Outcomes were identified from the results of sputum smear examinations for AFB in both groups at 2 weeks, 1 month, and 2 months. The paired matching data were analyzed with the McNemar test using SAS version 8.1. Results : At the end of 2 weeks of treatment, the sputum smear conversion of the mDOT group was somewhat higher than that of the self-medication group (78.57 vs. 50%, p-value=0.289), and after 1 month of treatment no statistically significant difference was shown between the two groups (83.33 vs. 50%, p-value=0.125). At the end of 2 months of treatment (initial intensive phase), the sputum smear conversions of the mDOT and self groups were 95 and 75%, respectively (p-value=0.219). Conclusions : The implementation of mDOT did not result in clinically significant increases in sputum smear conversion at 2 weeks, 1 month, or 2 months compared with the self-medication group.
However, the increases observed might contribute to shortening the infectious period of AFB-positive patients, and this approach may serve as a guide for specific groups of patients. In this study, mDOT was performed throughout the intensive treatment phase. It may also be an effective treatment approach for pulmonary tuberculosis patients and may be useful for some high-risk tuberculosis patients.
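The paired comparisons above rely on the McNemar test, which considers only the discordant matched pairs. A minimal sketch of the exact form of that test; the counts used here are illustrative (the abstract does not report the study's discordant-pair counts):

```python
# Exact McNemar test from discordant pair counts b and c
# (pairs where exactly one member of the pair converted).
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value: binomial tail with p=0.5, doubled."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# Toy example: 1 vs. 6 discordant pairs yields p = 0.125.
print(mcnemar_exact(1, 6))  # 0.125
```

With so few discordant pairs, p-values of this size are expected, which is consistent with the non-significant results reported for all three time points.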
Journal of Korean Home Economics Education Association / v.17 / no.2 / pp.29-47 / 2005
In this study, we tried to provide basic materials for teachers to develop a consumer's guide to internet shopping for middle and high school students by surveying their purchase realities, clothing purchase behaviors, and clothing purchase attitudes when they use internet shopping malls. Questionnaires were distributed to middle and high school students in Seoul, Daegu, Kyunggi, Chungbuk, Chungnam, Kyungbuk, and Kyungnam in November 2004. The following are the results of this study. First, the clothing items bought in internet shopping malls were shirts, shoes, pants, and bags, in that order, and they were priced below 20,000-30,000 won. The main payment method was sending money to the seller's account. Second, satisfaction with clothing purchases was comparatively high, but satisfaction with the compensation policy was low. If students had any claims about the products, they were likely to behave more actively than passively. Third, returned items were shirts, pants, and shoes, in that order, the same as the purchased items, and returns were due to sizing and the difference between the products as shown on the computer screen and the real products. 89.0% of the subjects who had purchased clothing through the internet expressed a high intention to purchase through the internet in the future. Fourth, attitudes toward internet shopping related to clothing purchases scored high on the factor 'convenience of shopping'; in particular, students thought that purchasing through the internet had the advantages of variety and price. Significant differences were found (1) in the experience of purchasing, including clothing purchases, through the internet according to region, school year, and monthly allowance, (2) in the items purchased through the internet according to sex only, and (3) in the desired purchase items according to school year, sex, and region.
The more frequently middle and high school students use the internet, the more goods they purchase through it, and the share of clothing purchases in particular is growing year by year. This suggests that we need to develop well-organized programs to teach good consumer attitudes to middle and high school students purchasing through the internet.
Background : There are many causes of a solitary pulmonary nodule, but the main concern is whether the nodule is benign or malignant. Because a solitary pulmonary nodule is the initial manifestation of the majority of lung cancers, accurate clinical and radiologic interpretation is important. Bayes' theorem is a simple method of combining clinical and radiologic findings to estimate the probability that a nodule in an individual patient is malignant. We estimated the probability of malignancy of solitary pulmonary nodules with specific combinations of features by a Bayesian approach. Method : One hundred and eighty patients with solitary pulmonary nodules were identified from a multi-center analysis. The hospital records of these patients were reviewed, and patient age, smoking history, original radiologic findings, and the diagnosis of the solitary pulmonary nodule were recorded. The diagnosis was established pathologically in all patients. We used Bayes' theorem to devise a simple scheme for estimating the likelihood that a solitary pulmonary nodule is malignant based on radiologic and clinical characteristics. Results : Among patient characteristics, the probability of malignancy increased with advancing age, peaking in patients older than 66 years of age (LR : 3.64), and was higher in patients with a smoking history of more than 46 pack-years (LR : 8.38). Among radiologic features, the likelihood ratios increased with increasing nodule size and for nodules with lobulated or spiculated margins. Conclusion : The likelihood ratios of malignancy may improve the accuracy of the estimated probability of malignancy and can guide the management of solitary pulmonary nodules.
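The Bayesian scheme described above amounts to converting a prior probability to odds, multiplying by each likelihood ratio, and converting back. A minimal sketch; the likelihood ratios are the abstract's reported values, but the prior probability of 0.3 is an assumed toy value, not a figure from the study:

```python
# Combining likelihood ratios via Bayes' theorem in odds form.
def posterior_probability(prior, likelihood_ratios):
    """prior probability -> odds, multiply by each LR, -> posterior probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Assumed prior of 0.3; LRs from the abstract: age > 66 (3.64), >46 pack-years (8.38).
p = posterior_probability(0.3, [3.64, 8.38])
print(round(p, 3))  # 0.929
```

Note that multiplying LRs this way assumes the features are conditionally independent given the diagnosis, a standard simplification of this approach.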
As the Internet becomes ubiquitous, a large volume of information is posted on it, growing exponentially every day. Accordingly, it is not unusual for investors in stock markets to gather and compile firm-specific or market-wide information through online searches. Importantly, it has become easier for investors to acquire value-relevant information for their investment decisions with the help of powerful Internet search tools. Our study examines whether the Internet helps investors assess a firm's value better, using firm-level data over a long period spanning January 2004 to December 2013. To this end, we construct weekly search volume for information technology (IT) services firms on the Internet. We limit our focus to IT firms since they are often equipped with intangible assets and are relatively less recognized by the public, which makes them hard to measure. To obtain information on those firms, investors are more likely to consult the Internet and use the information to appraise the firms more accurately and eventually improve their investment decisions. Prior studies have shown that changes in search volume can reflect various aspects of complex human behavior and forecast near-term values of economic indicators, including automobile sales, unemployment claims, and so on. Moreover, the search volume of firm names or stock ticker symbols has been used as a direct proxy for individual investors' attention in financial markets since, unlike indirect measures such as turnover and extreme returns, it can reveal and quantify investors' interest in an objective way. Following this line of research, this study aims to gauge whether the information retrieved from the Internet is value-relevant in assessing a firm. We also use search volume for our analysis but, in contrast to prior studies, explore its impact on return comovement with market returns.
Given that a firm's returns tend to comove excessively with market returns when investors are less informed about the firm, we empirically test the value of information by examining the association between Internet searches and the extent to which a firm's returns comove. Our results show that Internet searches are negatively associated with return comovement, as expected. When the sample is split by firm size, the impact of Internet searches on return comovement is shown to be greater for large firms than for small ones. Interestingly, we find a greater impact of Internet searches on return comovement for the years 2009 to 2013 than for earlier years, possibly due to more aggressive and informed use of Internet searches in obtaining financial information. We also complement our analyses by examining the association between return volatility and Internet search volume. If Internet searches capture investors' attention associated with a change in firm-specific fundamentals such as new product releases, stock splits, and so on, a firm's return volatility is likely to increase while search results provide value-relevant information to investors. Our results suggest that, in general, an increase in the volume of Internet searches is not positively associated with return volatility. However, we find a positive association between Internet searches and return volatility when the sample is limited to larger firms. The stronger result for larger firms implies that investors still pay less attention to information obtained from Internet searches for small firms, even though the information is value-relevant in assessing stock values. However, we do not find any systematic differences across time periods in the magnitude of the impact of Internet searches on return volatility. Taken together, our results shed new light on the value of information searched from the Internet in assessing stock values.
Given the informational role of the Internet in stock markets, we believe the results would guide investors to exploit Internet search tools to become better informed and, as a result, improve their investment decisions.
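One common way to quantify return comovement is the R^2 of a market-model regression of firm returns on market returns: high R^2 means returns are mostly market-driven, low R^2 means more firm-specific information is impounded. The abstract does not specify the paper's exact measure, so this is an illustrative sketch with toy returns:

```python
# Comovement proxy: squared correlation (market-model R^2) of firm vs. market returns.
from math import sqrt

def r_squared(firm, market):
    """Squared Pearson correlation between firm and market return series."""
    n = len(firm)
    mf, mm = sum(firm) / n, sum(market) / n
    cov = sum((f - mf) * (m - mm) for f, m in zip(firm, market))
    vf = sum((f - mf) ** 2 for f in firm)
    vm = sum((m - mm) ** 2 for m in market)
    return (cov / sqrt(vf * vm)) ** 2

market = [0.01, -0.02, 0.015, 0.00, -0.005]
firm_tracking = [0.012, -0.018, 0.016, 0.001, -0.004]   # comoves closely
firm_idiosyncratic = [0.03, 0.01, -0.02, 0.00, 0.02]    # mostly firm-specific
print(r_squared(firm_tracking, market) > r_squared(firm_idiosyncratic, market))  # True
```

Under the paper's hypothesis, higher search volume should be associated with the second pattern: more firm-specific variation and hence lower comovement.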
As the utilization of information technology and the turbulence of technological change increase in organizations, the adoption of IT outsourcing also grows as a way to manage IT resources more effectively and efficiently. In this way of managing IT, the service level management (SLM) process becomes critical to deriving success from outsourcing from the viewpoint of end users in the organization. Even though much research on service level management and agreements has been done over the last decades, the performance of the SLM process has not been evaluated in terms of the final objectives of the management effort, that is, success from the view of end users. This study explores the relationship between SLM maturity and IT outsourcing success from the users' point of view through an analytical case study of four client organizations of an IT outsourcing vendor, a member company of a major Korean conglomerate. To set up a model for the analysis, previous research on service level management process maturity and information systems success is reviewed. In particular, information systems success from the users' point of view is reviewed based on DeLone and McLean's study, which is currently accepted as a comprehensively tested model of information systems success. The model proposed in this study argues that SLM process maturity influences information systems success, evaluated in terms of the information quality, systems quality, service quality, and net effects proposed by DeLone and McLean. SLM process maturity is measured for the planning process, the implementation process, and the operation and evaluation process. Instruments for measuring the factors in the proposed constructs of information systems success and SLM process maturity were collected from previous research and evaluated for reliability and validity, using appropriate statistical methods and pilot tests, before the case study.
Four cases from four different companies served by one vendor were used for the analysis. All of the cases had contracted SLAs (Service Level Agreements) and had implemented ITIL (IT Infrastructure Library), Six Sigma, and BSC (Balanced Scorecard) methods for the last several years, which means that all the client organizations had pursued concerted efforts to acquire quality services from IT outsourcing from the organizational and users' points of view. To compare the differences among the four organizations in IT outsourcing success, t-tests and non-parametric analyses were applied to the data collected from the organizations using the survey instruments. The process maturities of the planning and implementation phases of SLM were found not to influence any dimension of information systems success from the users' point of view. Only the SLM maturity in the operation and evaluation phase was found to influence systems quality from the users' view. This result seems quite contrary to arguments in IT outsourcing practice, which usually emphasize the importance of the planning and implementation processes up front in IT outsourcing projects. According to an after-the-fact observation by an expert in one participating organization, the clients' needs and motivations for the outsourcing contracts were already quite familiar to the vendor, a long-term partner under the same conglomerate, so maturity in the planning and implementation phases seems not to be a differentiating factor for the success of IT outsourcing. This study will be a foundation for future research in the area of IT outsourcing management and success, in particular service level management.
It could also guide managers in IT outsourcing practice to focus on the service level management process in the operation and evaluation stage, especially for long-term outsourcing contracts under a unique context like Korean IT outsourcing projects. This study has some limitations in generalization because the sample size is small and the context is confined to a unique environment. For future exploration, survey-based research could be designed and implemented.