• Title/Summary/Keyword: optimal systems


Application Plan of Goods Information in the Public Procurement Service for Enhancing U-City Plans (U-City계획 고도화를 위한 조달청 물품정보 활용 방안 : CCTV 사례를 중심으로)

  • PARK, Jun-Ho;PARK, Jeong-Woo;NAM, Kwang-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.18 no.3
    • /
    • pp.21-34
    • /
    • 2015
  • In this study, a reference model is constructed that provides architects and designers with sufficient information on the intelligent service facilities essential for configuring U-City space and for supporting enhanced design and planning activities. At the core of the reference model is comprehensive information about the intelligent service facilities around which service content is planned, together with related up-to-date information that is regularly refreshed. A plan is presented to take advantage of the goods-list database of the Public Procurement Service, which handles intelligent service facilities. We suggest a number of improvements by analyzing the current status of, and issues with, the goods information in the Public Procurement Service, and by conducting a simulation for the proper placement of CCTV. As U-City plan design has evolved from being IT-technology-based to smart-space-based, its limitations, such as the lack of standards and of information about the installation and placement of the intelligent service facilities that provide U-services, have been reviewed. Owing to the absence of relevant legislation and guidelines, however, planning activities such as the appropriate placement of intelligent service facilities are difficult when efficient service provision is considered. In addition, because little information about IT technologies and intelligent service facilities can be provided to U-City planners and designers, establishing an optimal plan with respect to service level and budget is also difficult. To solve these problems, this study presents a plan that links to the goods information of the Public Procurement Service. The Public Procurement Service has already built an industry-related database of around 260,000 cases, which is continually updated, and it can be a very useful source of information about intelligent service facilities, the core of the ever-changing U-City industry, and the relevant technologies. However, because the information provided is insufficient for the application process and the information disclosure process is constrained, there have been some issues in its application. Therefore, by presenting an improvement plan for linking and applying the goods information of the Public Procurement Service, this study is significant in providing a basic framework for future U-City enhancement plans and for multiple departments' common utilization of the goods information in the Public Procurement Service.

A Study on the Sedimentation of Dredged Soils and Shape Changes of a Transparent Vinyl Tube by Filling Tests - Anti-Crater Formation - (준설토 주입방법에 의한 비닐튜브체의 퇴적 및 변형 특성 - 크레이터 방지 기술을 중심으로 -)

  • Kim, Hyeong-Joo;Sung, Hyun-Jong;Lee, Kwang-Hyung;Lee, Jang-Baek
    • Journal of the Korean Geosynthetics Society
    • /
    • v.13 no.2
    • /
    • pp.1-10
    • /
    • 2014
  • In this study, two different dredged-fill injection methods are introduced, and filling experiments were conducted to analyze how each technique affects the distribution and deposition of the dredged soil fill and how it influences the final tube shape. Two transparent plastic tubes were fabricated to observe the deposition behavior of the fill material; both tubes measured 4.0 m in length (L) and had diameters (D) of 0.5 m and 0.7 m. T-type and I-type inlet systems are also introduced in this paper, and their influence on the distribution and deposition behavior of the dredged soil fill inside the vinyl tubes was observed during the experiments. After sedimentation of the slurry mixture, the water above the soil sediment was removed and the slurry mixture was re-injected into the vinyl tube; this process was repeated. The shape changes of the vinyl tube, i.e., the changes in both tube height and width, were monitored after each slurry-injection and water-draining phase. With the I-type inlet system, crater formation was observed and a non-uniform sediment distribution occurred, whereas the T-type inlet system kept the long-distance diffusion of soil particles to a minimum and preserved the tube shape. With the T-type inlet, the undrained filling height ratio ($H/D_0$) was found to be around 0.54 to 0.64 and the horizontal strain ratio ($W/D_0$) ranged from 1.45 to 1.54. The filling soil height is proportional to the number of dredged-material filling phases, but the horizontal strain ratio remains constant or even decreases, so that the center of the tube body rises upward.
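
To make the two reported shape ratios concrete, the following is a minimal worked sketch; the measured height and width values are hypothetical placeholders, not data from the experiments.

```python
# Worked example of the two shape ratios reported above; the measured values
# here are hypothetical placeholders, not data from the experiments.
D0 = 0.5          # initial (theoretical) tube diameter, m
H  = 0.30         # measured undrained filling height, m
W  = 0.75         # measured tube width after filling, m

height_ratio = H / D0     # undrained filling height ratio H/D0
width_ratio  = W / D0     # horizontal strain ratio W/D0
print(f"H/D0 = {height_ratio:.2f}, W/D0 = {width_ratio:.2f}")
# For the T-type inlet, the reported ranges were roughly 0.54-0.64 and 1.45-1.54.
```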

Effects of Different EC in Nutrient Solution on Growth and Quality of Red Mustard and Pak-Choi in Plant Factory (식물공장내 양액의 EC가 적겨자와 청경채의 생육 및 품질에 미치는 영향)

  • Lee, Sang Gyu;Choi, Chang Sun;Lee, Jun Gu;Jang, Yoon Ah;Nam, Chun Woo;Yeo, Kyung-Hwan;Lee, Hee Ju;Um, Young Chul
    • Journal of Bio-Environment Control
    • /
    • v.21 no.4
    • /
    • pp.322-326
    • /
    • 2012
  • Recently, research related to plant factory systems has become active, and the production of Ssam vegetables using artificial lighting has been increasing. In South Korea, Ssam vegetables are very popular and their consumption increases every year. Because leaf vegetables grown in hydroponic systems are preferred over those grown in soil culture in Korea, plant factory systems could be effective for producing Ssam vegetables. Therefore, this study analyzed the yield and vitamin C content of red mustard (Brassica juncea L.) and pak-choi (Brassica campestris var. chinensis), which are widely used as Ssam vegetables in South Korea, as influenced by different nutrient-solution concentrations in a plant factory system. As a result, there were no significant differences in plant height among the nutrient-solution EC treatments, but for red mustard the number of leaves tended to decrease at higher EC. The leaf area of pak-choi increased significantly at higher EC, and the fresh weight of both crops tended to increase with increasing EC of the nutrient solution. The photosynthetic rate showed no distinct trend with EC level for red mustard, but for pak-choi it tended to be higher at high EC. The ascorbic acid content of the leaves increased with decreasing EC for red mustard, whereas for pak-choi it was highest at EC $2.0dS{\cdot}m^{-1}$. In summary, considering marketable yield and vitamin C content at the different nutrient concentrations in a plant factory, the optimal concentration for red mustard and pak-choi was judged to be EC $2.0{\sim}2.5dS{\cdot}m^{-1}$.

Financial Fraud Detection using Text Mining Analysis against Municipal Cybercriminality (지자체 사이버 공간 안전을 위한 금융사기 탐지 텍스트 마이닝 방법)

  • Choi, Sukjae;Lee, Jungwon;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.119-138
    • /
    • 2017
  • Recently, SNS has become an important channel for marketing as well as personal communication. However, cybercrime has also evolved with the development of information and communication technology, and illegal advertising is distributed on SNS in large quantities. As a result, personal information is leaked and monetary damage occurs more frequently. In this study, we propose a method for analyzing which sentences and documents posted to SNS are related to financial fraud. First, as a conceptual framework, we developed a matrix of the conceptual characteristics of cybercriminality on SNS and emergency management, and we suggested an emergency management process consisting of pre-cybercriminality steps (e.g., risk identification) and post-cybercriminality steps. Among these, this paper focuses on risk identification. The main process consists of data collection, preprocessing, and analysis. First, we selected the two words 'daechul (loan)' and 'sachae (private loan)' as seed words and collected data containing these words from SNS such as Twitter. The collected data were given to two researchers, who decided whether each item was related to cybercriminality, particularly financial fraud. We then selected as keywords those vocabulary items related to nominals and symbols. With the selected keywords, we searched and collected data from web sources such as Twitter, news sites, and blogs, gathering more than 820,000 articles. The collected articles were refined through preprocessing and turned into learning data. The preprocessing consists of a morphological analysis step, a stop-word removal step, and a valid part-of-speech selection step. In the morphological analysis step, a complex sentence is decomposed into morpheme units to enable mechanical analysis. In the stop-word removal step, non-lexical elements such as numbers, punctuation marks, and double spaces are removed from the text. In the part-of-speech selection step, only nouns and symbols are retained: nouns refer to things and therefore express the intent of a message better than other parts of speech, and the more illegal a text is, the more frequently symbols are used. Each selected item is labeled 'legal' or 'illegal'; to turn the selected data into learning data, it is necessary to classify whether each item is legitimate or not. The processed data are then converted into a corpus and a document-term matrix. Finally, the 'legal' and 'illegal' files were mixed and randomly divided into a learning data set and a test data set; in this study, we used 70% of the data for learning and 30% for testing. SVM was used as the discrimination algorithm. Since SVM requires gamma and cost values as its main parameters, we set gamma to 0.5 and cost to 10 based on the optimal value function; the cost is set higher than in typical cases. To show the feasibility of the proposed idea, we compared the proposed method with MLE (Maximum Likelihood Estimation), term frequency, and a collective intelligence method, using overall accuracy as the metric. As a result, the overall accuracy of the proposed method was 92.41% for illegal loan advertisements and 77.75% for illegal visit sales, which is clearly superior to that of term frequency, MLE, and the other methods; the results therefore suggest that the proposed method is valid and practically usable. In this paper, we propose a framework for crisis management caused by abnormalities in unstructured data sources such as SNS. We hope this study will contribute to academia by identifying what to consider when applying SVM-like discrimination algorithms to text analysis, and that it will also contribute to practitioners in the fields of brand management and opinion mining.
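
As a rough illustration of the classification step described above (document-term matrix, 70/30 split, RBF-kernel SVM with gamma = 0.5 and cost = 10), the following Python sketch uses scikit-learn. It is not the authors' implementation, and the documents and labels are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): SVM text classification on a
# document-term matrix. `docs` stands in for the preprocessed articles
# (nouns/symbols only); `labels` marks each as 'legal' or 'illegal'.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

docs = [                                    # hypothetical placeholder documents
    "무담보 대출 즉시 승인 연락주세요",
    "사채 급전 당일 입금 가능",
    "개인 돈 대출 무직자 환영",
    "지자체 금융 사기 예방 교육 안내",
    "은행 대출 금리 동향 뉴스",
    "서민 금융 지원 정책 발표",
]
labels = ["illegal", "illegal", "illegal", "legal", "legal", "legal"]

# Build the document-term matrix.
X = CountVectorizer().fit_transform(docs)

# 70% learning data, 30% test data, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=0, stratify=labels)

# RBF-kernel SVM with the reported parameters: gamma = 0.5, cost (C) = 10.
clf = SVC(kernel="rbf", gamma=0.5, C=10)
clf.fit(X_train, y_train)

print("overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

In practice the document set would be the hundreds of thousands of collected articles, and gamma and cost could be tuned by a parameter search rather than fixed.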

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae;Kim, Jong-Woo;Cho, Yong-Seok;Kang, Sang-Gil
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.147-161
    • /
    • 2010
  • As broadcasting and communication have recently converged, communication has been joined to TV, and TV viewing has changed in many ways. IPTV (Internet Protocol Television) provides information services, movie content, and broadcasting over the Internet, combining live programs with VOD (video on demand). Delivered over communication networks, it has become a new business issue, and new technical issues have arisen around imaging technology for the service, networking technology that avoids video interruptions, security technologies to protect copyright, and so on. Through the IPTV network, users can watch the programs they want whenever they want. However, IPTV makes it difficult to find programs through either search or menu navigation. Menu navigation takes a long time to reach a desired program, and search fails when the title, genre, actors' names, and so on are not known; in addition, entering characters with a remote control is cumbersome. An even bigger problem is that users are often unaware of the services available to them. Thus, to resolve the difficulty of selecting VOD services in IPTV, a personalized service is needed that enhances users' satisfaction and uses their time efficiently. To address these shortcomings, this paper provides programs suited to individual users, saving their time, through a filtering and recommendation system. The proposed recommendation system collects TV program information, the user's preferred genres and sub-genres, channels, watched programs, and viewing-time information from each individual's IPTV viewing records. To find similarities among programs, similarity is compared using a TV program ontology, because the distance between programs can be measured by similarity comparison. The TV program ontology we use is extracted from TV-Anytime metadata, which represents the semantics of programs, and the ontology expresses contents and features numerically. Vocabulary similarity is determined through WordNet: all words describing the programs are expanded into their upper and lower classes to judge word similarity, and the average over the descriptive keywords is taken. Based on the calculated distance criterion, similar programs are grouped by the K-medoids partitioning method. K-medoids is a partitioning method that divides objects into groups with similar characteristics: it selects K representative objects (medoids), assigns each object to the nearest representative, and, starting from an initial division of the n objects into K clusters, repeatedly replaces the temporary representatives until the optimal representative objects are found, so that similar programs end up clustered together. When programs are selected through this cluster analysis, weights are assigned to the recommendations as follows. When each cluster recommends programs, programs close to the representative object are recommended to users; the distance, calculated with the same similarity measure, becomes the basic figure that determines the ranking of the recommended programs. A weight is also computed from the number of programs in the viewing list: the more programs a cluster contains, the higher the weight it is given. This is defined as the cluster weight. Using it, the representative TV programs of each cluster are selected and the final program ranking is determined. However, the cluster-representative TV programs include errors, so weights reflecting TV program viewing preferences are added to determine the final ranks, and on this basis the proposed method recommends the contents that customers prefer. An experiment with the proposed method was carried out in a controlled environment, and it shows the superiority of the proposed method compared with existing approaches.
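
The K-medoids grouping step can be sketched generically as follows. This is a standard PAM-style partitioning, not the authors' implementation, and it assumes a precomputed program-to-program distance matrix such as one derived from the ontology/WordNet similarities described above.

```python
# Minimal K-medoids (PAM-style) sketch, not the authors' implementation.
# Assumes `dist` is a precomputed n x n program-to-program distance matrix.
import numpy as np

def k_medoids(dist: np.ndarray, k: int, max_iter: int = 100, seed: int = 0):
    n = dist.shape[0]
    rng = np.random.default_rng(seed)
    medoids = rng.choice(n, size=k, replace=False)        # temporary representatives
    for _ in range(max_iter):
        # Assign each program to its nearest medoid.
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = medoids.copy()
        # Within each cluster, pick the object minimizing total distance to members.
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members) == 0:
                continue
            costs = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids[c] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break                                          # converged
        medoids = new_medoids
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

# Toy usage: 6 "programs" with Euclidean distances, grouped into 2 clusters.
rng = np.random.default_rng(1)
pts = rng.random((6, 3))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
medoids, labels = k_medoids(dist, k=2)
print(medoids, labels)
```

In the recommendation setting above, programs near each medoid would then be ranked by the same distance and further weighted by the cluster weight.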

Development and Application of the High Speed Weigh-in-motion for Overweight Enforcement (고속축하중측정시스템 개발과 과적단속시스템 적용방안 연구)

  • Kwon, Soon-Min;Suh, Young-Chan
    • International Journal of Highway Engineering
    • /
    • v.11 no.4
    • /
    • pp.69-78
    • /
    • 2009
  • Korea has achieved significant economic growth since building the Gyeongbu Expressway. As the number of new road construction projects decreases, it becomes more important to keep the existing road network in optimal condition, and one of the best ways to do so is weight enforcement as an active means of controlling traffic loads. This study develops a high-speed weigh-in-motion (HS-WIM) system to enhance the efficiency of weight enforcement, analyzes the patterns of overloaded trucks on highways using the system, and reviews the possibility of developing an overweight enforcement system based on the HS-WIM system. The HS-WIM system developed in this study consists of two sets of an axle-load sensor, a loop sensor, and a wandering sensor on each lane. The wandering sensor detects whether a travelling vehicle is off the lane by checking the location of the tire imprint. By detecting wheel distance and tire type (single or dual), the sensors classify vehicle types better than existing systems, so the errors in classifying the 12 vehicle types are very low, which is an advantage of the system. Verification tests under all conditions showed that the mean measurement errors of axle weight and gross weight were within 15 percent and 7 percent, respectively. According to the COST-323 WIM accuracy classification, the system developed in this study is ranked B(10), which means it is appropriate for the design, maintenance, and evaluation of road infrastructure. For the 5-axle cargo truck, the most frequently overloaded among the 12 vehicle types, the system is ranked A(5), which means it can be used for overload enforcement; in this case, the measurement errors of axle load and gross load were within 8 percent and 5 percent, respectively. Weight analysis of all vehicle types on highways showed that the most frequently overloaded vehicles were types 5, 6, 7, and 12. Consequently, a more effective overweight enforcement scheme is needed for vehicles that are seriously overloaded because of their lift axles. Traffic volume data by vehicle type are also basic information for road design and construction, maintenance, traffic flow analysis, road policy, and research.
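
As a simplified illustration of how per-vehicle WIM errors can be summarized against a tolerance band, the sketch below compares hypothetical WIM readings with static-scale references. The actual COST-323 class assignment uses confidence-interval criteria per weight entity, so this is only indicative.

```python
# Illustrative sketch only: summarizing WIM measurement error against a tolerance.
# The real COST-323 class check is a statistical acceptance procedure; the
# readings below are hypothetical, not data from the study.
def relative_errors(wim_values, static_values):
    """Signed relative error (%) of WIM readings against static-scale references."""
    return [100.0 * (w - s) / s for w, s in zip(wim_values, static_values)]

# Hypothetical gross-weight readings (tonnes) for a few runs of a 5-axle truck.
wim_gross    = [40.8, 39.1, 41.5, 40.2]
static_gross = [40.0, 40.0, 40.0, 40.0]

errs = relative_errors(wim_gross, static_gross)
tolerance = 5.0   # e.g., a +-5% gross-weight band, as for class A(5)
within = all(abs(e) <= tolerance for e in errs)
print([f"{e:+.1f}%" for e in errs], "all within tolerance:", within)
```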


Analysis of Factors Influencing the Integrated Bolus Peak Timing in Contrast-Enhanced Brain Computed Tomographic Angiography (Computed Tomographic Angiography (CTA)의 검사 시 조영제 집적 정점시간에 영향을 미치는 특성 인자를 분석)

  • Son, Soon-Yong;Choi, Kwan-Woo;Jeong, Hoi-Woun;Jang, Seo-Goo;Jung, Jae-Yong;Yun, Jung-Soo;Kim, Ki-Won;Lee, Young-Ah;Son, Jin-Hyun;Min, Jung-Whan
    • Journal of radiological science and technology
    • /
    • v.39 no.1
    • /
    • pp.59-69
    • /
    • 2016
  • The objective of this study was to analyze the factors influencing the integrated bolus peak time in contrast-enhanced brain computed tomographic angiography (CTA) and to determine a method of calculating a personal peak time. After the influencing factors were found through correlation analysis between the integrated peak time of the contrast medium and personally measured values obtained from monitoring CTA scans, the optimal time was calculated by multiple linear regression analysis. The radiation exposure dose was $716.53mGy{\cdot}cm$ for CTA and 15.52 mGy (2 - 34 mGy) for the monitoring scan, and the results were statistically significant (p < .01). Regression analysis revealed that the peak time changed by -0.160 per unit increase in heart rate in males, and by -0.004, -0.174, and 0.006 per unit increase in DBP, heart rate, and blood sugar, respectively, in females. In a consistency test comparing the measured peak time with the peak time calculated from the regression equation, the consistency was very high for both males and females. This study could prevent unnecessary radiation exposure by encouraging clinics to calculate a patient's personal integrated contrast-medium peak time prior to examination.
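
A rough sketch of how a personal peak time might be computed from such a regression is shown below. Only the slope values come from the abstract; the intercepts and the example inputs are hypothetical placeholders, since the full regression equations are not given in the abstract.

```python
# Sketch of estimating a personal peak time from the reported regression slopes.
# The intercepts (b0_male, b0_female) are NOT given in the abstract and are
# hypothetical placeholders; the slopes are the values reported above.
def peak_time_male(heart_rate, b0_male=20.0):
    # slope -0.160 per unit heart rate (from the abstract); intercept assumed
    return b0_male - 0.160 * heart_rate

def peak_time_female(dbp, heart_rate, blood_sugar, b0_female=20.0):
    # slopes -0.004 (DBP), -0.174 (heart rate), 0.006 (blood sugar) as reported
    return b0_female - 0.004 * dbp - 0.174 * heart_rate + 0.006 * blood_sugar

print(peak_time_male(heart_rate=70))
print(peak_time_female(dbp=80, heart_rate=70, blood_sugar=100))
```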

China's Government Audit and Governance Efficiency of Companies: Analyses of Listed Companies Controlled By China's Central State-Owned Enterprises (중국의 정부감사와 기업의 관리효율성 : 중국 중앙기업 상장자회사 분석)

  • Choe, Kuk-Hyun;Sun, Quan
    • International Area Studies Review
    • /
    • v.22 no.4
    • /
    • pp.55-75
    • /
    • 2018
  • In China, unlike private enterprises or locally administered state-owned enterprises, central state-owned enterprises are generally concentrated in cornerstone industries that are greatly influenced by public policy, so government influence on their productive activities objectively exists. As strategic resources, the listed companies controlled by central state-owned enterprises are mostly distributed across key industries that form the lifeblood of the economy and are vital to national security. Therefore, the governance efficiency of these listed companies plays an important role in optimally allocating state-owned assets, improving capital operation, improving the return on capital, and keeping state-owned assets safe. As the immune system of national governance, government audit strengthens the supervision of listed companies controlled by central state-owned enterprises in order to prevent the loss of state-owned assets and the occurrence of significant risk events, and to preserve the value of state-owned assets. As an important component of national governance, government audit arises from the publicly entrusted economic responsibility relationship. Government audit can play an important role in maintaining financial security and curbing corruption, and it can also improve a listed company's accounting stability and transparency. Although government audit can thus improve governance efficiency and keep state-owned assets safe, the existing literature on this topic is scarce. Drawing on corporate governance theory and economic responsibility theory, this study uses data from 2010-2017 to verify the relationship between government audit and the corporate performance of listed companies controlled by central state-owned enterprises. The results show that such listed companies are more likely to be audited by the government when their performance is poor. The results also show that government audit has a promoting effect on listed companies controlled by central state-owned enterprises, and that the resulting improvement in governance efficiency enhances company value. Overall, the results show that China's government audit plays a meaningful role in helping central state-owned enterprises achieve their business objectives and in promoting governance efficiency.

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.239-251
    • /
    • 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques have been widely used for stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may have to take more cases into account even when fewer cases are actually applicable. Second, it may select neighbors that are far from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-nearest neighbor and compares the predictability of k-nearest neighbor with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted using two learning datasets. For the prediction of the next day's closing price, we used four variables: the opening, daily high, daily low, and daily closing prices. In one experiment, data from January 1, 2000 to December 31, 2017 were used for learning; in the other, data from January 1, 2015 to December 31, 2017 were used. The test data covered January 1, 2018 to August 31, 2018 in both experiments. We compared the performance of k-NN with the random walk model on the two learning datasets. With the smaller learning dataset, the mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN; with the larger learning dataset, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN. These results show that predictive power is higher when more learning data are used, and that k-NN generally produces better predictive power than the random walk model for larger learning datasets but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. Also, to produce better results, it is recommended that k-nearest neighbor find nearest neighbors with a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
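
A minimal sketch of the k-NN prediction and MAPE evaluation described above is given below. It is not the paper's code, and the data used here are synthetic placeholders rather than Samsung Electronics prices.

```python
# Minimal sketch (not the paper's code): predicting next-day close with k-NN
# from [open, high, low, close] features, and scoring with MAPE.
# Assumes `prices` is a chronologically ordered array of daily OHLC rows.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_mape(prices: np.ndarray, n_train: int, k: int = 5) -> float:
    X, y = prices[:-1], prices[1:, 3]          # today's OHLC -> tomorrow's close
    X_train, y_train = X[:n_train], y[:n_train]
    X_test, y_test = X[n_train:], y[n_train:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    pred = model.predict(X_test)
    return float(np.mean(np.abs((y_test - pred) / y_test)) * 100)

# Toy usage with synthetic data (the study would load real OHLC prices instead).
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 500))
ohlc = np.column_stack([close, close + 0.5, close - 0.5, close])
print(f"k-NN MAPE: {knn_mape(ohlc, n_train=400):.4f}%")
```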

UX Methodology Study by Data Analysis Focusing on deriving persona through customer segment classification (데이터 분석을 통한 UX 방법론 연구 고객 세그먼트 분류를 통한 페르소나 도출을 중심으로)

  • Lee, Seul-Yi;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.151-176
    • /
    • 2021
  • As the information technology industry develops, various kinds of data are being created, and processing them and using them in industry has become essential. Analyzing and utilizing the diverse digital data collected online and offline is a necessary process for providing customers with an appropriate experience. To create new businesses, products, and services, it is essential to use customer data collected in various ways to deeply understand potential customers' needs and to analyze behavior patterns to capture hidden signals of desire. However, data analysis and UX methodology, which should be conducted in parallel for effective service development, are in practice carried out separately, and there are few examples of their combined use in industry. In this work, we construct a single process by applying data analysis methods together with UX methodologies. This study is important in that it applies methodologies that are actively used in practice and is therefore highly likely to be adopted. We conducted a survey on the topic to identify and cluster the associations between factors in order to establish a customer classification and target customers. The research method is as follows. First, we conduct factor and regression analyses to determine the associations between factors in the happiness survey data. Respondents are grouped according to the survey results, and the relationships among 34 questions covering psychological stability, family life, relational satisfaction, health, economic satisfaction, work satisfaction, daily life satisfaction, and residential environment satisfaction are identified. Second, we classify clusters based on the factors affecting happiness and extract the optimal number of clusters; based on the results, we cross-analyze the characteristics of each cluster. Third, for service definition, analysis was conducted by correlating with keywords related to happiness: we leverage keyword analysis of thumb trend to derive ideas based on the interest in and associations of each keyword, and we also collected approximately 11,000 news articles on the top three keywords most highly related to happiness, derived issues between keywords through text mining analysis in SAS, and used them in defining services after the ideas were conceived. Fourth, based on the characteristics identified through the data analysis, we selected segmentation and targeting appropriate for service discovery; to this end, the factor characteristics were grouped into four groups, profiles were drawn up, and the main target customers were selected. Fifth, based on the characteristics of the main target customers, interviewees were selected and in-depth interviews were conducted to discover the causes of happiness, the causes of unhappiness, and the needs for services. Sixth, we derive customer behavior patterns based on the segment results and the detailed interviews, and specify the objectives associated with each characteristic. Seventh, a typical persona based on qualitative surveys and a persona based on data were produced, and their characteristics and pros and cons were analyzed by comparing the two. Existing market segmentation classifies customers based on purchasing factors, whereas UX methodology measures users' behavior variables to establish criteria and redefine user classification. By utilizing these segment classification methods and applying them to the process of producing user classifications and personas in UX methodology, more accurate customer classification schemes can be obtained. The significance of this study is twofold. First, the idea of using data, a hot topic today, to create a variety of services was linked to the UX methodology used to plan IT services. Second, we further enhance user classification by applying segment analysis methods that are currently underused in UX methodologies. To provide a consistent experience in creating a single service, from the large scale to the small, it is necessary to define customers with common goals; to this end, personas must be derived and various stakeholders persuaded. Under these circumstances, designing a consistent experience from beginning to end through fast and concrete user descriptions is a very effective way to produce a successful service.
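
As an illustration of the clustering step (choosing an optimal number of clusters from survey factor scores), the sketch below uses k-means with silhouette scores. The paper does not specify its clustering algorithm, and the factor scores here are synthetic placeholders.

```python
# Illustrative sketch only: one common way to cluster survey factor scores and
# pick a cluster count, using k-means and silhouette scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Hypothetical factor scores (e.g., psychological stability, economic satisfaction, ...)
scores = np.vstack([rng.normal(m, 0.5, size=(100, 4)) for m in (-1.0, 0.0, 1.0)])

best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    s = silhouette_score(scores, labels)
    if s > best_s:
        best_k, best_s = k, s

print(f"chosen k = {best_k} (silhouette = {best_s:.3f})")
# The cluster centers could then be profiled to describe each segment / persona.
```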