• Title/Summary/Keyword: Big Data Processing


Software Equation Based on Function Points (기능점수 기반 소프트웨어 공식)

  • Lee, Sang-Un
    • The KIPS Transactions:PartD
    • /
    • v.17D no.5
    • /
    • pp.327-336
    • /
    • 2010
  • This paper proposes a software equation relating effort and duration to software size measured in function points (FP). Existing software equations are based on lines of code (LOC), but LOC varies greatly with the development language, which makes size estimation difficult. We first considered converting LOC to FP; however, no definitive LOC-to-FP conversion ratio has been established for each development language, so a software equation could not be derived through conversion. Therefore, we derived the software equation directly from data on large projects whose size was measured in FP. First, we selected projects with reasonable development periods. Second, we performed regression analysis relating FP to effort and relating FP to duration. Finally, the software equation was derived from these relations. The proposed model resolves the application problems of LOC-based models and has the advantage of being easy to apply in practice.
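
The kind of regression the abstract describes, relating FP to effort, is commonly done as a power-law fit in log-log space. The sketch below illustrates that generic technique only; the project data and the recovered coefficients are synthetic, not the paper's.

```python
import numpy as np

def fit_power_law(fp, effort):
    """Fit effort = a * FP^b by degree-1 least squares in log-log space.

    Illustrative sketch only; the paper's actual data and
    coefficients are not reproduced here.
    """
    log_fp = np.log(fp)
    log_effort = np.log(effort)
    # np.polyfit returns [slope, intercept] for a degree-1 fit
    b, log_a = np.polyfit(log_fp, log_effort, 1)
    return np.exp(log_a), b

# Hypothetical projects following a made-up relation effort = 0.5 * FP^1.2
fp = np.array([100.0, 250.0, 400.0, 800.0, 1500.0])
effort = 0.5 * fp ** 1.2  # person-months, synthetic
a, b = fit_power_law(fp, effort)
```

Because the synthetic data follow the power law exactly, the fit recovers `a = 0.5` and `b = 1.2` to machine precision; on real project data the residual spread would indicate how well a single equation captures the portfolio.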

A 500MSamples/s 6-Bit CMOS Folding and Interpolating AD Converter (500MSamples/s 6-비트 CMOS 폴딩-인터폴레이팅 아날로그-디지털 변환기)

  • Lee Don-Suep;Kwack Kae-Dal
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.7
    • /
    • pp.1442-1447
    • /
    • 2004
  • In this paper, a 6-bit CMOS folding and interpolating AD converter is presented. The converter is useful as an integrated part of a VLSI circuit handling both analog and digital signals, as in HDD or LAN applications. A built-in analog circuit for high-speed data-communication VLSI requires a small chip area, low power consumption, and fast data processing. The proposed folding and interpolating AD converter uses very few comparators and interpolation resistors, achieved by cascading two folders that operate on different principles. This reduced part count is a big advantage for a built-in AD converter design. The design is based on a 0.25 μm double-poly, two-metal n-well CMOS process. In simulation with a 2.5 V supply and a sampling frequency of 500 MHz, the results are as follows: power consumption of 27 mW, INL and DNL of ±0.1 LSB and ±0.15 LSB respectively, and an SNDR of 42 dB with a 10 MHz input signal.

Extracting Core Events Based on Timeline and Retweet Analysis in Twitter Corpus (트위터 문서에서 시간 및 리트윗 분석을 통한 핵심 사건 추출)

  • Tsolmon, Bayar;Lee, Kyung-Soon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.69-74
    • /
    • 2012
  • Many internet users focus on issues posted to social network services within a very short time. When a big social issue or event occurs, it affects the number of comments and retweets on Twitter that day. In this paper, we propose a method of extracting core events based on timeline analysis, sentiment features, and retweet information in Twitter data. To validate the method, we compared approaches using word frequency only, word frequency with sentiment analysis, the chi-square method only, and the chi-square method with sentiment analysis. The approaches were evaluated by the accuracy of correct answers in the top 10 results. The proposed method achieved 94.9% accuracy. The experimental results show that the proposed method is effective for extracting core events from a Twitter corpus.
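
The chi-square component among the compared methods can be sketched as a standard 2x2 term-vs-day association statistic: a term that bursts on one day scores high, while a term spread evenly across days scores near zero. The counts below are invented for illustration, not taken from the paper's corpus.

```python
def chi_square_score(a, b, c, d):
    """2x2 chi-square statistic for term-vs-day association.

    a: tweets on the target day containing the term
    b: tweets on the target day without the term
    c: tweets on other days containing the term
    d: tweets on other days without the term
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

# Hypothetical counts: a term that bursts on the event day...
score_bursty = chi_square_score(80, 920, 40, 9960)
# ...versus a term appearing at the same rate on every day
score_flat = chi_square_score(10, 990, 100, 9900)
```

Ranking terms per day by this score (optionally weighted by retweet counts and sentiment features, as in the abstract) surfaces candidate core-event keywords.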

A Study on Polynomial Neural Networks for Stabilized Deep Networks Structure (안정화된 딥 네트워크 구조를 위한 다항식 신경회로망의 연구)

  • Jeon, Pil-Han;Kim, Eun-Hu;Oh, Sung-Kwun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.66 no.12
    • /
    • pp.1772-1781
    • /
    • 2017
  • In this study, a design methodology for alleviating the overfitting problem of Polynomial Neural Networks (PNN) is realized with the aid of two techniques: L2 regularization and the Sum of Squared Coefficients (SSC). PNN is widely used as a mathematical modeling method, for example in the identification of linear systems from input/output data and in regression modeling for prediction problems. PNN is an algorithm that obtains a preferred network structure by generating consecutive layers and nodes using multivariate polynomial subexpressions. It has far fewer nodes and more flexible adaptability than existing neural network algorithms. However, such algorithms suffer from overfitting due to noise sensitivity as well as excessive training during the generation of successive network layers. To alleviate this overfitting and effectively design the ensuing deep network structure, the two techniques of SSC and L2 regularization are applied during the consecutive generation of each layer and of each layer's nodes, in order to construct the deep PNN structure. L2 regularization estimates minimal coefficients by adding a penalty term to the cost function; it is a representative method of reducing the influence of noise by flattening the solution space and shrinking coefficient sizes. SSC minimizes the sum of the squared coefficients of the polynomial instead of the squared errors. As a result, the overfitting problem of the deep PNN structure is stabilized by the proposed method. This study demonstrates the possibility of deep network structure design as well as big data processing, and the superiority of the network performance is shown through experiments.
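
The L2-regularization step the abstract describes can be illustrated with closed-form ridge estimation, where the penalty term visibly shrinks coefficient magnitudes. This is a generic sketch of that shrinkage effect on synthetic data, not the authors' PNN layer-generation procedure.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form L2-regularized least squares:
    w = (X^T X + lam * I)^{-1} X^T y.
    With lam = 0 this reduces to ordinary least squares.
    """
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

# Synthetic regression problem with known coefficients
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
y = X @ w_true + 0.1 * rng.normal(size=50)

w_plain = ridge_fit(X, y, 0.0)   # unregularized fit
w_reg = ridge_fit(X, y, 10.0)    # penalized fit: smaller coefficient norm
```

Increasing `lam` trades a little bias for a flatter solution space and smaller coefficients, which is the noise-robustness property the abstract attributes to L2 regularization.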

Development of CCD(Corrosion Control Document) in Refinery Process (정유공정의 CCD(Corrosion Control Document) 개발)

  • Kim, Jung-Hwan;Kim, Ji-Yong;Lee, Young-Hee;Park, Sang-Rok;Suh, Sun-Kyu;Lee, Yoon-Hwa;Moon, Il
    • Journal of the Korean Society of Safety
    • /
    • v.24 no.1
    • /
    • pp.31-36
    • /
    • 2009
  • This paper focuses on techniques for improving refinery reliability, availability, and profitability. Our team developed a corrosion control document (CCD) for the crude distillation unit (CDU) process. A recent study shows that the loss due to corrosion in the US is around $276 billion, a big concern for both managers and engineers in the refinery industry. The CCD consists of numerous parts, namely damage mechanisms (DM), design data, critical reliability variables (CRV), guidelines, etc. The first step in developing the CCD is to build a material selection diagram (MSD). Damage mechanisms affecting equipment and processes must be chosen carefully based on API 571. The nine DMs selected from API 571 are (1) creep/stress rupture, (2) fuel ash corrosion, (3) oxidation, (4) high-temperature sulfidation, (5) naphthenic acid corrosion, (6) hydrochloric acid (HCl) corrosion, (7) ammonium chloride (salt) corrosion, (8) wet H2S corrosion, and (9) ammonia stress corrosion cracking. Each DM related to corrosion of the CDU process was selected based on design data, P&ID, PFD, corrosion loops, process flow, equipment history, and experience. Operating variables affecting the severity of each DM are selected in the initial stage of the CRV. We propose guidelines for equipment reliability based on the CRV. The CCD has been developed on the basis of corrosion control in the refinery industry; it also improves the safety of the refinery process and greatly reduces the cost of corrosion.

Gravimetric Terrain Correction using Triangular Element Method (삼각요소법을 이용한 중력자료의 지형보정)

  • Rim, Hyoung-Rea;Lee, Heui-Soon;Park, Young-Sue;Lim, Mu-Taek;Jung, Hyun-Key
    • Geophysics and Geophysical Exploration
    • /
    • v.13 no.2
    • /
    • pp.169-174
    • /
    • 2010
  • We developed a precise terrain-correction program using the triangular element method (TEM) for microgravity data processing. TEM calculates the gravitational attraction of an arbitrary polyhedron whose surface is patched with triangles. We show that TEM can calculate the terrain effect more precisely than the conventional rectangular prism method (RPM). We tested the accuracy of TEM on a cone model, which has an analytic solution, and on a slope model; the slope-model results show big differences between the values calculated by TEM and by RPM. The developed terrain-correction program was applied to gravity data from the southern coastal area of the Korean peninsula and calculated the terrain effect very precisely.

GIS/GPS based Precision Agriculture Model in India -A Case study

  • Mudda, Suresh Kumar
    • Agribusiness and Information Management
    • /
    • v.10 no.2
    • /
    • pp.1-7
    • /
    • 2018
  • In the present-day context of the changing information needs of farmers and diversified production systems, there is an urgent need for an effective extension support system for small and marginal farmers in developing countries like India. Rapid developments in the collection and analysis of field data using spatial technologies such as GPS and GIS have been made available to extension functionaries and clientele for diversified information needs. This article describes a GIS- and GPS-based decision support system in precision agriculture for resource-poor farmers. Precision farming techniques are employed to increase yield, reduce production costs, and minimize negative impacts on the environment. The parameters that can affect crop yields, anomalous factors, and variations in management practices can be evaluated through these GPS- and GIS-based applications. The spatial visualisation capabilities of GIS technology, interfaced with a relational database, provide an effective method for analysing and displaying the impacts of extension education and outreach projects for small and marginal farmers in precision agriculture. This approach benefits from the emergence and convergence of several technologies, including the Global Positioning System (GPS), geographic information systems (GIS), miniaturised computer components, automatic control, in-field and remote sensing, mobile computing, advanced information processing, and telecommunications. The PPP convergence of person (farmer), project (the operational field), and pixel (the digital images related to the field and the crop grown in it) is better addressed by this decision support model. The convergence and emergence of such information will further pave the way for the categorisation and grouping of production systems for better extension delivery. In a big country like India, where farmers and holdings are numerous and categorically diversified, such grouping is inevitable and also economical. With this premise, an attempt has been made to develop a precision farming model suitable for developing countries like India.

An effective edge detection method for noise images based on linear model and standard deviation (선형모형과 표준편차에 기반한 잡음영상에 효과적인 에지 검출 방법)

  • Park, Youngho
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.6
    • /
    • pp.813-821
    • /
    • 2020
  • Recently, research using unstructured data such as images and videos has been actively conducted in various fields. Edge detection is one of the most useful image enhancement techniques for improving the quality of image processing. However, edge detection in noisy images is very difficult because both edges and noise contain high-frequency components. This paper presents an effective edge detection method for noisy images based on a linear model and standard deviation. An edge is detected from the difference between the standard deviation of the pixels in a pixel block and the standard deviation of the residuals obtained by fitting the linear model. The results are compared with those of the Sobel edge detector. On the original image, the Sobel result and the proposed result are similar, and the proposed method was confirmed to detect edges with reduced noise across various noise levels.
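
The block-wise score the abstract describes can be sketched directly: fit a planar (linear-in-x-and-y) model to each pixel block and take the raw standard deviation minus the residual standard deviation. For a noise-only block the two are nearly equal, so the score stays near zero, while a step edge leaves a large gap. This is an illustrative reading of the abstract, not the authors' exact implementation; block size and thresholds are assumptions.

```python
import numpy as np

def edge_score(block):
    """Score a square pixel block: std of raw pixels minus std of
    residuals after fitting a planar model z = b0 + b1*x + b2*y.
    Near zero for flat/noise-only blocks, large for edge blocks.
    """
    h, w = block.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xs.ravel(), ys.ravel()])
    z = block.ravel().astype(float)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    return z.std() - resid.std()

# An 8x8 step-edge block versus a pure-noise block
step = np.zeros((8, 8))
step[:, 4:] = 10.0
rng = np.random.default_rng(1)
noise = rng.normal(size=(8, 8))
```

Sliding this score over the image and thresholding it yields the edge map that the paper compares against the Sobel detector.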

A study on the method of measuring the usefulness of De-Identified Information using Personal Information

  • Kim, Dong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.6
    • /
    • pp.11-21
    • /
    • 2022
  • Although interest in de-identification measures for the safe use of personal information is growing at home and abroad, there are cases where de-identified information is re-identified through insufficient de-identification measures and inference. To address these problems and discover new de-identification techniques, competitions on the safety and usefulness of de-identified information are being held in Korea and Japan. This paper analyzes the safety and usefulness indicators used in these competitions, and proposes and verifies new indicators that can measure usefulness more efficiently. Although verification with a large population was not possible due to the significant shortage of experts in mathematics and statistics within the de-identification field, very positive results were obtained for the necessity and validity of the new indicators. To safely utilize the vast amount of public data in Korea as de-identified information, research on such usefulness metrics should continue, and more active research is expected to proceed starting with this thesis.

A Study on the Security Threat Response in Smart Integrated Platforms (스마트 통합플랫폼 보안위협과 대응방안 연구)

  • Seung Jae Yoo
    • Convergence Security Journal
    • /
    • v.22 no.1
    • /
    • pp.129-134
    • /
    • 2022
  • A smart platform is defined as an evolved platform that realizes physical and virtual space as a hyper-connected environment by combining an existing platform with advanced IT technology. Hyper-connection, the connection between information and information, infrastructure and infrastructure, infrastructure and information, or space and service, enables high-quality services that significantly change users' quality of life and environment. It also greatly improves the social safety net and personal health management through smart government and smart healthcare. However, the large amounts of information produced and consumed in these processes can threaten the basic rights of the public and of individuals, either directly or through big data analysis. In particular, as the smart platform, a core function forming the ecosystem of a smart city, naturally and continuously expands, it faces a huge security burden in data processing and network operation. In this paper, the platform components that serve as core functions of a smart city, together with the corresponding security threats and countermeasures, are studied.