• Title/Summary/Keyword: Data Representation Methods (데이터의 표현방법)


Effects of Design Innovations on Small and Medium Enterprises' International Competitiveness (디자인혁신이 중소기업의 국제경쟁력에 미치는 영향)

  • Lee Soo-Bong
    • Archives of design research
    • /
    • v.19 no.4 s.66
    • /
    • pp.163-174
    • /
    • 2006
  • The purpose of this study is to discuss the effects of product design innovation on small and medium enterprises' business performance and, further, on those enterprises' international competitiveness, by reviewing previous studies that quantitatively analyzed the economic and technological performance and ripple effects of products developed through design innovation. To determine how much design innovation influences and contributes to small and medium enterprises' international competitiveness, the researcher drew on statistical data from quantitative analyses of the business performance brought by design-innovation development and investment, and of economic effects such as increased sales and exports. The results of the study can be summarized as follows. First, product design innovation by small and medium enterprises directly contributes to a range of technological and economic achievements, for example improved product quality, increased product profitability, product differentiation, improved price competitiveness, and increased sales and exports. Second, the technological and economic achievements brought by product design innovation can lead directly to ripple effects such as the accumulation of related knowledge and know-how, strengthened product competitiveness, an improved corporate image, increased sales and net profit, and the satisfaction of diverse consumer requirements. Third, these achievements and ripple effects all become important factors and sources through which small and medium enterprises strengthen their international competitiveness in world markets and maintain a sustainable competitive advantage. Fourth, the business performance and economic effects brought by design innovation can be quantitatively measured and analyzed with statistical data, and such data can help express the very value and nature of design in a quantitative way. This study is significant in that its results are based on statistical data from empirical, objective measurement and quantification. The researcher hopes that the study contributes to promoting design innovation by small and medium enterprises and helps the CEOs of those businesses better understand the value and nature of design.


Probability-based Pre-fetching Method for Multi-level Abstracted Data in Web GIS (웹 지리정보시스템에서 다단계 추상화 데이터의 확률기반 프리페칭 기법)

  • 황병연;박연원;김유성
    • Spatial Information Research
    • /
    • v.11 no.3
    • /
    • pp.261-274
    • /
    • 2003
  • An effective probability-based tile pre-fetching algorithm and a collaborative cache replacement algorithm can reduce the response time for user requests in Web GISs (Geographic Information Systems) by transferring in advance tiles that are likely to be used, and by choosing the tiles to be removed from a client's limited cache space based on their future access probabilities. Web GISs maintain multi-level abstracted data so that zoom-in and zoom-out queries can be answered quickly. The previous pre-fetching algorithm, however, operates only on a two-dimensional pre-fetching space and does not consider the expanded pre-fetching space required for multi-level abstracted data. In this paper, a probability-based pre-fetching algorithm for multi-level abstracted data in Web GISs is proposed. The algorithm extends the previous two-dimensional pre-fetching space to a three-dimensional one, so that tiles of upper or lower levels can also be pre-fetched. We evaluated the proposed algorithm by simulation: the response time for user requests improved by 1.8%∼21.6% on average. Consequently, in Web GISs with multi-level abstracted data, the proposed pre-fetching algorithm together with the collaborative cache replacement algorithm can substantially reduce the response time for user requests.

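The three-dimensional pre-fetching space described in the abstract above can be sketched as follows. This is an illustrative toy, not the authors' implementation: tile ids, the Laplace-smoothed transition counts, and the access log are all assumptions for the example.

```python
def candidate_tiles(tile):
    """Neighbors in the 3D pre-fetching space: pans at the same zoom level,
    the parent tile (zoom-out), and the four child tiles (zoom-in)."""
    level, x, y = tile
    pans = [(level, x + dx, y + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]
    parent = [(level - 1, x // 2, y // 2)] if level > 0 else []
    children = [(level + 1, 2 * x + i, 2 * y + j)
                for i in (0, 1) for j in (0, 1)]
    return pans + parent + children

def prefetch(tile, transition_counts, k=3):
    """Rank candidate tiles by estimated access probability
    (Laplace-smoothed transition counts) and return the top k."""
    counts = {t: transition_counts.get((tile, t), 0) + 1
              for t in candidate_tiles(tile)}
    total = sum(counts.values())
    ranked = sorted(counts, key=lambda t: -counts[t] / total)
    return ranked[:k]

# Illustrative access log: zoom-in to (3, 8, 8) seen 10 times,
# pan to (2, 5, 4) seen 5 times from the current tile (2, 4, 4).
log = {((2, 4, 4), (3, 8, 8)): 10, ((2, 4, 4), (2, 5, 4)): 5}
top = prefetch((2, 4, 4), log)
```

The most frequent transitions dominate the ranking, so both the zoom-in child and the pan neighbor are fetched before any unvisited candidate.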

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on digital archaeological data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried-cultural-heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics that uses open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning methods. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results of this study confirm that, in the quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal, and the quantitative quality analyses of the open-source and commercial software showed high similarity. Data processing, however, was overwhelmingly faster in the commercial software, presumably a result of the higher computational speed of its improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred: 3D models generated by open-source software exhibited noise and harshness on the mesh surface, and the production marks and patterns of the relics were difficult to confirm. Nevertheless, some of the open-source software produced quality comparable to that of the commercial software in both the quantitative and qualitative evaluations. Open-source software for editing 3D models supported not only post-processing, matching, and merging of the 3D model, but also scale adjustment, production of the join surface, and rendering of the images needed for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the available open-source software has diversified and its performance has significantly improved. Given the high accessibility of such digital technology, 3D model data acquired in archaeology will serve as basic data for the preservation and active study of cultural heritage.
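The quantitative check described in the abstract above, comparing dimensions measured on the actual artifact against the corresponding distances on its 3D model, can be sketched as follows. The measurement values and dimension names are illustrative assumptions.

```python
def deviations(actual_mm, model_mm):
    """Per-dimension absolute deviation, plus its mean and maximum (mm)."""
    diffs = [abs(a - m) for a, m in zip(actual_mm, model_mm)]
    return diffs, sum(diffs) / len(diffs), max(diffs)

# Illustrative caliper vs. model measurements:
# height, rim diameter, base diameter (mm)
actual = [152.4, 88.1, 40.3]
model = [152.1, 88.4, 40.2]
diffs, mean_dev, max_dev = deviations(actual, model)
```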

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.127-146
    • /
    • 2022
  • Recently, as word embedding has shown excellent performance in various deep-learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding has been actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses, in particular to mapping between specialized and general domains. In other words, it is expected that the vocabulary of specialized fields such as R&D, medicine, and law can be mapped into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or that vector alignment can provide a clue for mapping vocabulary between mutually different specialized fields. However, the linear vector alignment that has mainly been studied assumes statistical linearity and thus tends to oversimplify the vector space: it essentially assumes that the two vector spaces are geometrically similar, which inevitably causes distortion in the alignment process. To overcome this limitation, we propose a deep-learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology sequentially trains a skip-connected autoencoder and a regression model to align the specialized word embeddings expressed in each space with the general embedding space; through inference with the two trained models, the specialized vocabulary can then be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the 'health care' field among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology outperforms existing linear vector alignment in terms of cosine similarity.
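The linear baseline that the abstract above argues against can be sketched in a few lines: learn a single matrix W mapping specialized-domain vectors X onto general-domain vectors Y by least squares. The paper's contribution replaces this linear map with a skip-connected autoencoder followed by a regression model; the toy vectors below are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))        # specialized-domain word vectors
true_map = rng.normal(size=(8, 8))
Y = X @ true_map                     # their general-domain counterparts

# Linear alignment: W = argmin ||X W - Y||_F
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

aligned = X @ W
sim = np.mean([cos(aligned[i], Y[i]) for i in range(len(Y))])
```

When the true relation between the spaces is nonlinear, no single W can fit it well, which is exactly the distortion the nonlinear autoencoder-plus-regression model is designed to avoid.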

An Embedded Watermark into Multiple Lower Bitplanes of Digital Image (디지털 영상의 다중 하위 비트플랜에 삽입되는 워터마크)

  • Rhee, Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.6 s.312
    • /
    • pp.101-109
    • /
    • 2006
  • Recently, with the widespread use of the Internet and the development of related application programs, the distribution and use of multimedia content (text, images, video, audio, etc.) has become very easy. A digital signal can be easily duplicated, and the duplicate has the same quality as the original, which makes it difficult to establish the original owner. Copyright-protection methods that address this problem include encryption and watermarking. Digital watermarking is used to protect intellectual property (IP) and to authenticate the owner of multimedia content. In this paper, the proposed watermarking algorithm embeds a watermark into multiple lower bitplanes of a digital image. The original and watermark images are each decomposed into bitplanes, and the watermarking operation is executed in the corresponding bitplanes. The position of the watermark image embedded in each bitplane serves as the watermarking key, and the embedding is performed in multiple lower bitplanes, which have no influence on human visual recognition. The algorithm can thus present the watermark image as multiple inherent patterns while requiring only a small watermarking capacity. In the experiments, with a criterion PSNR of 40 dB for the watermarked image, the author confirmed high robustness against JPEG, MEDIAN, and PSNR attacks, but weakness against NOISE, RNDDIST, ROT, SCALE, and SS attacks in the spatial domain.
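The basic bitplane embedding described in the abstract above can be sketched as follows. This is an illustrative simplification, not the exact published scheme: it writes a binary watermark into one lower bitplane of a grayscale image, with the embedding position acting as the key.

```python
import numpy as np

def embed(image, mark, plane=1, pos=(0, 0)):
    """Write the watermark bits into the given lower bitplane at pos."""
    out = image.copy()
    r, c = pos
    h, w = mark.shape
    region = out[r:r + h, c:c + w]
    region &= ~np.uint8(1 << plane)             # clear the target bitplane
    region |= (mark.astype(np.uint8) << plane)  # write the watermark bits
    return out

def extract(image, shape, plane=1, pos=(0, 0)):
    """Read the watermark bits back out of the bitplane."""
    r, c = pos
    h, w = shape
    return (image[r:r + h, c:c + w] >> plane) & 1

img = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
wm = np.array([[1, 0], [0, 1]], dtype=np.uint8)
marked = embed(img, wm, plane=1, pos=(2, 2))
```

Because only bitplane 1 is touched, each marked pixel changes by at most 2 gray levels, which is why embedding in the lower planes has no influence on human visual recognition.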

Functional Analysis of Expressed Sequence Tags from Hanwoo (Korean Cattle) cDNA Libraries (한우 cDNA 라이브러리에서 발현된 ESTs의 기능분석)

  • Lim, Da-Jeong;Byun, Mi-Jeong;Cho, Yong-Min;Yoon, Du-Hak;Lee, Seung-Hwan;Shin, Youn-Hee;Im, Seok-Ki
    • Journal of Animal Science and Technology
    • /
    • v.51 no.1
    • /
    • pp.1-8
    • /
    • 2009
  • We generated 57,598 expressed sequence tags (ESTs) from three cDNA libraries of Hanwoo (Korean cattle): fat, loin, and liver. Liver, intermuscular fat, and longissimus dorsi tissues were obtained from a 24-month-old Hanwoo steer immediately after slaughter, and the cDNA libraries were constructed with the oligo-capping method. The EST data were clustered and assembled into unique sequences: 4,759 contigs and 7,587 singletons. For functional analysis, Gene Ontology (GO) annotation was performed and significant leaf nodes were identified by searching for significant p-values from the 2nd-level GO terms down to the leaf nodes, using the Bonferroni correction. We found 13, 26, and 8 significant leaf nodes unique to the transcripts in the three GO categories: molecular function, biological process, and cellular component. Digital gene expression profiling using Audic's test was also performed, and tissue-specific genes were detected in the three libraries.
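The Bonferroni correction used in the abstract above guards against false positives when many GO terms are tested at once: with m simultaneous tests, a term is called significant only if its raw p-value is below alpha / m. The p-values below are illustrative assumptions.

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Indices of tests that remain significant after Bonferroni correction."""
    m = len(p_values)
    return [i for i, p in enumerate(p_values) if p < alpha / m]

# Raw p-values for five hypothetical GO terms; threshold = 0.05 / 5 = 0.01.
pvals = [0.0004, 0.03, 0.0009, 0.2, 0.004]
hits = bonferroni_significant(pvals)
```

Note that the term with p = 0.03 would pass an uncorrected 0.05 threshold but is rejected once the correction is applied.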

The Application of Geospatial Information Acquisition Technique and Civil-BIM for Site Selection (지형공간정보취득기술과 토목BIM을 활용한 부지선정 연구)

  • Moon, Su-Jung;Pyeon, Mu-Wook;Park, Hong-Gi;Ji, Jang-Hun;Jo, Jun-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.6
    • /
    • pp.579-586
    • /
    • 2010
  • Thanks to the recent development of measuring technology and 3D programs, it has become possible to obtain various kinds of spatial data. This study uses two-dimensional and three-dimensional data-extraction technology based on an existing empirical and statistical DB. The data obtained with geospatial-information acquisition technology are integrated with civil-engineering BIM to model the topography of the target region and to select the optimal site by balancing the cut and fill volumes of earth. The target area is the land around the Tamjin River, Jangheung-gun, Jeolla-do. Three-dimensional topography, linked with 3D mapping technology using the ortho-image and aerial LiDAR derived from aerial photos of the target area, is visualized with Autodesk Civil 3D. With Civil 3D, the target area is analyzed through visualization, and related data can be obtained for analysis. Using civil-engineering BIM yields varied and accurate information about the target area, which helps address the issues arising from the existing methodology. In this regard, the study searches for alternatives and provides suggestions on how to utilize this information.
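The cut-and-fill balance used in the abstract above for site selection can be sketched on a uniform elevation grid: when every cell has the same area, the grading elevation at which the cut volume equals the fill volume is simply the mean terrain elevation. The grid values below are illustrative assumptions.

```python
import numpy as np

def balance_elevation(z):
    """Grading height h at which cut == fill on a uniform-cell grid."""
    return float(np.mean(z))

def cut_fill(z, h, cell_area=1.0):
    """Cut and fill volumes for grading the terrain z to height h."""
    cut = float(np.sum(np.clip(z - h, 0, None)) * cell_area)
    fill = float(np.sum(np.clip(h - z, 0, None)) * cell_area)
    return cut, fill

z = np.array([[10., 12., 11.],
              [ 9., 10., 13.],
              [ 8., 11., 12.]])   # terrain elevations (m) on a unit grid
h = balance_elevation(z)
cut, fill = cut_fill(z, h)
```

A site-selection workflow would compute this balance for each candidate site and prefer sites where the balanced grading moves the least earth overall.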

VILODE : A Real-Time Visual Loop Closure Detector Using Key Frames and Bag of Words (VILODE : 키 프레임 영상과 시각 단어들을 이용한 실시간 시각 루프 결합 탐지기)

  • Kim, Hyesuk;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.5
    • /
    • pp.225-230
    • /
    • 2015
  • In this paper, we propose an effective real-time visual loop closure detector, VILODE, which makes use of key frames and a bag of visual words (BoW) based on SURF feature points. To determine whether the camera has re-visited a previously visited place, a loop closure detector has to compare each incoming image with all previous images collected at every visited place, and as the camera passes through new places, the number of images to compare keeps growing. For this reason, it is difficult for a visual loop closure detector to meet the real-time constraint and achieve high detection accuracy at the same time. To address the problem, the proposed system adopts an effective key-frame selection strategy that selects and compares only distinct, meaningful frames from the continuous image stream during navigation, greatly reducing the number of image comparisons needed for loop detection. Moreover, to improve detection accuracy and efficiency, the system represents each key-frame image as a bag of visual words and maintains indexes for them with the DBoW database system. Experiments with the TUM benchmark datasets demonstrate the high performance of the proposed visual loop closure detector.
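The two ideas in the abstract above, key-frame selection and BoW comparison, can be sketched as follows. This is an illustrative simplification: real systems quantize SURF descriptors with a trained vocabulary (e.g. DBoW) rather than using the tiny hand-made BoW vectors and thresholds assumed here.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def select_key_frames(bow_frames, novelty=0.9):
    """Keep a frame as a key frame only if its BoW vector is sufficiently
    dissimilar from the previous key frame."""
    keys = [bow_frames[0]]
    for f in bow_frames[1:]:
        if cosine(f, keys[-1]) < novelty:
            keys.append(f)
    return keys

def detect_loop(query, key_frames, threshold=0.95):
    """Index of the best-matching key frame, or None if no loop is found."""
    sims = [cosine(query, k) for k in key_frames]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

frames = [np.array([1., 0., 0.]), np.array([1., .05, 0.]),
          np.array([0., 1., 0.]), np.array([0., 0., 1.])]
keys = select_key_frames(frames)          # near-duplicate frame is dropped
loop = detect_loop(np.array([1., .01, 0.]), keys)
```

Only key frames are ever compared against the query, which is how the growing cost of comparing against every past image is kept bounded.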

A Scalable OWL Horst Lite Ontology Reasoning Approach based on Distributed Cluster Memories (분산 클러스터 메모리 기반 대용량 OWL Horst Lite 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE
    • /
    • v.42 no.3
    • /
    • pp.307-319
    • /
    • 2015
  • Current ontology studies use the Hadoop distributed storage framework to perform map-reduce-based reasoning over scalable ontologies. In this paper, however, we propose a novel approach to scalable Web Ontology Language (OWL) Horst Lite ontology reasoning based on distributed cluster memories. Rule-based reasoning, which is frequently used for scalable ontologies, iteratively executes triple-format ontology rules until no new data can be inferred, so when the reasoning is performed on computer hard drives, the ontology reasoner suffers from performance limitations. To overcome this drawback, we load the ontologies into distributed cluster memories using Spark, a memory-based distributed computing framework, and execute the ontology reasoning there. To implement an appropriate OWL Horst Lite ontology reasoning system on Spark, our method divides the scalable ontologies into blocks, loads each block into the cluster nodes, and subsequently handles the data in the distributed memories. We experimentally evaluated the proposed method on LUBM8000 (1.1 billion triples, 155 gigabytes) using the Lehigh University Benchmark, which measures ontology inference and search speed. Compared with WebPIE, a representative map-reduce-based scalable ontology reasoner, the proposed approach showed a throughput improvement of 320% (62k triples/s versus 19k triples/s).
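The rule-based fixpoint iteration described in the abstract above, applying triple rules until no new triples are inferred, can be sketched in memory. Only one rule (transitivity of subClassOf) is shown, and a single Python set stands in for the triple blocks that the paper distributes across Spark cluster memories; the facts are illustrative assumptions.

```python
def closure(triples):
    """Apply the subClassOf-transitivity rule until a fixpoint is reached."""
    inferred = set(triples)
    while True:
        new = {(s, "subClassOf", o2)
               for (s, p, o) in inferred if p == "subClassOf"
               for (s2, p2, o2) in inferred
               if p2 == "subClassOf" and s2 == o} - inferred
        if not new:          # fixpoint: nothing left to infer
            return inferred
        inferred |= new

facts = {("A", "subClassOf", "B"),
         ("B", "subClassOf", "C"),
         ("C", "subClassOf", "D")}
result = closure(facts)
```

Each pass corresponds to one distributed iteration; keeping `inferred` in memory rather than re-reading it from disk is exactly the saving the in-memory approach targets.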

Analysis of the Spatial Distribution of Total Phosphorus in Wetland Soils Using Geostatistics (지구통계학을 이용한 습지 토양 중 총인의 공간분포 분석)

  • Kim, Jongsung;Lee, Jungwoo
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.38 no.10
    • /
    • pp.551-557
    • /
    • 2016
  • Fusing satellite images with site-specific observations has the potential to improve the predictive quality of environmental properties, but the effect of using satellite images to predict soil properties in wetlands is still poorly understood. For this reason, block kriging and regression kriging were applied to a natural wetland, Water Conservation Area 2A in Florida, to compare how much each continuous model improves the accuracy of predicting total phosphorus in soils. Field observations were used to develop the soil total-phosphorus prediction models, and the spectral data and derived indices of Landsat ETM+, which has 30 m spatial resolution, served as independent variables for the regression kriging model. The block kriging model showed an R² of 0.59 and the regression kriging model an R² of 0.49. Although block kriging performed better than regression kriging, both models showed similar spatial patterns, and regression kriging with the Landsat ETM+ image helped capture unique and complex landscape features of the study area.
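The regression-kriging idea in the abstract above can be sketched in two steps: fit a linear trend on a remotely sensed covariate, then interpolate the residuals with simple kriging. The sample coordinates, the NDVI-like covariate, the phosphorus values, and the exponential covariance parameters are all illustrative assumptions, not the paper's data.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=50.0):
    """Exponential covariance model over distance h (illustrative params)."""
    return sill * np.exp(-h / rng)

def regression_krige(xy, covariate, z, xy0, cov0):
    # 1) trend: ordinary least squares of z on the covariate
    A = np.column_stack([np.ones(len(z)), covariate])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta
    # 2) simple kriging of the residuals at the prediction point xy0
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    C = exp_cov(d) + 1e-9 * np.eye(len(z))   # jitter for numerical stability
    c0 = exp_cov(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(C, c0)
    return float(beta[0] + beta[1] * cov0 + w @ resid)

xy = np.array([[0., 0.], [30., 0.], [0., 30.], [30., 30.]])  # sample sites (m)
ndvi = np.array([0.2, 0.4, 0.3, 0.5])                        # covariate
tp = np.array([400., 620., 510., 730.])                      # total P (mg/kg)
pred = regression_krige(xy, ndvi, tp, np.array([15., 15.]), 0.35)
```

Block kriging, by contrast, predicts directly from the observations without the covariate trend, which is why only regression kriging can pick up landscape features visible in the satellite band.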