• Title/Summary/Keyword: Software Product


Study on Temperature Distribution in Cold Storage of Korean Garlic in Wire Mesh Pallet Container Using CFD Analysis (CFD 해석을 이용한 철망 파렛트 컨테이너 적입 마늘의 저온 저장고내 온도 분포 연구)

  • Dong-Soo Choi;Yong-Hoon Kim;Jin-Se Kim;Chun-Wan Park;Hyun-Mo Jung;Jong-Min Park
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.29 no.3
    • /
    • pp.195-201
    • /
    • 2023
  • Garlic (Allium sativum) is a major crop in most Asian countries, and its consumption in Asia-Pacific countries exceeds 90% of global consumption. It contains beneficial ingredients and numerous essential nutrients, such as manganese, vitamin B6, and vitamin B1. Garlic demand is rising not only in Asian countries but also around the world; in particular, it has been steadily increasing in European countries such as Spain, France, and Italy, and on the American continent. South Korea produced 331,671 tons of garlic in 2018 and 387,671 tons in 2019, making it the fifth-largest garlic producer in the world, and production has been increasing every year. In this study, the temperature distribution during cold storage of Korean garlic in a folding wire mesh pallet container was analyzed using CFD (Computational Fluid Dynamics); the computations were based on commercial simulation software (ANSYS Workbench Ver. 18.0). Considering the respiration heat of garlic, the temperature in the area in contact with the cold air decreased quickly due to the inflow of cold air, while the temperature at the center of the pallet decreased very slowly. To maintain a uniform temperature distribution inside an agricultural product storage pallet in a low-temperature warehouse, it is considered desirable to install an air passageway that allows low-temperature air to flow into the wire mesh pallet.
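The cooling behavior the abstract describes (fast cooling at the faces, a lagging pallet centre) can be sketched with a one-dimensional finite-difference model. The thermal diffusivity, pallet width, respiration heat term, and time step below are illustrative assumptions, not values from the paper.

```python
# Minimal 1D explicit finite-difference sketch of pallet cooling.
# All parameters (alpha, q, dimensions) are illustrative, not from the paper.

def simulate_cooling(n=21, alpha=1.5e-7, q=5.0e-6, dx=0.05, dt=60.0,
                     t_air=0.0, t_init=20.0, steps=2880):
    """Explicit FTCS scheme: dT/dt = alpha * d2T/dx2 + q, with the
    boundaries held at the cold-air temperature (the wire mesh faces)."""
    T = [t_init] * n
    r = alpha * dt / dx**2  # must be <= 0.5 for stability; here ~0.0036
    for _ in range(steps):  # 2880 steps of 60 s = 48 hours
        new = T[:]
        for i in range(1, n - 1):
            new[i] = T[i] + r * (T[i-1] - 2*T[i] + T[i+1]) + q * dt
        new[0] = new[-1] = t_air
        T = new
    return T

profile = simulate_cooling()
# The faces cool quickly while the centre lags, mirroring the reported pattern.
```

Even this toy model reproduces the qualitative result: the interior stays warm because diffusion from the cold faces is slow relative to the respiration heat source, which motivates the proposed air passageway.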

Development of an Information System for Accounting for the Level of Training of Future Specialists in the Field of Information Technology

  • Alla Kapiton;Nataliia Kononets;Valeriy Zhamardiy;Lesya Petrenko;Nadiya Kravtsova;Tetiana Blahova
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.1
    • /
    • pp.95-106
    • /
    • 2024
  • The article is devoted to the design and development of an information system for preserving the results of tests that verify the residual knowledge of students training to become specialists in information and communication technologies. The purpose of the study is to provide a scientific justification for the problem of improving the professional training of such specialists through the use of this information system, and to verify the effectiveness of its implementation in universities. According to the results of the experiment, it can be argued that introducing an information system that preserves the results of testing students' residual knowledge into the educational process contributes to the professional training of information and communication technology specialists at the universities of Ukraine. The practice of developing and using modern information technologies oriented toward the psychological and pedagogical goals of teaching and education is fundamentally new, mediated by modern technical and technological innovations.

Enhanced Machine Learning Preprocessing Techniques for Optimization of Semiconductor Process Data in Smart Factories (스마트 팩토리 반도체 공정 데이터 최적화를 위한 향상된 머신러닝 전처리 방법 연구)

  • Seung-Gyu Choi;Seung-Jae Lee;Choon-Sung Nam
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.24 no.4
    • /
    • pp.57-64
    • /
    • 2024
  • The introduction of Smart Factories has transformed manufacturing towards more objective and efficient line management. However, most companies are not effectively utilizing the vast amount of sensor data collected every second. This study aims to use this data to predict product quality and manage production processes efficiently. Due to security issues, specific sensor data could not be verified, so semiconductor process-related training data from the "SAMSUNG SDS Brightics AI" site was used. Data preprocessing, including removing missing values, outliers, scaling, and feature elimination, was crucial for optimal sensor data. Oversampling was used to balance the imbalanced training dataset. The SVM (rbf) model achieved high performance (Accuracy: 97.07%, GM: 96.61%), surpassing the MLP model implemented by "SAMSUNG SDS Brightics AI". This research can be applied to various topics, such as predicting component lifecycles and process conditions.
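The preprocessing pipeline the abstract describes (missing-value removal, scaling, and oversampling to balance classes) can be sketched in plain Python. The column names and values below are hypothetical stand-ins, not the actual semiconductor sensor data.

```python
import random

# Illustrative sketch of the preprocessing steps: missing-value removal,
# min-max scaling, and random oversampling of the minority class.
# Column names and values are hypothetical, not the actual process data.

rows = [
    {"temp": 350.0, "pressure": 1.2, "fail": 0},
    {"temp": 355.0, "pressure": None, "fail": 0},   # missing value -> dropped
    {"temp": 420.0, "pressure": 2.9, "fail": 1},
    {"temp": 348.0, "pressure": 1.1, "fail": 0},
]

# 1) Remove rows with missing values.
clean = [r for r in rows if all(v is not None for v in r.values())]

# 2) Min-max scale each feature to [0, 1].
feats = ["temp", "pressure"]
for f in feats:
    lo = min(r[f] for r in clean)
    hi = max(r[f] for r in clean)
    for r in clean:
        r[f] = (r[f] - lo) / (hi - lo) if hi > lo else 0.0

# 3) Random oversampling: duplicate minority-class rows until balanced.
random.seed(0)
major = [r for r in clean if r["fail"] == 0]
minor = [r for r in clean if r["fail"] == 1]
while len(minor) < len(major):
    minor.append(dict(random.choice(minor)))
balanced = major + minor
```

The balanced dataset would then be passed to a classifier such as an SVM with an RBF kernel, as in the study.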

An investigation of the User Research Techniques in the User-Centered Design Framework - Focused on the on-line community services development for 13-18 Young Adults (사용자 중심 디자인 프레임워크에서 사용자 조사기법의 역할에 관한 연구 - 13-18 청소년용 온라인 커뮤니티 컨텐트 개발 프로젝트를 중심으로)

  • 이종호
    • Archives of design research
    • /
    • v.17 no.2
    • /
    • pp.77-86
    • /
    • 2004
  • The User-Centered Design approach plays an important role in dealing with usability issues when developing modern technology products. Yet it is still questionable whether the User-Centered approach alone is enough for developing successful consumer content, since User-Centered Design originated in the software engineering field, where meeting customers' functional requirements is the most critical aspect of developing software. The modern consumer market is already saturated, however, and in order to meet ever-increasing consumer requirements, the User-Centered Design approach needs to be expanded. As a way of incorporating the User-Centered approach into consumer product development, Jordan suggested the 'Pleasure-based Approach' in the industrial design field, which usually generates multi-dimensional user requirements: 1) physical, 2) cognitive, 3) identity, and 4) social. The current tendency is that many portal and community service providers focus on fulfilling both functional and emotional needs of users when developing new items, content, and services. Previously, fulfilling consumers' emotional needs depended solely on the visual designer's graphical sense and capability. However, taking a customer-centered approach to drawing out consumers' unknown needs is becoming critical in the competitive market environment. This paper reviews different types of user research techniques and categorizes them into six groups based on Kano's (1992) product quality model. Based on his theory, only performance factors, such as usability, can be identified through the user-centered design approach; the approach has to be expanded to include factors such as personality, sociability, and pleasure. In order to identify performance as well as excitement factors through user research, a user-research framework was established and tested through a case study, 'the development of a new online service for teens'.
The results of the user research are summarized at the end of the paper, and the pros and cons of each research technique are analyzed.


From a Defecation Alert System to a Smart Bottle: Understanding Lean Startup Methodology from the Case of Startup "L" (배변알리미에서 스마트바틀 출시까지: 스타트업 L사 사례로 본 린 스타트업 실천방안)

  • Sunkyung Park;Ju-Young Park
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.18 no.5
    • /
    • pp.91-107
    • /
    • 2023
  • Lean startup is a concept that combines "lean," meaning an efficient way of running a business, and "startup," meaning a new business. It is often cited as a strategy for minimizing failure in early-stage businesses, especially software-based startups. By scrutinizing the case of startup L, this study suggests that lean startup methodology (LSM) can also be useful for hardware and manufacturing companies, and identifies ways for early startups to implement LSM successfully. To this end, the study explains the core of LSM, including the concepts of the hypothesis-driven approach, the build-measure-learn (BML) feedback loop, the minimum viable product (MVP), and the pivot. Five criteria for evaluating the successful implementation of LSM were derived from these core concepts and applied to the case of startup L. The early startup L pivoted its main business model from a defecation alert system for patients with limited mobility, to one for infants and toddlers, and finally to a smart bottle for infants. Analyzed from LSM's perspective, in developing the former two products company L neither established a specific customer value proposition for its startup idea nor verified it through an MVP experiment, and thus failed to create a BML feedback loop. However, through two rounds of pivots, startup L discovered new target customers and customer needs, and was able to establish a successful business model by repeatedly experimenting with MVPs with minimal effort and time. In other words, company L's case shows that it is essential to go through the customer-market validation stage at the beginning of the business, and that this should be done through an MVP method that does not waste the startup's time and resources. It also shows that it is necessary to abandon and pivot away from a product or service that customers do not want, even if it is technically superior and functionally complete.
Lastly, the study shows that the lean startup methodology is not limited to the software industry but can also be applied to the technology-based hardware industry. The findings of this study can be used as guidelines and methodologies for early-stage companies to minimize failures and to accelerate the process of establishing a business model, scaling up, and going global.


KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Recently, sentiment analysis methods have been widely used in many fields. As good examples, data-driven surveys are based on analyzing the subjectivity of text data posted by users, and market research is conducted by analyzing users' review posts to quantify users' reputation of a target product. The basic method of sentiment analysis is to use a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of many sentiment words is likely to differ across domains. For example, the sentiment word 'sad' indicates a negative meaning in many fields, but not in the movie domain. In order to perform accurate sentiment analysis, we need to build the sentiment dictionary for a given domain. However, such a method of building a sentiment lexicon is time-consuming, and without a general-purpose sentiment lexicon as seed data, various sentiment vocabularies are left out. To address this problem, several studies have constructed sentiment lexicons for specific domains based on 'OPEN HANGUL' and 'SentiWordNet', which are general-purpose sentiment lexicons. However, OPEN HANGUL is no longer being serviced, and SentiWordNet does not work well because of language differences in the process of converting Korean words into English words. Such restrictions limit the use of these general-purpose sentiment lexicons as seed data for building a domain-specific sentiment lexicon. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons.
The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to quickly construct the sentiment dictionary for a target domain. In particular, it constructs sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) through the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having either positive or negative meaning. Third, positive words and phrases are extracted from the glosses classified as positive, while negative words and phrases are extracted from the glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model is up to 89.45%. In addition, the sentiment dictionary is further extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603. Furthermore, we add sentiment information about frequently used coined words and emoticons that are used mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, a 2-gram, a phrase, or a sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, one recent study shows that the words in a sentiment dictionary can be used as features of deep learning models, resulting in sentiment analysis with higher accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features for improving the accuracy of deep learning models.
The proposed dictionary can be used as basic data for constructing the sentiment lexicon of a particular domain and as features of deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
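Applying a lexicon that mixes 1-gram and 2-gram entries, as KNU-KSL does, can be sketched as a longest-match lookup over tokens. The entries and scores below are illustrative, not actual KNU-KSL values.

```python
# Minimal sketch of scoring text with a sentiment dictionary containing
# 1-gram and 2-gram entries. Entries/scores are illustrative, not KNU-KSL.

lexicon = {
    ("thank", "you"): 2,   # 2-gram entry
    ("worthy",): 1,
    ("impressed",): 2,
    ("terrible",): -2,
}

def score(text):
    tokens = text.lower().split()
    total, i = 0, 0
    while i < len(tokens):
        # Prefer the longer (2-gram) match, then fall back to 1-grams.
        if tuple(tokens[i:i+2]) in lexicon:
            total += lexicon[tuple(tokens[i:i+2])]
            i += 2
        else:
            total += lexicon.get((tokens[i],), 0)
            i += 1
    return total

print(score("thank you i was impressed"))  # 2 + 2 = 4
```

A real application would first seed such a lexicon for the target domain, which is the fast-construction use case the dictionary is built for.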

Image Watermarking for Copyright Protection of Images on Shopping Mall (쇼핑몰 이미지 저작권보호를 위한 영상 워터마킹)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.147-157
    • /
    • 2013
  • With the advent of a digital environment that can be accessed anytime, anywhere through high-speed networks, the free distribution and use of digital content became possible. Ironically, this environment is giving rise to a variety of copyright infringements, and product images used in online shopping malls are pirated frequently. Whether shopping mall images are creative works is a controversial issue. In the Supreme Court's 2001 decision, advertising photographs of ham products were judged to be mere reproductions of the appearance of the objects, conveying product information without creative expression; nevertheless, the photographer's losses were recognized, and damages were estimated based on the typical cost of an advertising photo shoot. According to a 2003 Seoul District Court precedent, if the photographer's personality and creativity are expressed in the selection of the subject, the composition of the set, the control of the direction and amount of light, the camera angle, the shutter speed and shutter chance, and other methods of shooting, developing, and printing, the work should be protected by copyright law. For shopping mall images to receive copyright protection under the law, they must not simply convey the state of the product; effort is required so that the photographer's personality and creativity can be recognized. Accordingly, the cost of producing mall images increases, and the necessity for copyright protection becomes higher. The product images of online shopping malls have a very distinctive configuration, unlike general pictures such as portraits and landscape photos, and therefore general image watermarking techniques cannot satisfy their watermarking requirements.
Because the background of product images commonly used in shopping malls is white, black, or a gray-scale gradient, there is little space in which to embed a watermark, and these areas are very sensitive to even slight changes. In this paper, the characteristics of images used in shopping malls are analyzed, and a watermarking technology suitable for shopping mall images is proposed. The proposed technique divides a product image into smaller blocks, transforms the corresponding blocks by DCT (Discrete Cosine Transform), and then inserts the watermark information into the image by quantizing the DCT coefficients. Because uniform quantization of the DCT coefficients causes visible blocking artifacts, the proposed algorithm uses a weighted mask that quantizes the coefficients located at block boundaries finely and the coefficients located in the center area of the block coarsely. This mask improves the subjective visual quality as well as the objective quality of the images. In addition, in order to improve the safety of the algorithm, the blocks in which the watermark is embedded are selected randomly, and a turbo code is used to reduce the BER when extracting the watermark. The PSNR (Peak Signal to Noise Ratio) of shopping mall images watermarked by the proposed algorithm is 40.7~48.5 dB, and the BER (Bit Error Rate) after JPEG compression with QF = 70 is 0. This means the watermarked image is of high quality, and the algorithm is robust to the JPEG compression generally used at online shopping malls. For a 40% change in size and a 40-degree rotation, the BER is also 0. In general, shopping malls use compressed images with a QF higher than 90; because a pirated image is replicated from the original image, the proposed algorithm can identify copyright infringement in most cases. As the experimental results show, the proposed algorithm is suitable for shopping mall images with simple backgrounds.
However, future study should be carried out to enhance the robustness of the proposed algorithm, because some robustness is lost after the mask process.
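Embedding a watermark bit by quantizing a DCT coefficient can be sketched as quantization index modulation. The step size below is an illustrative assumption; the paper's weighted mask would apply a finer step near block boundaries and a coarser one at the block centre.

```python
# Minimal sketch of embedding one watermark bit into a DCT coefficient by
# quantization (quantization index modulation). The step size (8.0) is
# illustrative, not the paper's actual parameter.

def embed_bit(coeff, bit, step=8.0):
    """Quantize coeff so that the parity of the quantization index
    encodes the watermark bit (0 -> even index, 1 -> odd index)."""
    q = round(coeff / step)
    if q % 2 != bit:
        q += 1
    return q * step

def extract_bit(coeff, step=8.0):
    return int(round(coeff / step)) % 2

c = embed_bit(37.3, 1)       # embed bit 1 into a coefficient
assert extract_bit(c) == 1   # recovered exactly; JPEG adds noise in practice
```

In the paper's scheme the surviving noise after JPEG compression is further handled by the turbo code, which is why the reported BER at QF = 70 is 0.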

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We pursued two distinct approaches. The first is to use application specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture; here, we use a quantitative technique developed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. It achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules given the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so modifying a processor in this way to create an embedded processor for fuzzy control appears very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES
                  MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
6000 inferences   125 s                  49 s                        0.0038 s
1 inference       20.8 ms                8.2 ms                      6.4 ㎲
FLIPS             48                     122                         156,250
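The max-min composition and centroid defuzzification described above can be sketched in software. Fuzzy sets are sampled as short arrays here (the chip used 64-element arrays; 8 elements for brevity), and the membership values and rules are illustrative.

```python
# Minimal sketch of Mamdani max-min inference with centroid
# defuzzification over sampled fuzzy sets. Sets and rules are
# illustrative; the chip used 64-element arrays.

def mamdani(rules, inputs, n=8):
    """rules: list of (antecedent sets, consequent set). Firing strength
    is the min over antecedent memberships at the crisp input indices;
    the output set is the max over clipped consequents (max-min)."""
    out = [0.0] * n
    for antecedents, consequent in rules:
        strength = min(a[x] for a, x in zip(antecedents, inputs))
        for i in range(n):
            out[i] = max(out[i], min(strength, consequent[i]))
    return out

def centroid(fuzzy_set):
    num = sum(i * m for i, m in enumerate(fuzzy_set))
    den = sum(fuzzy_set)
    return num / den if den else 0.0

low  = [1.0, 0.8, 0.4, 0.1, 0.0, 0.0, 0.0, 0.0]
high = [0.0, 0.0, 0.0, 0.1, 0.4, 0.8, 1.0, 1.0]
rules = [([low], high), ([high], low)]   # IF x is LOW THEN y is HIGH, etc.
out = mamdani(rules, inputs=[1])         # crisp input at index 1
crisp = centroid(out)                    # defuzzified output index
```

The min and max calls inside the inner loop are exactly the operations the proposed specialized instructions would accelerate on a RISC processor.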


Development of Topic Trend Analysis Model for Industrial Intelligence using Public Data (텍스트마이닝을 활용한 공개데이터 기반 기업 및 산업 토픽추이분석 모델 제안)

  • Park, Sunyoung;Lee, Gene Moo;Kim, You-Eil;Seo, Jinny
    • Journal of Technology Innovation
    • /
    • v.26 no.4
    • /
    • pp.199-232
    • /
    • 2018
  • There are increasing needs for understanding business management environments through big data analysis at the industrial and corporate levels. Research using company disclosure information, which comprehensively covers a company's business performance and future plans, is getting attention. However, there is limited research on developing applicable analytical models leveraging such corporate disclosure data, due to its unstructured nature. This study proposes a text-mining-based analytical model for industry- and firm-level analyses using publicly available company disclosure data. Specifically, we apply the LDA topic model and the word2vec word embedding model to U.S. SEC data from publicly listed firms and analyze the trends of business topics at the industrial and corporate levels. Using LDA topic modeling on SEC EDGAR 10-K documents, industry-wide management topics are identified. To compare topic-trend patterns across industries, the software and hardware industries are compared over the past 20 years. Changes of management subjects at the firm level are also observed by comparing two companies in the software industry. The changes in topic trends provide a lens for identifying declining and growing management subjects at the industrial and firm levels. By mapping companies and products (or services) through dimension reduction, using the word2vec word embedding model and principal component analysis of 10-K documents at the firm level in the software industry, companies and products (services) with similar management subjects are identified, along with their changes over the decades. By suggesting a methodology for developing analysis models based on public management data at the industrial and corporate levels, this study contributes practical methodological ground for identifying changes in management subjects.
However, further research is required to provide a more microscopic analytical model of the relation between technology management strategy and management performance, for example with regard to various patterns of management topics such as frequent changes of management subjects or their momentum. More studies are also needed to develop a competitive-context analysis model of product (service) portfolios across firms.
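The idea of tracking a management topic's weight across filing years can be sketched with a toy stand-in for a topic model. A real analysis would use LDA topic proportions over 10-K text; here a simple keyword frequency stands in for the topic weight, and the filings below are fabricated examples, not actual SEC text.

```python
from collections import Counter

# Toy illustration of a topic trend over filing years. A real pipeline
# would use LDA topic proportions; keyword frequency stands in here.
# The filing texts are fabricated examples, not actual 10-K content.

filings = {
    2001: "hardware sales hardware manufacturing software license",
    2011: "software cloud subscription software services cloud",
}

def topic_weight(text, topic_terms):
    counts = Counter(text.split())
    total = sum(counts.values())
    return sum(counts[t] for t in topic_terms) / total

trend = {y: topic_weight(t, {"software", "cloud"}) for y, t in filings.items()}
# A rising weight over the years indicates a growing management subject.
```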

Development of Drawing & Specification Management System Using 3D Object-based Product Model (3차원 객체기반 모델을 이용한 설계도면 및 시방서관리 시스템 구축)

  • Kim Hyun-nam;Wang Il-kook;Chin Sang-yoon
    • Korean Journal of Construction Engineering and Management
    • /
    • v.1 no.3 s.3
    • /
    • pp.124-134
    • /
    • 2000
  • In construction projects, design information, which should contain accurate product information in a systematic way, needs to be applicable throughout the life-cycle of a project. However, paper-based 2D drawings and relevant documents have difficulty communicating and sharing the owner's and architect's intentions and requirements effectively, and in building a corporate knowledge base through ongoing projects, due to the lack of interoperability between task- or function-oriented software and the burden of handling massive amounts of information. Meanwhile, computer and information technologies are developing so rapidly that practitioners find it hard to adopt them in the industry efficiently. The 3D modeling capabilities of CAD systems have developed enormously and enable users to associate 3D models with other relevant information. However, it still requires a great deal of effort and cost to have all the design information represented in a CAD system, and the resulting sophisticated system is difficult to manage. This research focuses on the transition period from 2D-based to 3D-based design information management, which means the co-existence of 2D- and 3D-based management. It proposes a compound 2D/3D CAD system that presents general design information using 3D models while integrating 2D CAD drawings for detailed design information, and develops an integrated information management system for design and specification by associating 2D drawings with 3D models, where the 2D drawings represent detailed designs and parts that are hard to express as 3D objects. To do this, related management processes were analyzed to build an information model, which in turn became the basis of the integrated information management system.
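The association between 3D objects, 2D drawings, and specification sections described above can be sketched as a simple data model. The class names, fields, and sample documents are hypothetical illustrations, not the actual system schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the integrated data model: a 3D object carries
# general design information and links to 2D drawings (detailed design)
# and specification sections. Names and fields are illustrative only.

@dataclass
class Drawing2D:
    number: str
    title: str

@dataclass
class SpecSection:
    code: str
    title: str

@dataclass
class Object3D:
    obj_id: str
    name: str
    drawings: list = field(default_factory=list)   # detailed 2D design
    specs: list = field(default_factory=list)      # linked spec sections

wall = Object3D("W-101", "Exterior wall")
wall.drawings.append(Drawing2D("A-201", "Wall section detail"))
wall.specs.append(SpecSection("04200", "Unit masonry"))

# Query: all documents associated with one 3D object.
docs = [d.number for d in wall.drawings] + [s.code for s in wall.specs]
```

Keeping the 2D drawings and specifications as linked records of the 3D object, rather than separate files, is what makes the design information navigable from a single model during the 2D/3D co-existence period.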
