• Title/Summary/Keyword: 파스

Search Results: 246

Global Entrepreneurial Strategy of Korean Cuisine for Advancing into US Dine out Market (미국외식시장에서의 한식 글로벌 창업전략)

  • Park, Jaewhan;Kim, Jae Hong
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.11 no.3 / pp.169-176 / 2016
  • Korean cuisine is attracting worldwide attention, thanks to growing interest in Korean culture driven by the success of K-pop stars. As global food preferences shift from "fast food" to "slow food", the preference for Korean cuisine, well known for its healthiness, is growing. Nevertheless, in terms of global consumers' preference, Korean cuisine is still rated behind Japanese sushi, Chinese dim sum, Italian pizza and pasta, Vietnamese rice noodles, and even Indonesian and Middle Eastern foods, and it has not achieved a breakthrough despite this attention. Previous studies suggested that the cause is the failure of localization strategies that adapt the traditional taste and aroma to foreigners' preferences. Through case studies of Korean food businesses in the US that succeeded through localization, this study proposes the following global entrepreneurial strategy for Korean food in the US dining-out market: first, a sales strategy that targets local people, rather than Koreans, as the main customers; second, a customization strategy that meets local standards rather than preserving the traditional way; and finally, a committed entrepreneurship.

A Quantitative Analysis of Classification Classes and Classified Information Resources of Directory (디렉터리 서비스 분류항목 및 정보자원의 계량적 분석)

  • Kim, Sung-Won
    • Journal of Information Management / v.37 no.1 / pp.83-103 / 2006
  • This study analyzes the classification schemes and classified information resources of the directory services provided by major web portals to complement keyword-based retrieval. Specifically, it quantitatively analyzes the topic categories, the information resources by subject, and the information resources classified under the topic categories of three directories: Yahoo, Naver, and Empas. The analysis reveals some differences among the directory services. Overall, the directories show different ratios of referred categories to original categories depending on the subject area, with the format-based categories showing the highest proportion of referred categories. In terms of the total amount of classified information resources, Yahoo has the largest number, and the directories differ in the amount of resources by subject area. A quantitative analysis of the resources classified under a specific category is performed on the 'News & Media' class; judging by the number of information resources categorized, Naver and Empas contain overly specific categories compared to Yahoo. Comparing the depth of the categories assigned by the three directories to the same information resources, Yahoo on average assigns categories one level more specific than the other two directories.
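
The referred-to-original category ratio compared above can be sketched as follows; the per-subject counts are hypothetical, not the paper's data:

```python
# Sketch of the referred-to-original category ratio used in the comparison.
# All counts below are made up for illustration.

def referred_ratio(original: int, referred: int) -> float:
    """Fraction of a subject area's categories that are cross-references."""
    total = original + referred
    return referred / total if total else 0.0

# Hypothetical per-subject counts for one directory service.
counts = {
    "News & Media":  {"original": 120, "referred": 30},
    "Entertainment": {"original": 200, "referred": 80},
    "By Format":     {"original": 40,  "referred": 60},  # a format-based class
}

ratios = {subject: referred_ratio(c["original"], c["referred"])
          for subject, c in counts.items()}
```

With these toy numbers the format-based class shows the highest referred-category proportion, mirroring the pattern the study reports.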

The Selective p-Distribution for Adaptive Refinement of L-Shaped Plates Subjected to Bending (휨을 받는 L-형 평판의 적응적 세분화를 위한 선택적 p-분배)

  • Woo, Kwang-Sung;Jo, Jun-Hyung;Lee, Seung-Joon
    • Journal of the Computational Structural Engineering Institute of Korea / v.20 no.5 / pp.533-541 / 2007
  • The Zienkiewicz-Zhu (Z/Z) error estimate is slightly modified for hierarchical p-refinement and is then applied to L-shaped plates subjected to bending to demonstrate its effectiveness. An adaptive procedure in finite element analysis is presented, using p-refinement of meshes in conjunction with an a posteriori error estimator based on the superconvergent patch recovery (SPR) technique. The modified Z/Z error estimate differs from the conventional approach in that high-order shape functions based on integrals of Legendre polynomials are used to interpolate displacements within an element, while basis functions of the same order, based on the Pascal triangle, are used to interpolate the recovered stresses. The least-squares method is used to fit a polynomial to the stresses computed at the sampling points. A strategy for finding a nearly optimal distribution of polynomial degrees on a fixed finite element mesh is discussed, such that a particular element is refined automatically, by increasing p-levels non-uniformly or selectively, until an acceptable level of accuracy is obtained. It is noted that the error decreases rapidly as the number of degrees of freedom increases, and that the sequences of p-distributions obtained by the proposed error indicator closely follow the optimal trajectory.
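
The least-squares step of the recovery can be sketched in one dimension; the sampling-point coordinates and stress values below are illustrative, not from the paper, and a degree-1 polynomial stands in for the general case:

```python
# Minimal 1-D sketch of the least-squares step: fit a polynomial (here
# degree 1, via the closed-form normal equations) to stresses sampled at
# superconvergent points. The data are hypothetical.

def fit_line(xs, ys):
    """Closed-form least-squares fit y = a + b*x."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical sampling-point coordinates and recovered stresses.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [10.0, 12.4, 15.1, 17.6, 20.0]
a, b = fit_line(xs, ys)
smoothed = [a + b * x for x in xs]  # smoothed stress field at the points
```

In the actual SPR procedure the fit is done patch-by-patch in 2-D with the polynomial order matched to the element's p-level; this sketch only shows the fitting idea.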

An Analysis of Bed Change Characteristics by Bed Protection Work (바닥보호공 설치에 따른 하상변동 특성 분석)

  • Son, Ah Long;Kim, Byung Hyun;Moon, Bo Ram;Han, Kun Yeun
    • KSCE Journal of Civil and Environmental Engineering Research / v.35 no.4 / pp.821-834 / 2015
  • This study analyzes flow and bed change characteristics in the presence of bed protection work built immediately downstream of a weir to protect the river bed from scouring. The study area is a 37 km reach of the Nakdong river from Hyunpoong station to Masuwon station, including the Hapcheon-Changryoung multi-function weir. The CCHE2D model is calibrated and validated against the flow and bed changes observed during Typhoon Kompasu in 2010. Three simulation conditions are set up: Case 1 is the natural channel without a weir; Case 2 adds a weir to the natural channel; Case 3 adds a weir with bed protection. Flood frequencies (50, 100, and 200 yr) are applied to each scenario to analyze the effects of the bed protection work. Compared with Case 1, Case 2 shows an increased sedimentation rate downstream of the fixed gate and sluice-type gate, and an increased scouring rate downstream of the lift-type gate. With bed protection, no scouring occurs immediately downstream of the weir (within about 30 m), but a larger amount of sediment than in Case 1 accumulates farther downstream (beyond about 60 m), where the bed protection is not installed. These simulation results can help predict bed changes and support the operation and management of the weir.

An Analysis of Success Factors in Internet Shopping Malls (인터넷 쇼핑몰구축의 성공요인에 대한 분석)

  • 진영배;권영식
    • Journal of the Korea Computer Industry Society / v.2 no.12 / pp.1495-1504 / 2001
  • The main purpose of this research is to suggest strategic directions for Internet shopping malls based on an analysis of their success factors. To this end, the success factors and variables were defined through a literature survey. Sample malls were selected from the shopping malls registered in Yahoo, Lycos, Empas, and Hanmir, regardless of type and class, and an online survey of their operation was conducted. The following results were deduced. First, there are five success factors for Internet shopping malls: the effectiveness of customer management, the effectiveness of marketing, the competitiveness of product sales, the convenience of use, and the credibility of products. Second, the effectiveness of marketing is positively related to the number of members, visitors, and sales. Third, the credibility of products is negatively related to the number of members, visitors, and sales. Finally, the numbers of members and visitors are positively related to sales. These results raise highly relevant managerial issues; the implications of the study are discussed and further research directions are proposed.

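
The kind of association the study reports (e.g., visitors versus sales) is a simple correlation; a sketch with made-up monthly figures:

```python
# Sketch of a Pearson correlation between two success variables.
# The visitor and sales figures are hypothetical, not survey data.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

visitors = [1200, 3400, 560, 7800, 2300]  # hypothetical monthly visitors
sales    = [150,  420,  80,  900,  310]   # hypothetical monthly sales
r = pearson(visitors, sales)              # close to +1: positive relation
```

A positive r, as here, corresponds to the study's "positively related" findings; a negative r to the credibility finding.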
Growth and Anthocyanin Content of Lettuce as Affected by Artificial Light Source and Photoperiod in a Closed-type Plant Production System (밀폐형 식물생산시스템에서 인공광원과 광조사 시간에 따른 상추의 생장 및 안토시아닌 함량)

  • Park, Ji Eun;Park, Yoo Gyeong;Jeong, Byoung Ryong;Hwang, Seung Jae
    • Horticultural Science & Technology / v.30 no.6 / pp.673-679 / 2012
  • This study was conducted to examine the effect of artificial light source and photoperiod on the growth of leaf lettuce (Lactuca sativa L.) 'Seonhong Jeokchukmyeon' in a closed-type plant production system. Seedlings were grown under 3 light sources, a fluorescent lamp (FL, Philips Co. Ltd., the Netherlands), WL #1 (Hepas Co. Ltd., Korea), and WL #2 (FC Poibe Co., Ltd., Korea), each with 3 photoperiods: 12/12, 18/6, and 24/0 (Light/Dark). An irradiance spectrum analysis showed that the FL has various peaks in the 400-700 nm range, while WL #1 and WL #2 each have a single monochromatic peak, at 450 and 550 nm, respectively. The greatest plant height and fresh and dry weights were obtained under the 24/0 (Light/Dark) photoperiod, which also promoted vegetative growth of the leaf area. The length of the longest root, number of leaves, fresh weight, and total anthocyanin content were greater under the FL than under either WL #1 or #2. The greatest chlorophyll fluorescence (Fv/Fm) was found in the 12/12 (Light/Dark) photoperiod under the FL treatment. The energy use efficiency of the LED sources was about 35-46% higher than that of the FL. The results suggest that LEDs could substitute for fluorescent lamps in lettuce cultivation in a plant factory system.

Analyzing dependency of Korean subordinate clauses using a composite kernel (복합 커널을 사용한 한국어 종속절의 의존관계 분석)

  • Kim, Sang-Soo;Park, Seong-Bae;Park, Se-Young;Lee, Sang-Jo
    • Korean Journal of Cognitive Science / v.19 no.1 / pp.1-15 / 2008
  • Analyzing dependency relations among clauses is one of the most critical parts of parsing Korean sentences because it generates severe ambiguities. To obtain good results, this task has been the target of various machine learning methods, including SVMs. In particular, kernel methods are usually used to analyze dependency relations and are reported to show high performance. This paper proposes a clause representation and a composite kernel for dependency analysis of Korean clauses. The proposed approach adopts a composite kernel, consisting of a parse tree kernel and a linear kernel, to obtain the similarity among clauses: the parse tree kernel handles structural information, while the linear kernel handles lexical information. The proposed representation is defined in three forms: a representation of the layers within a clause, a representation of the relation between clauses, and a representation of the inner clause. The experiment proceeds in two steps, first on the relation between clauses and then on the inner clauses. The experimental results show that the proposed representation achieves 83.31% accuracy.

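
The composite-kernel idea can be sketched as a convex combination of a structural and a lexical similarity; the toy tree kernel below (a shared-subtree count) and the clause data are illustrative stand-ins, not the paper's parse-tree kernel:

```python
# Minimal sketch of a composite kernel: a convex combination of a
# structural (tree) kernel and a linear (lexical) kernel. The tree kernel
# here is a toy shared-production count; all data are hypothetical.

def linear_kernel(u, v):
    """Dot product over lexical feature vectors."""
    return sum(a * b for a, b in zip(u, v))

def toy_tree_kernel(subtrees_a, subtrees_b):
    """Count of productions shared by two parse trees."""
    return len(set(subtrees_a) & set(subtrees_b))

def composite_kernel(u, v, subtrees_a, subtrees_b, alpha=0.5):
    """Weighted combination of structural and lexical similarity."""
    return (alpha * toy_tree_kernel(subtrees_a, subtrees_b)
            + (1 - alpha) * linear_kernel(u, v))

# Hypothetical clause representations.
lex_a, lex_b = [1, 0, 1, 1], [1, 1, 0, 1]
sub_a = ["VP->V NP", "NP->N", "S->NP VP"]
sub_b = ["VP->V NP", "NP->Det N", "S->NP VP"]
k = composite_kernel(lex_a, lex_b, sub_a, sub_b, alpha=0.6)
```

The weight alpha trades off structure against lexical overlap; in a kernel machine such as an SVM, this combined score replaces a single-source similarity.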
Bioequivalence of GLUNATE® Tablet to PASTIC® Tablet (nateglinide 90 mg) (파스틱 정®(나테글리니드 90 mg)에 대한 글루나테 정®의 생물학적 동등성)

  • Tak, Sung-Kwon;Lee, Jin-Sung;Choi, Sang-Joon;Seo, Ji-Hyung;Lee, Myung-Jae;Kang, Jong-Min;Ryu, Ju-Hee;Hong, Seung-Jae;Yim, Sung-Vin;Lee, Kyung-Tae
    • Journal of Pharmaceutical Investigation / v.39 no.2 / pp.141-147 / 2009
  • The purpose of this study was to evaluate the bioequivalence of two nateglinide tablets, PASTIC® (ILDONG Pharm. Co., Ltd., Seoul, Korea; reference drug) and GLUNATE® (ILHWA Co., Ltd., Seoul, Korea; test drug), according to the guidelines of the Korea Food and Drug Administration (KFDA). Thirty-five healthy male volunteers, 23.1 ± 2.3 years in age and 69.2 ± 8.8 kg in body weight, were divided into two groups, and a randomized 2×2 cross-over study was employed. After a tablet containing 90 mg of nateglinide was orally administered, blood was taken at predetermined time intervals over a period of 8 hr, and plasma concentrations of nateglinide were monitored using LC-MS/MS. Pharmacokinetic parameters such as AUCt (the area under the plasma concentration-time curve from time 0 to 8 hr), Cmax (maximum plasma drug concentration), and Tmax (time to reach Cmax) were calculated, and an analysis of variance (ANOVA) was performed on the logarithmically transformed AUCt and Cmax and the untransformed Tmax. The 90% confidence intervals of the AUCt ratio and the Cmax ratio for GLUNATE®/PASTIC® were log 1.0782 to log 1.1626 and log 0.9621 to log 1.1679, respectively. Since these values fall within the acceptable bioequivalence interval of log 0.80 to log 1.25 recommended by the KFDA, it was concluded that the GLUNATE® tablet is bioequivalent to the PASTIC® tablet in terms of both rate and extent of absorption.
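
The acceptance check applied above can be sketched directly from the abstract's numbers; the CI computation itself (ANOVA on log-transformed parameters) is not reproduced here:

```python
# Sketch of the bioequivalence acceptance check: the 90% CI of the
# log-transformed ratio must lie entirely within [log 0.80, log 1.25].
# CI bounds are the ones quoted in the abstract.
from math import log

def bioequivalent(ci_low, ci_high, lo=log(0.80), hi=log(1.25)):
    """True if the whole confidence interval sits in the acceptance range."""
    return lo <= ci_low and ci_high <= hi

# AUCt ratio CI from the abstract: log 1.0782 to log 1.1626
auc_ok = bioequivalent(log(1.0782), log(1.1626))
# Cmax ratio CI from the abstract: log 0.9621 to log 1.1679
cmax_ok = bioequivalent(log(0.9621), log(1.1679))
```

Both checks pass, consistent with the study's conclusion of bioequivalence in rate and extent of absorption.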

Design and Implementation of a Spatial-Operation-Trigger for Supporting the Integrity of Meet-Spatial-Objects (상접한 공간 객체의 무결성 지원을 위한 공간 연산 트리거의 설계 및 구현)

  • Ahn, Jun-Soon;Cho, Sook-Kyoung;Chung, Bo-Hung;Lee, Jae-Dong;Bae, Hae-Young
    • Journal of KIISE:Computing Practices and Letters / v.8 no.2 / pp.127-140 / 2002
  • In a spatial database system, semantic integrity should be supported to maintain data consistency. In the real world, spatial objects in a boundary layer must always meet their neighboring objects and cannot hold the same name. Because this characteristic is an implicit real-world concept, whenever it is violated by update operations on spatial objects, the integrity of the layer must be restored. This paper proposes a spatial-operation trigger for supporting the integrity of spatial objects. The proposed trigger is defined based on SQL-3 and is executed when a constraint is violated, following a three-step execution strategy: first, for the layer being updated, the spatial-data and aspatial-data triggers are executed respectively, with the trigger checking whether the operation updates only spatial data, only aspatial data, or both, and choosing the execution strategy accordingly; finally, the aspatial-data trigger for the other layers is executed. Executed in these three steps, the spatial-operation trigger preserves the meet property of spatial objects, providing semantic integrity and user convenience through automatic correcting operations.

Automated Detecting and Tracing for Plagiarized Programs using Gumbel Distribution Model (굼벨 분포 모델을 이용한 표절 프로그램 자동 탐색 및 추적)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • The KIPS Transactions:PartA / v.16A no.6 / pp.453-462 / 2009
  • Studies on software plagiarism detection, prevention, and judgment have become widespread due to growing interest in, and the importance of, protecting and authenticating software intellectual property. Many previous studies focused on comparing all pairs of submitted codes using attribute counting, token patterns, program parse trees, and similarity-measuring algorithms. It is important to provide a clear-cut model for distinguishing plagiarism from collaboration. This paper proposes a source code clustering algorithm using a probability model based on an extreme value distribution. First, we propose an asymmetric distance measure pdist(Pa, Pb) to measure the similarity of programs Pa and Pb. We then construct a Plagiarism Direction Graph (PDG) for a given program set, using pdist(Pa, Pb) as edge weights, and transform the PDG into a Gumbel Distance Graph (GDG) model, since we found that the distribution of pdist(Pa, Pb) scores is similar to the well-known Gumbel distribution. Second, we newly define pseudo-plagiarism, a sort of virtual plagiarism forced by a very strong functional requirement in the specification. We conducted experiments with 18 groups of programs (more than 700 source codes) collected from the ICPC (International Collegiate Programming Contest) and KOI (Korean Olympiad in Informatics) programming contests. The experiments showed that most plagiarized codes could be detected with high sensitivity and that our algorithm successfully separates real plagiarism from pseudo-plagiarism.
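
Fitting a Gumbel model to similarity scores can be sketched with method-of-moments estimates; the scores below are synthetic, not contest data, and this is not the paper's exact fitting procedure:

```python
# Sketch of fitting a Gumbel distribution to pairwise-similarity scores by
# the method of moments. The scores are synthetic stand-ins for pdist values.
from math import pi, sqrt, exp
from statistics import mean, stdev

EULER_GAMMA = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel(scores):
    """Method-of-moments estimates of Gumbel location mu and scale beta."""
    beta = stdev(scores) * sqrt(6) / pi
    mu = mean(scores) - EULER_GAMMA * beta
    return mu, beta

def gumbel_cdf(x, mu, beta):
    """P(X <= x) for a Gumbel(mu, beta) variable."""
    return exp(-exp(-(x - mu) / beta))

# Synthetic pdist-like similarity scores for ordinary (non-plagiarized) pairs.
scores = [0.12, 0.18, 0.21, 0.25, 0.30, 0.34, 0.41, 0.55]
mu, beta = fit_gumbel(scores)
# A score far in the upper tail has a tiny survival probability under the
# fitted model, so such a pair would be flagged as plagiarism-like.
p_high = 1 - gumbel_cdf(0.9, mu, beta)
```

The fitted model turns a raw similarity score into a tail probability, which is the kind of probabilistic cutoff that lets a clear-cut threshold separate plagiarism from ordinary resemblance.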