• Title/Summary/Keyword: Inverse Logistics

Search Results: 6

PKCθ-Mediated PDK1 Phosphorylation Enhances T Cell Activation by Increasing PDK1 Stability

  • Kang, Jung-Ah;Choi, Hyunwoo;Yang, Taewoo;Cho, Steve K.;Park, Zee-Yong;Park, Sung-Gyoo
    • Molecules and Cells
    • /
    • v.40 no.1
    • /
    • pp.37-44
    • /
    • 2017
  • PDK1 is essential for T cell receptor (TCR)-mediated activation of NF-κB, and PDK1-induced phosphorylation of PKCθ is important for TCR-induced NF-κB activation. However, the inverse regulation of PDK1 by PKCθ during T cell activation has not been investigated. In this study, we found that PKCθ is involved in human PDK1 phosphorylation and that its kinase activity is crucial for this phosphorylation. Mass spectrometry analysis of wild-type PKCθ or of a kinase-inactive form of PKCθ revealed that PKCθ induced phosphorylation of human PDK1 at Ser-64. This PKCθ-induced PDK1 phosphorylation positively regulated T cell activation and TCR-induced NF-κB activation. Moreover, phosphorylation of human PDK1 at Ser-64 increased the stability of the human PDK1 protein. These results suggest that Ser-64 is an important phosphorylation site that is part of a positive feedback loop for human PDK1-PKCθ-mediated T cell activation.

Modeling of Magnetic Levitation Logistics Transport System Using Extreme Learning Machine (Extreme Learning Machine을 이용한 자기부상 물류이송시스템 모델링)

  • Lee, Bo-Hoon;Cho, Jae-Hoon;Kim, Yong-Tae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.1
    • /
    • pp.269-275
    • /
    • 2013
  • In this paper, a new modeling method for a magnetic levitation (Maglev) system using an extreme learning machine (ELM) is proposed. Linearized methods based on Taylor series expansion have been used for modeling Maglev systems; however, such numerical methods have drawbacks when dealing with the highly nonlinear components of a Maglev system. To overcome this problem, we propose a new modeling method for a Maglev system with electromagnetic suspension, based on ELM, which trains faster than conventional neural networks. In the proposed method, the initial input weights and hidden biases are chosen randomly, and the output weights are determined analytically using the Moore-Penrose generalized inverse matrix. Experimental results show that the proposed method achieves better modeling performance for the Maglev system than the previous numerical method.
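The analytic training step described in the abstract above (random input weights and hidden biases, output weights solved via the Moore-Penrose pseudoinverse) can be sketched in a few lines. This is a generic ELM fitted to a toy nonlinear function, not the paper's Maglev model or data:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Single-hidden-layer ELM: input weights and hidden biases are
    random; output weights are solved analytically with the
    Moore-Penrose pseudoinverse (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                     # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Fit a toy nonlinear function (a stand-in for the highly nonlinear
# Maglev dynamics, which are not reproduced here).
X = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * X[:, 0])
W, b, beta = elm_train(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because the output weights come from a single least-squares solve rather than gradient descent, training is fast, which is the advantage over conventional neural networks that the abstract emphasizes.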

The Balancing of Disassembly Line of Automobile Engine Using Genetic Algorithm (GA) in Fuzzy Environment

  • Seidi, Masoud;Saghari, Saeed
    • Industrial Engineering and Management Systems
    • /
    • v.15 no.4
    • /
    • pp.364-373
    • /
    • 2016
  • Disassembly is one of the important activities in handling a product at its end of life (EOL). It is defined as a systematic technique for dividing a product into its constituent elements, segments, sub-assemblies, and other groups. In this article, we consider a Fuzzy Disassembly Line Balancing Problem (FDLBP) with multiple objectives, which requires allocating disassembly tasks to an ordered set of disassembly workstations. Task processing times are fuzzy numbers with triangular membership functions. Four objectives are pursued: (1) minimizing the number of disassembly workstations; (2) minimizing the sum of idle time over all workstations while ensuring similar idle times across workstations; (3) maximizing the preference for removing hazardous parts as early as possible; and (4) maximizing the preference for removing high-demand parts before low-demand parts. The proposed model was first solved with GAMS software and then with a genetic algorithm (GA) in MATLAB. The model was applied to balance an automotive engine disassembly line in a fuzzy environment. The fuzzy results from the two programs were compared by a ranking technique using the fuzzy mean and fuzzy dispersion. The comparison shows that the genetic algorithm solved in MATLAB is an efficient and effective approach to the FDLBP in terms of solution quality and determination of the optimal sequence.
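The triangular fuzzy processing times and the mean/dispersion ranking mentioned in the abstract above can be illustrated with a minimal sketch. The defuzzification (centroid) and tie-breaking rules here are common conventions and may differ from the paper's exact scheme:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TriFuzzy:
    """Triangular fuzzy number (a <= b <= c), as used for the
    fuzzy task processing times."""
    a: float  # lower bound
    b: float  # most likely value (peak of the membership function)
    c: float  # upper bound

    def mean(self) -> float:
        # centroid of a triangular membership function
        return (self.a + self.b + self.c) / 3.0

    def dispersion(self) -> float:
        # variance of the triangular distribution, used as a spread measure
        a, b, c = self.a, self.b, self.c
        return (a * a + b * b + c * c - a * b - a * c - b * c) / 18.0

def rank_results(results):
    """Rank fuzzy objective values: smaller mean is better, ties broken
    by smaller dispersion (one common mean/dispersion ranking scheme)."""
    return sorted(results, key=lambda t: (t.mean(), t.dispersion()))

# hypothetical fuzzy objective values from two solvers and a variant
best = rank_results([TriFuzzy(4, 6, 8), TriFuzzy(3, 5, 9), TriFuzzy(3, 5, 7)])[0]
```

This is the kind of comparison the abstract describes when ranking the GAMS and MATLAB results against each other.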

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some on-line documents are accompanied by a list of keywords specified by the authors, in an effort to guide users by facilitating the filtering process. A set of keywords is thus often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, could also benefit from keywords but rarely have them. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: the keyword assignment approach and the keyword extraction approach. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Here, automatic keyword generation is treated as a classification task, and keywords are commonly extracted using supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords that are not included in it. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we have adopted the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword's weight; (2) preprocess and parse a target document that has no keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with the highest similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The first is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability. IVSM also shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
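Steps (2)-(5) of the assignment process described above can be sketched as follows. The keyword sets and document terms are hypothetical, and the weighting is plain term frequency rather than the paper's exact weighting scheme:

```python
import math
from collections import Counter

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_keywords(doc_terms, keyword_sets, top_k=5):
    """Vectorize the document by term frequency, score each candidate
    keyword set by cosine similarity against it, and return the
    highest-scoring keywords."""
    doc_vec = Counter(doc_terms)
    scored = [(kw, cosine(doc_vec, Counter(terms)))
              for kw, terms in keyword_sets.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [kw for kw, s in scored[:top_k] if s > 0]

# hypothetical keyword sets (keyword -> representative terms)
keyword_sets = {
    "logistics": ["port", "cargo", "shipping", "distribution"],
    "machine learning": ["training", "model", "classification"],
}
doc = ["port", "cargo", "cargo", "terminal", "shipping"]
top = assign_keywords(doc, keyword_sets, top_k=1)
```

Note that, unlike extraction, this can assign a keyword ("logistics") even if the literal term never appears in the document, which is the key property the abstract argues for.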

Development and Exploration of Safety Performance Functions Using Multiple Modeling Techniques : Trumpet Ramps (다양한 통계 기법을 활용한 안전성능함수 개발 및 비교 연구 : 트럼펫형 램프를 중심으로)

  • Yang, Samgyu;Park, Juneyoung;Kwon, Kyeongjoo;Lee, Hyunsuk
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.20 no.5
    • /
    • pp.35-44
    • /
    • 2021
  • In recent times, several studies have focused on crashes occurring on the main segments of highways. However, there is a dearth of research on traffic safety at other highway facilities, especially ramp areas. According to the Korea Expressway Corporation's Expressway Information Service, 6,717 crashes occurred on ramps in the five years from 2015 to 2019, accounting for about 15% of all highway accidents. In this study, simple and full safety performance functions (SPFs) were evaluated and explored using different statistical distributions (i.e., Poisson-Gamma (PG) and Poisson-Inverse Gaussian (PIG)) and techniques (i.e., fixed effects (FE) and random effects (RE)) to provide more accurate crash prediction models for highway ramp sections. Data on traffic and roadway geometric characteristics were collected from various systems and, with extensive effort, using a street-view application. The results showed that the PIG models generally produce more accurate crash predictions, and that the RE models outperformed the FE models for both simple and full SPFs. The findings offer practitioners working on Korea Expressway Corporation expressways a dependable reference for understanding and enhancing traffic safety in ramp areas, based on accurate crash prediction models and empirical evidence.
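The general shape of a simple SPF (predicted crash frequency as a function of traffic volume and segment length) can be sketched as below. The exponential-power functional form is the conventional one for SPFs, but the coefficients are hypothetical placeholders, not the paper's fitted PG or PIG estimates:

```python
import math

def spf(aadt, length_km, coef):
    """Conventional simple SPF form:
    predicted crashes/year = exp(b0) * AADT^b1 * L^b2.
    The distributional assumption (PG vs. PIG) affects how the
    coefficients are estimated, not this prediction formula."""
    b0, b1, b2 = coef
    return math.exp(b0) * (aadt ** b1) * (length_km ** b2)

# predicted annual crashes for a hypothetical ramp section
pred = spf(aadt=12000, length_km=0.8, coef=(-7.5, 0.85, 0.6))
```

A full SPF, as evaluated in the paper, would extend the exponent with additional geometric covariates (e.g., curvature or lane width terms) multiplied onto the same base form.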

A Comparative Study on the Productivity by Characteristics of Tenant Companies in Busan New Port Distripark (부산항 신항 배후단지 입주업체 특성별 생산성 비교연구)

  • Kim, Yang-Wook;Cha, Jae-Ung;Kim, Yul-Seong
    • Journal of Navigation and Port Research
    • /
    • v.44 no.6
    • /
    • pp.509-516
    • /
    • 2020
  • Korea has gradually been developing port distriparks at major domestic trade ports to diversify their functions and create added value. New tenant companies are needed to help achieve these goals, but no research has been done on selection criteria. To provide such criteria, this study conducted a comparative analysis of the productivity of tenant companies in Busan New Port Distripark based on their characteristics. Single-factor productivity (SFP) was measured using the operational data of 67 companies in the distripark over the past three years (2017-2019). The results indicate that the logistics businesses have strengths in cargo-volume productivity and the manufacturing businesses in sales productivity. Also, the Northern distripark, a relatively older facility, was found to be more productive than the Ung-dong distripark. Finally, investment, both foreign investment and investment in facilities and equipment (FAC/EQ), showed an inverse relationship with productivity: companies with below-average investment were more productive than those with above-average investment. Therefore, to enhance the productivity and competitiveness of port distriparks, tenant selection criteria should be reestablished, supplemented by systems and laws that promote employment and cargo volume.