• Title/Abstract/Keywords: Inverse Logistics

Search results: 7

PKCθ-Mediated PDK1 Phosphorylation Enhances T Cell Activation by Increasing PDK1 Stability

  • Kang, Jung-Ah;Choi, Hyunwoo;Yang, Taewoo;Cho, Steve K.;Park, Zee-Yong;Park, Sung-Gyoo
    • Molecules and Cells / Vol. 40, No. 1 / pp. 37-44 / 2017
  • PDK1 is essential for T cell receptor (TCR)-mediated activation of NF-κB, and PDK1-induced phosphorylation of PKCθ is important for TCR-induced NF-κB activation. However, the inverse regulation of PDK1 by PKCθ during T cell activation has not been investigated. In this study, we found that PKCθ is involved in human PDK1 phosphorylation and that its kinase activity is crucial for this phosphorylation. Mass spectrometry analysis of wild-type PKCθ or a kinase-inactive form of PKCθ revealed that PKCθ induces phosphorylation of human PDK1 at Ser-64. This PKCθ-induced PDK1 phosphorylation positively regulated T cell activation and TCR-induced NF-κB activation. Moreover, phosphorylation of human PDK1 at Ser-64 increased the stability of the human PDK1 protein. These results suggest that Ser-64 is an important phosphorylation site that is part of a positive feedback loop for human PDK1-PKCθ-mediated T cell activation.

Modeling of Magnetic Levitation Logistics Transport System Using Extreme Learning Machine

  • 이보훈;조재훈;김용태
    • 전자공학회논문지 / Vol. 50, No. 1 / pp. 269-275 / 2013
  • This paper proposes a modeling method for a magnetic levitation system using an Extreme Learning Machine (ELM). Linearized models based on Taylor series expansion have conventionally been used to model magnetic levitation systems, but such mathematical techniques are limited in how well they capture the system's nonlinearity. To overcome this limitation, we propose an ELM-based modeling method that benefits from ELM's fast training. In the proposed method, the input weights and hidden biases are chosen at random, and the output weights are computed via the Moore-Penrose generalized inverse. Experiments show that the proposed algorithm outperforms the mathematical technique in modeling the magnetic levitation system.
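The ELM recipe in the abstract (random input weights and hidden biases, output weights from the Moore-Penrose pseudoinverse) fits in a few lines. Below is a minimal sketch, assuming a single hidden layer with sigmoid activation and toy regression data; the variable names and data are placeholders, not the paper's maglev model.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Single-hidden-layer ELM: random input weights/biases,
    output weights via the Moore-Penrose pseudoinverse."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden outputs
    beta = np.linalg.pinv(H) @ T                     # output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# toy usage: fit a nonlinear map standing in for maglev input-output data
X = np.linspace(-1, 1, 200).reshape(-1, 1)
T = np.sin(3 * X) + 0.05 * np.random.default_rng(1).standard_normal(X.shape)
W, b, beta = elm_train(X, T, n_hidden=40)
print(np.mean((elm_predict(X, W, b, beta) - T) ** 2))  # training MSE
```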

The Balancing of Disassembly Line of Automobile Engine Using Genetic Algorithm (GA) in Fuzzy Environment

  • Seidi, Masoud;Saghari, Saeed
    • Industrial Engineering and Management Systems / Vol. 15, No. 4 / pp. 364-373 / 2016
  • Disassembly is one of the important activities in handling products at the end of their life (EOL). It is defined as a systematic technique for dividing a product into its constituent elements, segments, sub-assemblies, and other groups. In this article, we address a Fuzzy Disassembly Line Balancing Problem (FDLBP) with multiple objectives, which requires the allocation of disassembly tasks to an ordered set of disassembly workstations. Task processing times are fuzzy numbers with triangular membership functions. Four objectives are considered: (1) minimization of the number of disassembly workstations; (2) minimization of the sum of idle times across all workstations, while ensuring similar idle time at each workstation; (3) maximization of the preference for removing hazardous parts as early as possible; and (4) maximization of the preference for removing high-demand parts before low-demand parts. The proposed model was first solved with GAMS and then with a genetic algorithm (GA) implemented in MATLAB, and was applied to balance an automotive engine disassembly line in a fuzzy environment. The fuzzy results from the two programs were compared using a ranking technique based on the fuzzy mean and fuzzy dispersion. The comparison shows that the GA implemented in MATLAB is an efficient and effective approach to the FDLBP in terms of solution quality and determination of the optimal sequence.
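For readers unfamiliar with triangular fuzzy task times, the sketch below shows how a workstation's fuzzy load can be accumulated and defuzzified by the mean for ranking. This is a generic illustration under simple assumptions, not the paper's exact FDLBP formulation or its GA; the task times are made up.

```python
# Triangular fuzzy numbers as (low, mode, high) tuples.
def tfn_add(a, b):
    """Add two triangular fuzzy numbers componentwise."""
    return tuple(x + y for x, y in zip(a, b))

def tfn_mean(a):
    """Defuzzified mean of a triangular fuzzy number: (l + m + u) / 3."""
    return sum(a) / 3.0

# hypothetical station load: sum of fuzzy times of its assigned tasks
station_tasks = [(2, 3, 4), (1, 2, 5)]
load = (0.0, 0.0, 0.0)
for t in station_tasks:
    load = tfn_add(load, t)
print(load, tfn_mean(load))  # (3.0, 5.0, 9.0) and mean ~5.67 for ranking
```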

A Study on the Development of LDA Algorithm-Based Financial Technology Roadmap Using Patent Data

  • Koopo KWON;Kyounghak LEE
    • 한국인공지능학회지 / Vol. 12, No. 3 / pp. 17-24 / 2024
  • This study aims to derive a technology development roadmap in financial technology-related fields by utilizing patent documents. To this end, patent documents were retrieved using technical keywords drawn from prior research and related reports on financial technology. The TF-IDF (Term Frequency-Inverse Document Frequency) text mining technique was applied to the retrieved patent documents, and the Latent Dirichlet Allocation (LDA) algorithm was then used to identify keywords and the topics of core financial technologies. Based on the proportion of topics by year produced by LDA, promising technology fields and convergence fields were identified through trend analysis and similarity analysis between topics. Through network analysis of the identified financial technology fields, namely the technology data-based integrated management system of the high-dimensional payment system using RF and intelligent cards, and the security processing methodology for data information and network payment, a first-stage technology development roadmap for field development and a second-stage technology development roadmap for convergence were derived. The proposed method can serve as a sound basis for developing financial technology R&D strategies and technology roadmaps.
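A pipeline of the kind described, TF-IDF weighting followed by LDA topic extraction, can be sketched with scikit-learn as below. The corpus is hypothetical, and note that LDA is more commonly fit on raw term counts; feeding it TF-IDF weights here simply mirrors the paper's description.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# hypothetical patent abstracts standing in for the retrieved documents
docs = [
    "payment authentication using RF smart card",
    "network security processing for financial data",
    "blockchain based settlement and payment network",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)                # TF-IDF document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)             # per-document topic proportions

# top terms per topic, the raw material for trend/similarity analysis
terms = tfidf.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [terms[i] for i in comp.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```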

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model

  • 조원진;노상규;윤지영;박진수
    • Asia Pacific Journal of Information Systems / Vol. 21, No. 1 / pp. 103-122 / 2011
  • Recently, numerous documents have been made available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents. In this situation, it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors in an effort to guide users by facilitating the filtering process. In this way, a set of keywords is often considered a condensed version of the whole document and therefore plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide a list of five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords lack them, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, the implementation itself is the obstacle: manually assigning keywords to all documents is a daunting task, or even impractical, in that it is extremely tedious and time-consuming and requires a certain level of domain knowledge. Therefore, it is highly desirable to automate the keyword generation process. There are mainly two approaches to achieving this aim: the keyword assignment approach and the keyword extraction approach. Both approaches use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former approach, there is a given set of vocabulary, and the aim is to match its terms to the texts. In other words, the keyword assignment approach seeks to select the words from a controlled vocabulary that best describe a document. Although this approach is domain dependent and is not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter approach, on the other hand, the aim is to extract keywords with respect to their relevance in the text without a prior vocabulary. In this approach, automatic keyword generation is treated as a classification task, and keywords are commonly extracted based on supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, and as a result keyword extraction is limited to terms that appear in the document. Therefore, keyword extraction cannot generate implicit keywords that are not included in a document. According to Turney's experimental results, about 64% to 90% of the keywords assigned by authors can be found in the full text of an article. Conversely, this also means that 10% to 36% of author-assigned keywords do not appear in the article and thus cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we decided to adopt the keyword assignment approach. In this paper, we propose a new approach to automatic keyword assignment, namely IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords that have high similarity scores. Two keyword generation systems were implemented applying IVSM: an IVSM system for a Web-based community service and a stand-alone IVSM system. The former is implemented in a community service for sharing knowledge and opinions on current trends such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of academic papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. The performance of both systems is much better than that of baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to that of Extractor, a representative keyword extraction system developed by Turney. As electronic documents continue to increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
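A minimal sketch of steps (1)-(5) above, assuming whitespace tokenization and raw term counts as keyword weights; the keyword sets and target document are made up for illustration and are not from the paper's data.

```python
import numpy as np
from collections import Counter

def vectorize(tokens, vocab):
    """Term-count vector over a fixed vocabulary."""
    counts = Counter(tokens)
    return np.array([counts[t] for t in vocab], dtype=float)

def cosine(a, b):
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return 0.0 if na == 0 or nb == 0 else float(a @ b / (na * nb))

# hypothetical keyword sets learned from keyword-bearing documents
keyword_sets = {
    "logistics": ["port", "logistics", "distribution", "shipping"],
    "text mining": ["keyword", "document", "tfidf", "clustering"],
}
target_doc = "keyword generation assigns keyword lists to each document".split()

vocab = sorted({w for ws in keyword_sets.values() for w in ws} | set(target_doc))
doc_vec = vectorize(target_doc, vocab)                    # steps (2)-(3)
scores = {name: cosine(vectorize(ws, vocab), doc_vec)     # step (4)
          for name, ws in keyword_sets.items()}
print(max(scores, key=scores.get), scores)                # step (5)
```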

Development and Exploration of Safety Performance Functions Using Multiple Modeling Techniques: Trumpet Ramps

  • 양삼규;박준영;권경주;이현석
    • 한국ITS학회 논문지 / Vol. 20, No. 5 / pp. 35-44 / 2021
  • Although many recent studies have examined crashes on freeway mainline segments, research on traffic safety for non-mainline sections such as ramps remains scarce. Over the last five years (2015-2019), a total of 6,717 crashes occurred on ramps, accounting for about 15% of all freeway crashes. To provide more accurate crash prediction models for freeway ramp sections, this study built and compared simple and full safety performance functions (SPFs) using various statistical distributions, such as the Poisson-Gamma (PG) and Poisson-Inverse Gaussian (PIG), as well as techniques such as random effects. Traffic and roadway geometry data were collected from various systems, including road-view imagery. The analysis showed that the PIG models generally yield more accurate crash predictions, and that the random-effects models performed better for both simple and full SPFs. The results can serve as a reference for traffic practitioners seeking to understand and improve ramp-section traffic safety based on accurate crash prediction models.
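As a hedged illustration of one model family named above, the sketch below fits a Poisson-Gamma (negative binomial) SPF of the common form crashes = exp(b0) * AADT^b1 to synthetic ramp data with statsmodels. The data and coefficients are invented; the paper's PIG and random-effects variants would need more specialized tooling.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
aadt = rng.uniform(2_000, 40_000, n)             # synthetic ramp volumes
mu = np.exp(-7.0 + 0.9 * np.log(aadt))           # assumed "true" SPF
# Poisson-Gamma mixture = negative binomial counts with shape k = 2
crashes = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))

X = sm.add_constant(np.log(aadt))                # log link: ln(mu) = b0 + b1*ln(AADT)
res = sm.NegativeBinomial(crashes, X).fit(disp=0)
print(res.params)  # intercept, log(AADT) coefficient, dispersion alpha
```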

A Comparative Study on the Productivity by Characteristics of Tenant Companies in Busan New Port Distripark

  • 김양욱;차재웅;김율성
    • 한국항해항만학회지 / Vol. 44, No. 6 / pp. 509-516 / 2020
  • Korea has developed port distriparks in stages at its major trade ports, aiming to diversify port functions and create added value. However, research on the criteria for selecting tenant companies to achieve these goals is still lacking. This study compared and analyzed productivity by tenant-company characteristics in the Busan New Port Distripark to contribute to establishing tenant selection criteria. For the analysis, operating data of 67 companies over the last three years (2017-2019) were collected and single-factor productivity was measured. The results show that logistics companies have a productivity advantage in terms of cargo volume, while manufacturing companies have an advantage in terms of revenue. In addition, the North Container Distripark, which opened relatively early, showed higher overall productivity than the Ungdong Distripark. Finally, regarding productivity by the scale of foreign investment and of facility and equipment investment, companies with below-average investment were generally more productive than those with above-average investment, indicating a negative correlation. Therefore, to strengthen the productivity and competitiveness of port distriparks, institutional and legal improvements that enhance tenant companies' capacity to generate employment and cargo, and a redefinition of the selection criteria for new tenants, are needed.
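Single-factor productivity is simply output divided by one input. The sketch below computes it for hypothetical tenant companies, with cargo volume per worker standing in; the paper's actual output and input measures (e.g., cargo volume, revenue) may differ.

```python
# hypothetical per-company figures, not the study's data
companies = [
    {"name": "A", "sector": "logistics", "cargo_tons": 120_000, "workers": 40},
    {"name": "B", "sector": "manufacturing", "cargo_tons": 30_000, "workers": 25},
]
for c in companies:
    sfp = c["cargo_tons"] / c["workers"]   # single-factor productivity
    print(c["name"], c["sector"], round(sfp, 1))
```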