• Title/Summary/Keyword: Transactions


Study of Mg2Ni1-xFex Alloys by Mössbauer Resonance (Mössbauer 공명에 의한 Mg2Ni1-xFex 합금의 연구)

  • Song, MyoungYoup
    • Journal of Hydrogen and New Energy
    • /
    • v.10 no.2
    • /
    • pp.119-130
    • /
    • 1999
  • $Mg_2Ni_{1-x}{^{57}}Fe_x$ (x = 0.015, 0.03, 0.06, 0.12, and 0.24) alloys were prepared and studied by Mössbauer resonance. The Mössbauer spectra of the x = 0.015 and 0.03 alloys exhibit two doublets (doublets 1 and 2). The spectrum of the x = 0.06 alloy shows two doublets and one six-line pattern, and those of the x = 0.12 and 0.24 alloys show only a six-line pattern. Doublet 1 for the x = 0.015, 0.03, and 0.06 alloys is attributed to a fraction of excess Fe showing superparamagnetic behavior. Doublet 2 is attributed to Fe substituted for Ni in the $Mg_2Ni$ phase. Isomer shift values of 0.24~0.28 mm/s suggest that the iron exists in the $Fe^{3+}$ state. The nonzero quadrupole splitting of doublet 2 shows that the distribution of electrons around the iron is asymmetric; its values, 1.20~1.38 mm/s, approach the quadrupole value for oxidation number +3. The six-line pattern, which shows magnetic hyperfine interactions, results from iron that has not substituted for nickel in the $Mg_2Ni$ phase. The Mössbauer spectra of the hydrided alloys with x = 0.015 and 0.03 show a six-line pattern, suggesting that the iron segregates during the hydriding reaction. Analysis of the Mössbauer spectrum, the variation of magnetization with magnetic field, Auger electron spectroscopy, and electron diffraction shows the segregation of Ni and the formation of MgO, which is considered to result from the reaction of the $Mg_2Ni$ phase with oxygen contained as an impurity in the hydrogen.


A Methodology for Extracting Shopping-Related Keywords by Analyzing Internet Navigation Patterns (인터넷 검색기록 분석을 통한 쇼핑의도 포함 키워드 자동 추출 기법)

  • Kim, Mingyu;Kim, Namgyu;Jung, Inhwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.123-136
    • /
    • 2014
  • Recently, online shopping has further developed as the use of the Internet and a variety of smart mobile devices becomes more prevalent. The increase in the scale of such shopping has led to the creation of many Internet shopping malls. Consequently, there is a tendency for increasingly fierce competition among online retailers, and as a result, many Internet shopping malls are making significant attempts to attract online users to their sites. One such attempt is keyword marketing, whereby a retail site pays a fee to expose its link to potential customers when they insert a specific keyword on an Internet portal site. The price related to each keyword is generally estimated by the keyword's frequency of appearance. However, it is widely accepted that the price of keywords cannot be based solely on their frequency because many keywords may appear frequently but have little relationship to shopping. This implies that it is unreasonable for an online shopping mall to spend a great deal on some keywords simply because people frequently use them. Therefore, from the perspective of shopping malls, a specialized process is required to extract meaningful keywords. Further, the demand for automating this extraction process is increasing because of the drive to improve online sales performance. In this study, we propose a methodology that can automatically extract only shopping-related keywords from the entire set of search keywords used on portal sites. We define a shopping-related keyword as a keyword that is used directly before shopping behaviors. In other words, only search keywords that direct the search results page to shopping-related pages are extracted from among the entire set of search keywords. A comparison is then made between the extracted keywords' rankings and the rankings of the entire set of search keywords. Two types of data are used in our study's experiment: web browsing history from July 1, 2012 to June 30, 2013, and site information. 
The experimental dataset came from a web site ranking service and the biggest portal site in Korea. The original sample dataset contains 150 million transaction logs. First, portal sites are selected, and the search keywords used on those sites are extracted; they can be extracted by simple parsing. The extracted keywords are ranked by frequency. The experiment uses approximately 3.9 million search results from Korea's largest search portal site; a total of 344,822 search keywords were extracted. Next, using the web browsing history and site information, the shopping-related keywords were taken from the entire set of search keywords, yielding 4,709 shopping-related keywords. For performance evaluation, we compared the hit ratios of all search keywords with those of the shopping-related keywords. To achieve this, we extracted 80,298 search keywords from several Internet shopping malls and chose the top 1,000 keywords as the set of true shopping keywords. We measured the precision, recall, and F-scores of the entire keyword set and of the shopping-related keywords, where the F-score is the harmonic mean of precision and recall. The precision, recall, and F-score of the shopping-related keywords derived by the proposed methodology were all higher than those of the entire keyword set. This study proposes a scheme that obtains shopping-related keywords in a relatively simple manner: we can extract them simply by examining transactions whose next visit is a shopping mall. The resultant shopping-related keyword set is expected to be a useful asset for the many shopping malls that participate in keyword marketing. Moreover, the proposed methodology can easily be applied to the construction of other special-area keyword sets as well as shopping-related ones.
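The evaluation described in the abstract above can be sketched as follows. This is a minimal illustration of precision, recall, and the F-score as the harmonic mean of the two; the keyword values are hypothetical, not from the paper's dataset.

```python
# Sketch of the evaluation above: precision, recall, and F-score
# (harmonic mean) of an extracted keyword set against a set of
# "true" shopping keywords. The sample keywords are illustrative only.

def evaluate(extracted: set, true_keywords: set):
    """Return (precision, recall, f_score) for an extracted keyword set."""
    hits = len(extracted & true_keywords)
    precision = hits / len(extracted) if extracted else 0.0
    recall = hits / len(true_keywords) if true_keywords else 0.0
    # F-score: harmonic mean of precision and recall
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall > 0 else 0.0)
    return precision, recall, f_score

true_set = {"sneakers", "laptop", "handbag", "headphones"}
extracted = {"sneakers", "laptop", "weather"}   # hypothetical extraction
p, r, f = evaluate(extracted, true_set)
# 2 of 3 extracted keywords are true shopping keywords (precision 2/3);
# 2 of 4 true keywords were found (recall 1/2).
```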

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.93-107
    • /
    • 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of data collection by companies about customer needs. Most companies have failed to uncover such needs for products or services properly in terms of demographic data such as age, income levels, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with appropriate business information for current business circumstances. However, part of the problem is the increasing regulation of personal data gathering and privacy. This makes demographic or transaction data collection more difficult, and is a significant hurdle for traditional recommendation approaches because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper to academia is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the proposed approach in this paper attempts to construct double two-mode networks, such as a user-news network and news-issue network, and to integrate these into one quasi-network as the input for issue clustering. One of the contributions of this research is the development of a methodology utilizing enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks of news logs, we need some tools such as text mining and topic analysis. We used not only SAS Enterprise Miner 12.1, which provides a text miner module and cluster module for textual data analysis, but also NetMiner 4 for network visualization and analysis. 
Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering the unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network. For simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare the results with clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build the multi-layered two-mode network, after which we compared the issue-clustering results from SAS with those from network analysis. The experimental dataset came from a web site ranking service and the biggest portal site in Korea; the sample contains 150 million transaction logs and 13,652 news articles from 5,000 panelists over one year. The user-article and article-issue networks are constructed and merged into a user-issue quasi-network using NetMiner. Our issue clustering applied the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), and its results are consistent with the clustering results from SAS. In spite of extensive efforts to support users through recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support for decision-making in companies because it enriches user-related data with unstructured textual data. To overcome the insufficient-data problem of traditional approaches, our methodology infers customers' real interests from web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
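The network-merging phase described above, joining a user-article network and an article-issue network on their shared article mode, can be sketched as follows. This is an assumption-laden illustration, not the paper's NetMiner procedure; the edge lists and node names are hypothetical.

```python
# Sketch of merging double two-mode networks into a user-issue
# quasi-network: user-article edges and article-issue edges are joined
# on the shared article mode, and each edge weight counts the article
# paths linking a user to an issue. Sample edges are hypothetical.

from collections import defaultdict

user_article = [("u1", "a1"), ("u1", "a2"), ("u2", "a2")]
article_issue = [("a1", "economy"), ("a2", "economy"), ("a2", "sports")]

def merge_two_mode(ua_edges, ai_edges):
    issues_of = defaultdict(set)
    for article, issue in ai_edges:
        issues_of[article].add(issue)
    user_issue = defaultdict(int)
    for user, article in ua_edges:
        for issue in issues_of[article]:
            user_issue[(user, issue)] += 1   # one path per shared article
    return dict(user_issue)

quasi = merge_two_mode(user_article, article_issue)
# quasi[("u1", "economy")] counts the articles linking u1 to "economy"
```

The resulting weighted user-issue edges would then feed a clustering step such as PAM.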

Intents of Acquisitions in Information Technology Industries (정보기술 산업에서의 인수 유형별 인수 의도 분석)

  • Cho, Wooje;Chang, Young Bong;Kwon, Youngok
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.123-138
    • /
    • 2016
  • This study investigates the intents of acquisitions in information technology industries. Mergers and acquisitions are corporate-level strategic decisions and have been an important tool for firm growth. Many firms in information technology industries have acquired startups over the last few decades to increase production efficiency, expand their customer base, or improve quality. For example, Google has made about 200 acquisitions since 2001, Cisco has acquired about 210 firms since 1993, Oracle has made about 125 acquisitions since 1994, and Microsoft has acquired about 200 firms since 1987. Although many existing papers theoretically study the intents or motivations of acquisitions, few papers investigate them empirically, mainly because it is challenging to measure and quantify the intents of M&As. This study examines acquisition intent by measuring specific intents for M&A transactions. Using our measures of acquisition intent, we compare intents across four acquisition types: (1) a hardware firm acquires a hardware firm, (2) a hardware firm acquires a software/IT service firm, (3) a software/IT service firm acquires a hardware firm, and (4) a software/IT service firm acquires a software/IT service firm. We presume that the reasons for acquisition differ across these four cases. Using data on M&As in US IT industries, we identified the major intents of the M&As. The acquisition intents are identified from the press releases of M&A announcements and measured in four categories. First, an acquirer may intend cost savings in operations by sharing common resources between the acquirer and the target; the cost savings can accrue from economies of scope and scale. Second, an acquirer may intend product enhancement/development: knowledge and skills transferred from the target may enable the acquirer to enhance product quality or expand product lines. Third, an acquirer may intend to gain an additional customer base in order to expand the market, penetrate the market, or enter a foreign market. Fourth, a firm may acquire a target intending to expand its customer channels; by complementing its existing channels to the customer, the firm can increase its revenue. Our results show that acquirers have had cost-saving intents more often in acquisitions between hardware companies than in acquisitions between software companies. Hardware firms are more likely than software firms to acquire with intents of product enhancement or development. Overall, product enhancement/development is the most frequent intent across all four acquisition types, and customer base expansion is the second. We also analyze our data with a classification into production-side and customer-side intents, based on the activities of a firm's value chain. Intents of cost saving in operations and of product enhancement/development can be viewed as production-side intents, while intents of customer base expansion and of expanding customer channels can be viewed as customer-side intents. Our analysis shows that the ratio of customer-side intents to production-side intents is higher in acquisitions where a software firm is the acquirer than in acquisitions where a hardware firm is the acquirer. This study can contribute to the IS literature. First, it provides insights for understanding M&As in IT industries by answering the question of why an IT firm acquires another IT firm. Second, it provides the distribution of acquisition intents by acquisition type.

Scheduling Algorithms and Queueing Response Time Analysis of the UNIX Operating System (UNIX 운영체제에서의 스케줄링 법칙과 큐잉응답 시간 분석)

  • Im, Jong-Seol
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.3
    • /
    • pp.367-379
    • /
    • 1994
  • This paper describes the scheduling algorithms of the UNIX operating system and shows an analytical approach for approximating the average conditional response time of a process in the UNIX operating system. The average conditional response time is the average time between the submission of a process requiring a certain amount of CPU time and the completion of the process. The process scheduling algorithms in the UNIX system are based on priority service disciplines. That is, the behavior of a process is governed by the UNIX process scheduling algorithms: (ⅰ) time-shared computer usage is obtained by allotting each request a quantum until it completes its required CPU time, (ⅱ) nonpreemptive switching in system mode and preemptive switching in user mode are applied to determine the quantum, (ⅲ) the first-come-first-served discipline is applied within the same priority level, and (ⅳ) after completing an allotted quantum, the process is placed at the end of either the runnable queue corresponding to its priority or the disk queue where it sleeps. These scheduling algorithms create a round-robin effect in user mode. Using the round-robin effect and preemptive switching, we approximate a process's delay in user mode. Using nonpreemptive switching, we approximate a process's delay in system mode. We also consider process delay due to disk input and output operations. The average conditional response time is then obtained by approximating the total process delay. The results show an excellent response time for processes requiring system time, at the expense of processes requiring user time.
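The round-robin effect described in steps (ⅰ) and (ⅳ) above can be sketched with a toy simulation. This is a simplified single-priority model for illustration; the quantum and service times are assumptions, not UNIX parameters, and system-mode nonpreemptive switching is omitted.

```python
# Minimal sketch of the round-robin effect: each runnable process
# receives a fixed quantum per pass until its required CPU time
# completes, and after each quantum it rejoins the end of the queue.
# Short requests therefore finish earlier than long ones.

from collections import deque

def round_robin(required, quantum):
    """required: {pid: cpu_time_needed}; returns {pid: completion_time}."""
    queue = deque(required.items())
    clock = 0
    finish = {}
    while queue:
        pid, remaining = queue.popleft()
        slice_ = min(quantum, remaining)
        clock += slice_
        remaining -= slice_
        if remaining > 0:
            queue.append((pid, remaining))  # back of the runnable queue
        else:
            finish[pid] = clock             # process completes here
    return finish

done = round_robin({"p1": 3, "p2": 1, "p3": 2}, quantum=1)
# p2 (shortest request) completes first; p1 (longest) completes last.
```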


ATM Cell Encipherment Method using Rijndael Algorithm in Physical Layer (Rijndael 알고리즘을 이용한 물리 계층 ATM 셀 보안 기법)

  • Im Sung-Yeal;Chung Ki-Dong
    • The KIPS Transactions:PartC
    • /
    • v.13C no.1 s.104
    • /
    • pp.83-94
    • /
    • 2006
  • This paper describes an ATM cell encipherment method using the Rijndael algorithm, adopted as the AES (Advanced Encryption Standard) by NIST in 2001. ISO 9160 describes the requirements for physical-layer data processing in encryption/decryption. To demonstrate the method, we implemented ATM data encipherment equipment that satisfies the requirements of ISO 9160 and verified encipherment/decipherment processing at the ATM STM-1 rate (155.52 Mbps). The DES algorithm processes data in 64-bit blocks with a 56-bit key (64 bits including parity), whereas the Rijndael algorithm processes data in 128-bit blocks with a selectable key length of 128, 192, or 256 bits; it is therefore more flexible for high-bit-rate data processing and stronger in encryption strength than DES. For real-time encryption of a high-bit-rate data stream, the Rijndael algorithm was implemented in an FPGA in this experiment. The boundary of each serial UNI cell is detected by the CRC method, and for a user data cell the 48-octet (384-bit) payload is converted to parallel and transferred to three Rijndael encipherment modules in 128-bit blocks individually. After encryption, the header stored in a buffer is attached to the enciphered payload and retransmitted in cell format. At the receiving end, the cell boundary is detected by the CRC method and the payload type is determined. If the payload is a user data cell, it is transferred to the three Rijndael decryption modules in 128-bit blocks for decryption. In the case of a maintenance cell, the payload is extracted without decryption.
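The cell pipeline described above, splitting a 48-octet payload into three 128-bit blocks, enciphering each independently, and reattaching the cleartext header, can be sketched as follows. A toy XOR cipher stands in for the Rijndael core (an assumption for illustration only; real AES-128 hardware or a crypto library would replace it).

```python
# Sketch of the cell-payload pipeline above: a 53-octet ATM cell is a
# 5-octet header plus a 48-octet payload; the payload is split into
# three 128-bit (16-octet) blocks, each block is enciphered
# independently, and the unencrypted header is reattached.
# toy_block_cipher is a hypothetical XOR stand-in for Rijndael.

KEY = bytes(range(16))  # hypothetical 128-bit key

def toy_block_cipher(block: bytes, key: bytes) -> bytes:
    """Placeholder for a 128-bit Rijndael block operation (XOR only)."""
    return bytes(b ^ k for b, k in zip(block, key))

def encipher_cell(cell: bytes) -> bytes:
    assert len(cell) == 53, "ATM cell = 5-octet header + 48-octet payload"
    header, payload = cell[:5], cell[5:]
    # three 128-bit blocks processed independently, as in the paper
    blocks = [payload[i:i + 16] for i in range(0, 48, 16)]
    enciphered = b"".join(toy_block_cipher(b, KEY) for b in blocks)
    return header + enciphered  # header is retransmitted in the clear

cell = bytes(53)            # dummy all-zero cell for illustration
out = encipher_cell(cell)
```

Because the XOR stand-in is an involution, applying `encipher_cell` twice recovers the original cell; with real Rijndael, a separate decryption path mirrors the three-module structure.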

A Processing of Progressive Aspect "te-iru" in Japanese-Korean Machine Translation (일한기계번역에서 진행형 "ている"의 번역처리)

  • Kim, Jeong-In;Mun, Gyeong-Hui;Lee, Jong-Hyeok
    • The KIPS Transactions:PartB
    • /
    • v.8B no.6
    • /
    • pp.685-692
    • /
    • 2001
  • This paper describes how to disambiguate the aspectual meaning of the Japanese expression "-te iru" in Japanese-Korean machine translation. Due to the grammatical similarities of the two languages, almost all Japanese-Korean MT systems have been developed under the direct MT strategy, in which lexical disambiguation is essential to high-quality translation. Japanese has a progressive aspectual marker "-te iru" that is difficult to translate into Korean equivalents, because Korean has two different progressive aspectual markers: "-ko issta" for the action progressive and "-e issta" for the state progressive. Moreover, the aspectual systems of the two languages do not quite coincide, so the Korean progressive aspect cannot be determined from the Japanese meaning of "-te iru" alone. The progressive aspectual meaning may be partially determined by the meaning of the predicate, and the semantic meaning of the predicate may in turn be partially restricted by adverbials. All Japanese predicates are therefore classified into five classes: the 1st class of verbs is used only for the action progressive, the 2nd generally for the action progressive but occasionally for the state progressive, the 3rd only for the state progressive, the 4th generally for the state progressive but occasionally for the action progressive, and the 5th for the others. Heuristic rules are defined to disambiguate the 2nd and 4th classes on the basis of adverbs and adverbial phrases. In an experimental evaluation using more than 15,000 sentences from the Asahi newspaper, the proposed method improved translation quality by about 5%, which proves that it is effective in disambiguating "-te iru" for Japanese-Korean machine translation.
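The five-class scheme with adverb-based overrides described above can be sketched as a small rule table. The verb lexicon and adverb cues here are hypothetical illustrations, not the paper's actual lexicon or heuristics.

```python
# Rule sketch of the five-class "-te iru" disambiguation: classes 1-2
# default to the action progressive "-ko issta", classes 3-4 to the
# state progressive "-e issta", and adverb cues override the defaults
# for the ambiguous classes 2 and 4. Verb classes and cue adverbs
# below are assumed for illustration only.

VERB_CLASS = {"taberu": 1, "kiru": 2, "sinu": 3, "motsu": 4}  # assumed
STATE_CUES = {"zutto"}    # hypothetical adverbs cueing a resulting state
ACTION_CUES = {"ima"}     # hypothetical adverbs cueing ongoing action

def translate_te_iru(verb: str, adverbs=()) -> str:
    cls = VERB_CLASS.get(verb, 5)
    if cls in (1, 2):
        marker = "-ko issta"          # action progressive default
    elif cls in (3, 4):
        marker = "-e issta"           # state progressive default
    else:
        marker = "-ko issta"          # fallback for the residual class 5
    adv = set(adverbs)
    if cls == 2 and adv & STATE_CUES:
        marker = "-e issta"           # occasional state reading
    if cls == 4 and adv & ACTION_CUES:
        marker = "-ko issta"          # occasional action reading
    return marker
```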


S-MADP : Service based Development Process for Mobile Applications of Medium-Large Scale Project (S-MADP : 중대형 프로젝트의 모바일 애플리케이션을 위한 서비스 기반 개발 프로세스)

  • Kang, Tae Deok;Kim, Kyung Baek;Cheng, Ki Ju
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.555-564
    • /
    • 2013
  • Innovative evolution in mobile devices, along with the recent spread of tablet PCs and smartphones, is driving change not only in individual life but also in enterprise applications. In particular, medium-large mobile applications for large enterprises, which generally take more than three months to develop, grow significantly in importance and complexity. An Agile methodology is generally used as the development process for medium-large scale mobile applications, but issues arise such as high dependency on skilled developers and a lack of detailed development directives. In this paper, S-MADP (Smart Mobile Application Development Process) is proposed to mitigate these issues. S-MADP is a service-oriented development process, extending an object-oriented development process, for medium-large scale mobile applications. S-MADP provides detailed development directives for each activity during the entire process, defining services as server-based or client-based and providing a way to reuse services. To support various user interfaces, S-MADP also provides detailed UI development directives. To evaluate S-MADP, three mobile application development projects were conducted and the results analyzed: 'TBS (TB Mobile Service) 3.0' at TB company, a mobile app store at TS company, and mobile groupware at TG group. Compared with an existing Agile methodology, S-MADP produced more detailed design information about 'minimizing the use of resources', 'service-based design', and 'user interfaces optimized for mobile devices', which must be carefully considered in a mobile application development environment. It therefore improves the usability, maintainability, and efficiency of the developed mobile applications. Through field tests, S-MADP was observed to outperform an Agile methodology by about 25% in the man-months required to develop a medium-large mobile application.

Effect of Cognitive Affordance of Interactive Media Art Content on the Interaction and Interest of Audience (인터랙티브 미디어아트 콘텐츠의 인지적 어포던스가 관람자의 인터랙션과 흥미에 미치는 영향)

  • Lee, Gangso;Choi, Yoo-Joo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.9
    • /
    • pp.441-450
    • /
    • 2016
  • In this study, we investigate the effect of the level of cognitive affordance, which explains an explicit interaction method, on the interest of viewers. A viewer's recognition of the interaction method is associated with cognitive affordance, as a matter of the visual-perceptual exposure of the input device and the viewer's cognition of it. The final goal of research on affordance is to enhance audience participation rather than merely smooth interaction. Many interactive media artworks have been designed to hide explicit explanation of the interaction method, out of concern that such explanation may hinder the induction of impressions leading the viewer to an aesthetic experience and the retention of interest. In this context, we set up two hypotheses for the study of cognitive affordance. First, the more explicit the explanation of the interaction method, the higher the viewer's understanding of it. Second, the more explicit the explanation of the interaction method, the lower the viewer's interest. An interactive media artwork was produced in three versions that vary in the degree of visual-perceptual information presented, and we analyzed the participation and interest level of the audience for each version. As a result of the experiments, the version with a highly explicit interaction method showed long viewing times and high viewer participation and interest. By contrast, the version with an unexplicit interaction method showed low viewer interest and satisfaction. Therefore, the hypothesis that a more explicit explanation of interaction would lower the viewer's curiosity and interest in exploration was rejected. It was confirmed that improving cognitive affordance raised interaction with the artwork and the viewer's interest in the proposed interactive content. This study implies that interactive media artworks should be designed with the awareness that audience interaction and interest can fall when cognitive affordance is low.

A Study on Trust Transfer in Traditional Fintech of Smart Banking (핀테크 서비스에서 오프라인에서 온라인으로의 신뢰전이에 관한 연구 - 스마트뱅킹을 중심으로 -)

  • Ai, Di;Kwon, Sun-Dong;Lee, Su-Chul;Ko, Mi-Hyun;Lee, Bo-Hyung
    • Management & Information Systems Review
    • /
    • v.36 no.3
    • /
    • pp.167-184
    • /
    • 2017
  • In this study, we investigated the effect of offline banking trust on smart banking trust. As influencing factors of smart banking trust, this study compared offline banking trust, smart banking's system quality, and information quality. For the empirical study, 186 questionnaires were collected from smart banking users and the data were analyzed using Smart-PLS 2.0. As a result, trust transfer in FinTech services was verified by the significant effect of offline banking trust on smart banking trust. It was also shown that the effect of offline banking trust on smart banking trust is weaker than that of smart banking's own qualities. The contribution of this study can be seen in both academic and industrial aspects. First, the academic contribution: previous studies on banking focused on either offline banking or smart banking, but this study, focusing on the relationship between offline and online banking, proved that offline banking trust affects smart banking trust. Next, the industrial contribution: this study showed that the offline banking characteristics of traditional commercial banks affect trust in emerging smart banking services. This means that emerging FinTech companies are at a disadvantage in the competition to build trust compared with traditional commercial banks. Unlike traditional commercial banks, emerging FinTech firms innovate for customer convenience by arming themselves with new technologies such as the mobile Internet, social networks, cloud technology, and big data. However, these FinTech strengths alone cannot guarantee the trust needed for financial transactions, because banking customers do not easily change the habits and inertia built up while using traditional banks. Therefore, emerging FinTech companies should strive to create disruptive value that reflects their connections with various Internet services and their strength in online interaction, such as social services, where they have an advantage in customer contact. And emerging FinTech companies should strive to build service trust, focusing on young people with low resistance to new services.
