• Title/Summary/Keyword: Computing amount

Search results: 687 (processing time: 0.028 seconds)

Development of a Network Loading Model for Dynamic Traffic Assignment (동적 통행배정모형을 위한 교통류 부하모형의 개발)

  • 임강원
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.3
    • /
    • pp.149-158
    • /
    • 2002
  • To precisely describe real-time traffic patterns in an urban road network, dynamic network loading (DNL) models able to simulate traffic behavior are required. A number of different methods are available, including macroscopic and microscopic dynamic network models as well as analytical models. The analytical models are the equivalent minimization problem and the variational inequality problem, which include an explicit mathematical travel cost function for describing traffic behavior on the network, while microscopic simulation models move vehicles according to behavioral car-following and cell-transmission rules. However, DNL models embedding such travel time functions have limitations: analytical models cannot describe traffic characteristics such as the relations between flow and speed, or between speed and density, while microscopic simulation models are the most detailed and realistic but are difficult to calibrate and may not be the most practical tools for large-scale networks. To cope with these problems, this paper develops a new DNL model appropriate for dynamic traffic assignment (DTA). The model is combined with a vertical queue model representing vehicles as vertical queues at the ends of links. To compare and assess the model, we use a contrived example network. From the numerical results, we found that the DNL model presented in the paper was able to describe traffic characteristics within a reasonable amount of computing time. The model also showed a good relationship between travel time and traffic flow, and expressed the backward-bending feature near capacity.
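The "vertical queue" idea in this abstract can be illustrated with a point-queue link model: vehicles traverse the link at free-flow speed, then wait in a zero-length queue at the downstream end that discharges at the link's capacity. The sketch below is a generic point-queue formulation with illustrative names and parameters, not the authors' exact model.

```python
def simulate_point_queue(inflow, capacity, free_flow_time, dt=1.0):
    """Propagate inflow (vehicles per step) through a link whose congestion
    is modeled as a vertical queue at the downstream end of the link.

    Returns (outflow per step, queue length per step)."""
    n = len(inflow)
    lag = int(round(free_flow_time / dt))   # steps to traverse at free flow
    queue = 0.0
    outflow, queues = [], []
    for t in range(n + lag):
        # Vehicles that entered `lag` steps ago now reach the vertical queue.
        arriving = inflow[t - lag] if 0 <= t - lag < n else 0.0
        queue += arriving
        served = min(queue, capacity * dt)  # bottleneck discharge rate
        queue -= served
        outflow.append(served)
        queues.append(queue)
    return outflow, queues

# Demand exceeds capacity for three steps, so a queue builds, then drains:
out, q = simulate_point_queue(inflow=[10, 10, 10, 0, 0, 0],
                              capacity=6, free_flow_time=2)
```

Because outflow is capped at capacity while the queue is nonempty, link travel time grows with the queue, which is exactly the flow-travel-time relationship the abstract evaluates.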

An Empirical Study on the Influencing Factors of Perceived Job Performance in the Context of Enterprise Mobile Applications (업무성과에 영향을 주는 업무용 모바일 어플리케이션의 주요 요인에 관한 연구)

  • Chung, Sunghun;Kim, Kimin
    • Asia pacific journal of information systems
    • /
    • v.24 no.1
    • /
    • pp.31-50
    • /
    • 2014
  • The ubiquitous accessibility of information through mobile devices has led to an increased mobility of workers away from their fixed workplaces. Market researchers estimate that by 2016, 350 million workers will be using their smartphones for business purposes, and the use of smartphones will offer new business benefits. Enterprises are now adopting mobile technologies for numerous applications to increase their operational efficiency, improve their responsiveness and competitiveness, and cultivate their innovativeness. For these reasons, various organizational aspects of "mobile work" have received a great deal of recent attention. Moreover, many CIOs plan to allocate a considerable amount of their budgets to mobile work environments. In particular, with the consumerization of information technology, enterprise mobile applications (EMA) have played a significant role in the explosive growth of mobile computing in the workplace, and even in improving sales for firms in this field. EMA can be defined as mobile technologies and role-based applications, as companies design them for specific roles and functions in organizations. Technically, EMA can be defined as business enterprise systems, including critical business functions, that enable users to access enterprise systems via wireless mobile devices such as smartphones or tablets. Specifically, EMA give employees greater access to real-time information and provide them with simple features and functionalities that make it easy to complete specific tasks. While the impact of EMA on organizational workers' productivity has received considerable attention in the literature, relatively little research effort has been made to examine how EMA actually lead to users' job performance. In particular, we have a limited understanding of the key antecedents of such an EMA usage outcome.
In this paper, we focus on employees' perceived job performance as the outcome of EMA use, which indicates the successful role of EMA with regard to employees' tasks. To develop a deeper understanding of the relationship among EMA, their environment, and employees' perceived job performance, we develop a comprehensive model that considers the perceived fit between EMA and employees' tasks, satisfaction with EMA, and the organizational environment. With this model, we examine how job performance through EMA arises from both task-technology fit for EMA and satisfaction with EMA, while also considering the antecedent factors for these constructs. The objectives of this study are to address the following research questions: (1) How can employees successfully manage EMA in order to enhance their perceived job performance? (2) What internal and/or external factors are important antecedents in increasing EMA users' satisfaction with EMA and task-technology fit for EMA? (3) What are the impacts of organizational antecedents (e.g., organizational agility) and task-related antecedents (e.g., task mobility) on task-technology fit for EMA? (4) What are the impacts of internal antecedents (e.g., self-efficacy) and external antecedents (e.g., system reputation) on the habitual use of EMA? Based on a survey of 254 actual employees who use EMA in their workplaces across industries, our results indicate that task-technology fit for EMA and satisfaction with EMA are positively associated with job performance. We also identify task mobility, organizational agility, and system accessibility as positively associated with task-technology fit for EMA. Further, we find that an external factor, the reputation of EMA, and an internal factor, self-efficacy for EMA, are positively associated with satisfaction with EMA. 
The present findings enable researchers and practitioners to understand the role of EMA, which facilitate organizational workers' efficient work processes, as well as the importance of task-technology fit for EMA. Our model provides a new set of antecedent and consequence variables for a TAM involving mobile applications. The research model also provides empirical evidence that EMA are important mobile services that positively influence individuals' performance. Our findings suggest that perceived organizational agility and task mobility have a significant influence on task-technology fit for EMA usage through positive beliefs about EMA, that self-efficacy and system reputation can also influence individuals' satisfaction with EMA, and that these are important contingent factors for the impact of system satisfaction on perceived job performance. Our findings can help managers gauge the impact of EMA in terms of its contribution to job performance. Our results provide an explanation as to why many firms have recently adopted EMA for efficient business processes and productivity support. Our findings additionally suggest that the cognitive fit between task and technology can be an important requirement for the productivity support of EMA. Further, our findings can help managers formulate strategies and build an organizational culture that affect employees' perceived job performance. Managers can thus tailor their dependence on EMA as high or low, depending on their tasks' characteristics, to maximize job performance in the workplace. Overall, this study strengthens our knowledge regarding the impact of mobile applications in organizational contexts, technology acceptance, and the role of task characteristics. To conclude, we hope that our research inspires future studies exploring digital productivity in the workplace and/or taking the role of EMA into account for employee job performance.

IPC Multi-label Classification based on Functional Characteristics of Fields in Patent Documents (특허문서 필드의 기능적 특성을 활용한 IPC 다중 레이블 분류)

  • Lim, Sora;Kwon, YongJin
    • Journal of Internet Computing and Services
    • /
    • v.18 no.1
    • /
    • pp.77-88
    • /
    • 2017
  • Recently, with the advent of the knowledge-based society in which information and knowledge create value, patents, the representative form of intellectual property, have become important, and their number keeps growing. It is therefore necessary to classify patents appropriately by the technological topic of the invention in order to use the vast amount of patent information effectively. IPC (International Patent Classification) is widely used for this purpose. IPC automatic classification has been studied using data mining and machine learning algorithms to improve the current IPC classification task, which categorizes patent documents by hand. However, most previous research has focused on applying various existing machine learning methods to patent documents rather than considering the characteristics of the data or the structure of patent documents. In this paper, therefore, we propose to use two structural fields, technical field and background, considered to have an impact on patent classification; the two fields are selected by considering the characteristics of patent documents and the roles of the structural fields. We also construct a multi-label classification model to reflect the fact that a patent document can have multiple IPCs. Furthermore, we propose a method to classify patent documents at the IPC subclass level, comprising 630 categories, so as to investigate the possibility of applying the IPC multi-label classification model in real-world settings. The effect of the structural fields of patent documents is examined using 564,793 registered patents in Korea, and 87.2% precision is obtained when using title, abstract, claims, technical field, and background. From these results, we verify that the technical field and background play an important role in improving the precision of IPC multi-label classification at the IPC subclass level.
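The multi-label setup described above can be sketched with the standard binary-relevance transformation: one binary decision per IPC label, over features built from the selected structural fields. The field names, toy keyword-overlap scorer, and IPC codes below are illustrative stand-ins, not the paper's actual features or learners.

```python
from collections import Counter, defaultdict

def featurize(doc):
    # Concatenate the structural fields treated as most informative
    # (hypothetical field names for illustration).
    text = " ".join(doc.get(f, "") for f in ("title", "technical_field", "background"))
    return Counter(text.lower().split())

def train_binary_relevance(docs):
    # Binary relevance: one keyword profile per label, built from the
    # training documents carrying that label.
    profiles = defaultdict(Counter)
    for doc in docs:
        feats = featurize(doc)
        for label in doc["ipc"]:
            profiles[label].update(feats)
    return profiles

def predict(profiles, doc, threshold=1):
    feats = featurize(doc)
    scores = {label: sum(feats[w] * c for w, c in prof.items())
              for label, prof in profiles.items()}
    # Multi-label output: every label whose score clears the threshold.
    return {label for label, s in scores.items() if s >= threshold}

train = [
    {"title": "battery electrode material", "technical_field": "lithium cell",
     "background": "energy storage", "ipc": {"H01M"}},
    {"title": "image encoder", "technical_field": "video compression",
     "background": "signal coding", "ipc": {"H04N"}},
]
profiles = train_binary_relevance(train)
labels = predict(profiles, {"title": "lithium battery cell",
                            "technical_field": "electrode", "background": ""})
```

In a real system the scorer would be a trained classifier per subclass (630 of them at the IPC subclass level), but the label-set output shape is the same.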

Odysseus/Parallel-OOSQL: A Parallel Search Engine using the Odysseus DBMS Tightly-Coupled with IR Capability (오디세우스/Parallel-OOSQL: 오디세우스 정보검색용 밀결합 DBMS를 사용한 병렬 정보 검색 엔진)

  • Ryu, Jae-Joon;Whang, Kyu-Young;Lee, Jae-Gil;Kwon, Hyuk-Yoon;Kim, Yi-Reun;Heo, Jun-Suk;Lee, Ki-Hoon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.14 no.4
    • /
    • pp.412-429
    • /
    • 2008
  • As the number of electronic documents increases rapidly with the growth of the Internet, a parallel search engine capable of handling a large number of documents is becoming ever more important. To implement a parallel search engine, we need to partition the inverted index and search through the partitioned index in parallel. There are two methods of partitioning the inverted index: 1) document-identifier based partitioning and 2) keyword-identifier based partitioning. However, each method alone has drawbacks. The former is convenient for inserting documents and has high throughput, but has poor performance for top-k query processing. The latter has good performance for top-k query processing, but is inconvenient for inserting documents and has low throughput. In this paper, we propose a hybrid partitioning method to compensate for the drawbacks of each method. We design and implement a parallel search engine that supports the hybrid partitioning method using the Odysseus DBMS tightly coupled with information retrieval capability. We first introduce the architecture of the parallel search engine, Odysseus/parallel-OOSQL. We then show the effectiveness of the proposed system through systematic experiments. The experimental results show that the query processing time of the document-identifier based partitioning method is approximately inversely proportional to the number of blocks in the partition of the inverted index. The results also show that the keyword-identifier based partitioning method has good performance in top-k query processing. The proposed parallel search engine can be optimized for performance by customizing the method of partitioning the inverted index according to the application environment. The Odysseus/parallel-OOSQL parallel search engine is capable of indexing, storing, and querying 100 million web documents per node, or tens of billions of web documents for the entire system.
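The two partitioning schemes contrasted above can be made concrete with a small sketch. Under document-identifier partitioning, every node holds a fragment of every posting list (easy inserts, but a top-k query must merge partial lists from all nodes); under keyword-identifier partitioning, each node owns whole posting lists for a subset of terms (top-k for one term touches one node, but inserting a document touches many nodes). Function names and the modulo/hash placement rules are illustrative, not Odysseus internals.

```python
def partition_by_document(postings, n_nodes):
    """Document-identifier based: each node indexes a disjoint set of docs,
    so every node holds a (partial) posting list for every keyword."""
    parts = [dict() for _ in range(n_nodes)]
    for term, doc_ids in postings.items():
        for d in doc_ids:
            parts[d % n_nodes].setdefault(term, []).append(d)
    return parts

def partition_by_keyword(postings, n_nodes):
    """Keyword-identifier based: each node owns the complete posting list
    for a disjoint subset of terms."""
    parts = [dict() for _ in range(n_nodes)]
    for term, doc_ids in postings.items():
        parts[hash(term) % n_nodes][term] = list(doc_ids)
    return parts

postings = {"web": [1, 2, 3, 4], "search": [2, 4], "engine": [1, 3]}
by_doc = partition_by_document(postings, 2)  # "web" split across both nodes
by_kw = partition_by_keyword(postings, 2)    # "web" whole on a single node
```

A hybrid scheme, as proposed in the paper, would apply one rule within groups formed by the other, trading off insertion throughput against top-k locality per the application environment.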

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: Programmable rule set memory (RAM). 
On-chip fuzzification operation by a table lookup method. On-chip defuzzification operation by a centroid method. Reconfigurable architecture for processing two rule formats. RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E, and THEN Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, and researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. 
We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a few fixed programs, so using this approach to create an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes both in run time and in design time/cost.

TABLE I. INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences    125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250
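The max-min compositional inference that these chips implement in silicon can be modeled in a few lines of software: each rule fires with strength min over its antecedents (fuzzy AND), clipped consequents are aggregated with max (fuzzy OR), and the output is defuzzified by the centroid method. The rule set and membership functions below are toy examples, not the actual 51-rule ORNL configuration.

```python
def mamdani_infer(rules, inputs, universe):
    """rules: list of (antecedents, consequent), where antecedents maps an
    input name to a membership function and consequent is a membership
    array over `universe`. Returns the crisp (defuzzified) output."""
    aggregated = [0.0] * len(universe)
    for antecedents, consequent in rules:
        # Firing strength: min over all antecedent clauses (fuzzy AND).
        strength = min(mf(inputs[name]) for name, mf in antecedents.items())
        # Clip the consequent at the firing strength, fold in with max.
        for i, mu in enumerate(consequent):
            aggregated[i] = max(aggregated[i], min(strength, mu))
    # Centroid defuzzification, as done on-chip by the UNC/MCNC design.
    num = sum(x * mu for x, mu in zip(universe, aggregated))
    den = sum(aggregated)
    return num / den if den else 0.0

# Toy temperature-to-fan-speed controller.
cold = lambda t: max(0.0, min(1.0, (20 - t) / 20))
hot = lambda t: max(0.0, min(1.0, (t - 10) / 20))
universe = [0, 25, 50, 75, 100]              # fan-speed axis
low_speed = [1.0, 0.5, 0.0, 0.0, 0.0]
high_speed = [0.0, 0.0, 0.0, 0.5, 1.0]
rules = [({"temp": cold}, low_speed), ({"temp": hot}, high_speed)]
speed = mamdani_infer(rules, {"temp": 15}, universe)
```

The inner min/max loops are exactly the operations a min/max instruction (or dedicated datapath) accelerates, which is why the measured speed-ups above concentrate there.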


Automatic Text Extraction from News Video using Morphology and Text Shape (형태학과 문자의 모양을 이용한 뉴스 비디오에서의 자동 문자 추출)

  • Jang, In-Young;Ko, Byoung-Chul;Kim, Kil-Cheon;Byun, Hye-Ran
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.4
    • /
    • pp.479-488
    • /
    • 2002
  • In recent years the amount of digital video in use has risen dramatically, keeping pace with the increasing use of the Internet, and consequently an automated method is needed for indexing digital video databases. Textual information, both superimposed text and embedded scene text, appearing in a digital video can be a crucial clue for video indexing. In this paper, a new method is presented to extract both superimposed and embedded scene text from a freeze-frame of news video. The algorithm is summarized in the following three steps. In the first step, a color image is converted into a gray-level image and contrast stretching is applied to enhance the contrast of the input image; a modified local adaptive thresholding is then applied to the contrast-stretched image. The second step is divided into three processes: eliminating text-like components by applying erosion, dilation, and (OpenClose+CloseOpen)/2 morphological operations; maintaining text components using the (OpenClose+CloseOpen)/2 operation with a new Geo-correction method; and subtracting the two result images to further eliminate false-positive components. In the third, filtering step, characteristics of each component are used, such as the ratio of the number of pixels in each candidate component to the number of its boundary pixels, and the ratio of the minor to the major axis of each bounding box. Acceptable results have been obtained using the proposed method on 300 news images, with a recognition rate of 93.6%. The method also performs well on various kinds of images when the size of the structuring element is adjusted.
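The morphological building blocks named in the second step can be sketched on a toy binary image: erosion shrinks regions, dilation grows them, and opening (erode then dilate) removes components smaller than the structuring element while preserving larger ones. This minimal sketch uses a fixed 3x3 structuring element on a 0/1 grid; the paper's pipeline additionally combines OpenClose and CloseOpen and tunes the element size per image.

```python
def dilate(img):
    # Output pixel is 1 if any pixel in its 3x3 neighborhood is 1.
    h, w = len(img), len(img[0])
    return [[1 if any(img[y][x] for y in range(max(0, i - 1), min(h, i + 2))
                                for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def erode(img):
    # Output pixel is 1 only if every pixel in its 3x3 neighborhood is 1.
    h, w = len(img), len(img[0])
    return [[1 if all(img[y][x] for y in range(max(0, i - 1), min(h, i + 2))
                                for x in range(max(0, j - 1), min(w, j + 2)))
             else 0 for j in range(w)] for i in range(h)]

def opening(img):
    # Erosion followed by dilation: removes small speckles.
    return dilate(erode(img))

def closing(img):
    # Dilation followed by erosion: fills small holes.
    return erode(dilate(img))

# A lone speckle (top-left) is removed by opening; the solid block survives.
img = [[0, 0, 0, 0, 0],
       [1, 0, 1, 1, 1],
       [0, 0, 1, 1, 1],
       [0, 0, 1, 1, 1],
       [0, 0, 0, 0, 0]]
cleaned = opening(img)
```

Subtracting a "text eliminated" image from a "text maintained" image, as in the paper, then isolates exactly the components that only the text-preserving branch kept.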

Study on the Difference in Intake Rate by Kidney in Accordance with whether the Bladder is Shielded and Injection method in 99mTc-DMSA Renal Scan for Infants (소아 99mTc-DMSA renal scan에서 방광차폐유무와 방사성동위원소 주입방법에 따른 콩팥섭취율 차이에 관한 연구)

  • Park, Jeong Kyun;Cha, Jae Hoon;Kim, Kwang Hyun;An, Jong Ki;Hong, Da Young;Seong, Hyo Jin
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.20 no.2
    • /
    • pp.27-31
    • /
    • 2016
  • Purpose: A $^{99m}Tc-DMSA$ renal scan compares the function of the two kidneys by imaging the renal cortex and parenchyma and computing the ratio of radiation uptake between the left and right kidneys. Since the distance between the kidneys and the bladder is short given the bodily structure of an infant, the bladder is included in the examination field. This research was carried out on the presumption that counts from the bladder would influence the kidney measurements at the time of the renal scan. In consideration of the fact that only a trace amount of radioisotope (RI) is injected in a pediatric examination, the method of injection was also investigated concurrently. Materials and Methods: For 34 infants aged between 1 month and 12 months who underwent a $^{99m}Tc-DMSA$ renal scan, a post image was acquired at the scheduled test time after injecting the same quantity of DMSA, 0.5 mCi. After acquiring an additional image with the bladder shielded by a circular lead plate for comparison, the percentage (Lt. kidney counts + Rt. kidney counts) / total counts was computed by drawing equally sized ROIs (55.2 mm long × 70.0 mm wide). In addition, three RI injection methods were compared: a 3-way stopcock, a heparin cap, and direct injection into the patient. The differences in count changes for each method were compared by injecting an additional 2 cc of saline into the 3-way stopcock and the heparin cap. Results: The image prior to shielding of the bladder showed a kidney intake rate of $70.9{\pm}3.18%$, while the image after shielding of the bladder showed $79.4{\pm}5.19%$, a difference of approximately 6.5-8.5%. 
In terms of injection method, the 3-way stopcock showed $68.9{\pm}2.80%$ prior to shielding and $78.1{\pm}5.14%$ after shielding. The heparin cap showed $71.3{\pm}5.14%$ prior to shielding and $79.8{\pm}3.26%$ after shielding. Lastly, direct injection into the patient showed $75.1{\pm}4.30%$ prior to shielding and $82.1{\pm}2.35%$ after shielding, so the kidney intake rates ranked, from highest, direct injection, heparin cap, and 3-way stopcock. Conclusion: Since a far smaller quantity of radiopharmaceutical is injected for infants than for adults, shielding the bladder, which removes bladder radiation from the image, yielded higher kidney intake rates than leaving it unshielded. Although securing blood vessels is difficult, the method of direct injection is deemed more helpful for acquiring better images, since it shows a higher kidney intake rate than the other methods.
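The intake rate defined above is a simple ratio of ROI counts; the small sketch below shows the computation and why unshielded bladder activity depresses it. The count values are illustrative, not data from the study.

```python
def kidney_intake_rate(lt_counts, rt_counts, total_counts):
    """Percentage of total image counts falling inside the two kidney ROIs:
    (Lt. kidney counts + Rt. kidney counts) / total counts * 100."""
    return (lt_counts + rt_counts) / total_counts * 100

# With the bladder unshielded, bladder activity inflates total_counts and
# so deflates the kidney ratio; shielding removes those counts from the image.
unshielded = kidney_intake_rate(350_000, 360_000, 1_000_000)
shielded = kidney_intake_rate(350_000, 360_000, 895_000)
```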
