• Title/Summary/Keyword: top-k

Search results: 7,044 (processing time: 0.045 seconds)

Radon concentration measurement at general house in Pusan area (부산지역 일반주택에서의 라돈농도측정)

  • Im, In-Cheol
    • Journal of radiological science and technology
    • /
    • v.27 no.2
    • /
    • pp.29-33
    • /
    • 2004
  • Until the early 1980s, people lived without considering radon's effects on their health, although scientists knew that radon poses a radioactive hazard in the indoor spaces where we spend long periods of time. In Korea in particular, interest in the health effects of radon exposure has been low. Recently, with growing awareness of radon contamination, its importance and dangers have been publicized in some Seoul subway stations and in the indoor air of school facilities, prompting interest in measurement and management. Radon gas emitted from the ground beneath a building typically enters indoor spaces through gaps and cracks in the building floor, increasing indoor contamination by radon and its daughter nuclides. Indoor radon concentration therefore rises when there are many gaps between the floor and the foundation, or cracks connecting underground spaces to the indoor air. Radon also enters indoors through building materials, water, and kitchen natural gas, but more than about 85% is emitted from the earth's crust. The health hazard posed by radon and its daughter nuclides, which are implicated as a cause of lung cancer, increases where the uranium content of the soil is high, particularly in mines, caves, and poorly ventilated enclosed spaces, including houses. There is no known safe level of radon concentration; some degree of risk, large or small, is always present. The important thing is therefore to reduce the risk of lung cancer by lowering radon concentrations within houses and buildings. For this reason, radon concentrations in a general house were considered worth measuring, and were measured monthly using a scintillation radon monitor.
The results showed that concentrations were higher underground than above ground throughout the year, and higher in winter than in summer. In particular, the underground measurements in the house exceeded the US Environmental Protection Agency action level of 4 pCi/l in 4 of the 12 months, indicating the seriousness of radon exposure. It is therefore thought that establishing standards and regulations and preparing reduction countermeasures for radon are pressing needs, and the measured radon concentrations are reported here.
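As a point of reference, the 4 pCi/l US EPA action level mentioned above converts exactly to SI units, since 1 pCi/l = 37 Bq/m³; a minimal sketch (not part of the study):

```python
# Convert radon concentration between pCi/l and SI units (Bq/m^3).
# Conversion factor: 1 pCi/l = 37 Bq/m^3 (exact, since 1 Ci = 3.7e10 Bq).

PCI_PER_L_TO_BQ_PER_M3 = 37.0

def pci_l_to_bq_m3(pci_per_l: float) -> float:
    return pci_per_l * PCI_PER_L_TO_BQ_PER_M3

def bq_m3_to_pci_l(bq_per_m3: float) -> float:
    return bq_per_m3 / PCI_PER_L_TO_BQ_PER_M3

# The US EPA action level referenced in the abstract:
print(pci_l_to_bq_m3(4.0))  # -> 148.0
```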


Studies on the Evaluation of the Spent Composts of Selenium-Enriched Mushrooms as a Feed Selenium Source (셀레늄강화 버섯폐배지에 대한 사료 셀레늄공급원으로의 평가 연구)

  • Kim, W.Y.;Min, J.K.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.7 no.1
    • /
    • pp.118-130
    • /
    • 2005
  • This study was conducted to evaluate the spent composts of selenium-enriched mushrooms as a feed selenium source. Total selenium (Se) contents and Se profiles in the spent mushroom composts (SMC) were determined. In addition, we investigated the metabolism related to Se accumulation in the mushroom. The mushroom used in this study was Flammulina velutipes; Se-enriched mushrooms were grown for 60 days by adding 2 mg of inorganic Se (Na2SeO3) per kg of mushroom composts (MC) on an as-fed basis, and they were compared with mushrooms grown without added Se. Total Se contents of Se-treated mushrooms were significantly increased (P<0.0001), by 20-fold (4.51 ㎍/g of dry matter), compared to Se-untreated mushrooms (0.23 ㎍/g of dry matter). In contrast, the organic Se proportion was significantly lower (P<0.0001) in the Se-treated mushrooms (72.3%) than in the Se-untreated ones (100%; no inorganic Se was analytically detected). The Se distribution along the length of the Se-treated mushrooms was highest in the bottom part (6.86 ㎍/g of dry matter) nearest the MC; the top and middle parts were significantly lower (3.71 and 3.01 ㎍/g of dry matter, respectively) than the bottom (P<0.001). In the SMC from Se-treated mushrooms, a high concentration of Se (5.04 ㎍/g of dry matter) still remained, whereas that from Se-untreated mushrooms was significantly lower (P<0.0001), at 0.08 ㎍/g of dry matter. Se-treated SMC showed a high proportion of organic Se (65.67%), suggesting that most of the inorganic Se in the SMC had been converted to organic Se by the mushroom mycelia; Se-untreated SMC showed 100% organic Se, with no inorganic Se detected. Prior to mycelia inoculation in the mushroom culture, the sterilization of MC caused approximately 18% of the Se in the MC to be lost. This result accords with the generally known fact that Se is unstable at high temperature and is consequently volatilized under such conditions.
Apparent and net accumulation rates of Se into the mushrooms were 14.81% and 10.14%, respectively; their difference (4.67%) is considered to be due to volatilization into the air via the metabolic processes of the mushroom itself. From the results of this study, the addition of inorganic Se to MC improved the Se content of the mushroom, and SMC from Se-enriched mushrooms contained a high concentration of Se. The mycelium and fruiting body of the mushrooms converted inorganic Se in the MC to organic Se, resulting in a high proportion of organic Se in both the mushroom and the SMC. Therefore, the Se in Se-enriched mushrooms and SMC can be regarded as a Se source for human food as well as livestock feed.

Establishment of Test Conditions and Interlaboratory Comparison Study of Neuro-2a Assay for Saxitoxin Detection (Saxitoxin 검출을 위한 Neuro-2a 시험법 조건 확립 및 실험실 간 변동성 비교 연구)

  • Youngjin Kim;Jooree Seo;Jun Kim;Jeong-In Park;Jong Hee Kim;Hyun Park;Young-Seok Han;Youn-Jung Kim
    • Journal of Marine Life Science
    • /
    • v.9 no.1
    • /
    • pp.9-21
    • /
    • 2024
  • Paralytic shellfish poisoning (PSP), caused by toxins including saxitoxin (STX), results from harmful algae; poisoning occurs when contaminated seafood is consumed. The mouse bioassay (MBA), the standard test method for detecting PSP, is being restricted in many countries because of concerns over its detection limit and animal welfare. An alternative to the MBA is the Neuro-2a cell-based assay. This study aimed to establish test conditions for the Neuro-2a assay, including cell density, culture conditions, and STX treatment conditions, suited to the domestic laboratory environment. As a result, the initial cell density was set to 40,000 cells/well and the incubation time to 24 hours. Additionally, the concentration of ouabain and veratridine (O/V) was set to 500/50 μM, at which most cells died. In this study, we identified eight concentrations of STX, ranging from 368 to 47,056 fg/μl, which produced an S-shaped dose-response curve when cells were treated together with O/V. Through an inter-laboratory variability comparison of the Neuro-2a assay, we established five Quality Control Criteria to verify the appropriateness of the experiments and six Data Criteria (top and bottom OD, EC50, EC20, Hill slope, and R² of the fitted curve) to determine the reliability of the experimental data. The Neuro-2a assay conducted under the established conditions showed an EC50 value of approximately 1,800~3,500 fg/μl. The intra- and inter-laboratory variability comparison showed that the coefficients of variation (CVs) for the Quality Control and Data values ranged from 1.98% to 29.15%, confirming the reproducibility of the experiments. This study presented Quality Control Criteria and Data Criteria to assess the appropriateness of the experiments and confirmed the excellent repeatability and reproducibility of the Neuro-2a assay.
To apply the Neuro-2a assay as an alternative method for detecting PSP in domestic seafood, it is essential to establish toxin extraction and quantification methods for seafood and to perform correlation analyses with the MBA and instrumental analysis methods.
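The EC50, EC20, and Hill slope criteria above refer to the standard four-parameter logistic (Hill) dose-response model. A minimal sketch follows; the parameter values are illustrative only (chosen inside the EC50 range reported above), not the paper's fitted values:

```python
# Four-parameter logistic (Hill) dose-response model, as commonly used for
# cytotoxicity curves like the Neuro-2a assay's. Parameter values below are
# illustrative only.

def hill(conc, bottom, top, ec50, slope):
    """Predicted response (e.g. optical density) at a given concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** slope)

def ec_f(ec50, slope, f):
    """Any ECf (e.g. EC20) follows analytically from EC50 and the Hill slope:
    ECf = EC50 * (f / (100 - f)) ** (1 / slope)."""
    return ec50 * (f / (100.0 - f)) ** (1.0 / slope)

ec50, slope = 2500.0, 1.2   # fg/ul, illustrative values
print(hill(ec50, 0.1, 1.0, ec50, slope))  # midpoint of the curve: ~0.55
print(ec_f(ec50, slope, 20))              # EC20, necessarily below EC50
```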

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon;Kim, Dong Sung;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.37-51
    • /
    • 2015
  • Due to the development of internet technology and the rapid increase of internet data, various studies are being actively conducted on how to use and analyze internet data for various purposes. In particular, a number of recent studies have applied text mining techniques to overcome the limitations of analyses restricted to structured data. Especially notable are studies on sentiment analysis, which scores opinions based on the polarity (positivity or negativity) of the vocabulary and sentences in documents. As part of this line of work, this study attempts to predict the ups and downs of companies' stock prices by performing sentiment analysis on internet news about those companies. A variety of news on companies is produced online by different economic agents, and it diffuses quickly and is accessed easily on the internet. So, based on the inefficient market hypothesis, we can expect that news about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, since companies operate in different areas of business, text analysis based on machine learning must take the characteristics of each company into account. In addition, since news carrying positive or negative information about certain companies can affect other companies or industry sectors in various ways, prediction must be performed at the level of each company's stock price. Therefore, this study attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. Accordingly, this study chose top companies in the KOSPI 200 as the subjects of analysis, and collected and analyzed two years of online news data on each company from a representative domestic search portal service, Naver.
In addition, considering that the meanings of words differ across economic subjects, we aimed to improve performance by building a lexicon for each individual company and applying it to the analysis. As a result, prediction accuracy differed by company, averaging 56%. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living' and 'consumer discretionary' showed relatively higher stock price prediction accuracy, while sectors such as 'information technology' and 'shipbuilding/transportation' had lower accuracy. Since only five representative companies were collected for each industry, it is somewhat difficult to generalize, but a difference in prediction accuracy across industry sectors could be confirmed. At the individual company level, companies such as 'Kangwon Land', 'KT&G' and 'SK Innovation' showed relatively high prediction accuracy, while companies such as 'Young Poong', 'LG', 'Samsung Life Insurance', and 'Doosan' had prediction accuracy below 50%. In this paper, we predicted the stock price performance of individual companies from online news information using pre-built company-specific vocabularies, aiming to improve stock price prediction performance. Based on this, future work can seek to increase prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionary.
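The per-company lexicon approach described above can be illustrated with a minimal sketch; the lexicon entries and the decision threshold here are invented for illustration and are not taken from the study:

```python
# Minimal sketch of lexicon-based sentiment scoring for up/down prediction,
# in the spirit of the per-company sentiment dictionaries described above.
# The tiny lexicon and the decision threshold are illustrative only.

company_lexicon = {            # hypothetical per-company polarity weights
    "surge": 1.0, "record": 0.8, "growth": 0.6,
    "lawsuit": -0.9, "recall": -0.8, "loss": -0.7,
}

def sentiment_score(tokens, lexicon):
    """Sum the polarity weights of the tokens found in the lexicon."""
    return sum(lexicon.get(t, 0.0) for t in tokens)

def predict_direction(news_tokens, lexicon, threshold=0.0):
    """Predict 'up' if aggregate news sentiment exceeds the threshold."""
    return "up" if sentiment_score(news_tokens, lexicon) > threshold else "down"

news = ["quarterly", "growth", "hits", "record"]
print(predict_direction(news, company_lexicon))  # -> up  (0.6 + 0.8 > 0)
```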

Interpreting Bounded Rationality in Business and Industrial Marketing Contexts: Executive Training Case Studies (阐述工商业背景下的有限合理性: 执行官培训案例研究)

  • Woodside, Arch G.;Lai, Wen-Hsiang;Kim, Kyung-Hoon;Jung, Deuk-Keyo
    • Journal of Global Scholars of Marketing Science
    • /
    • v.19 no.3
    • /
    • pp.49-61
    • /
    • 2009
  • This article provides training exercises for executives in interpreting subroutine maps of executives' thinking while processing business and industrial marketing problems and opportunities. This study builds on premises that Schank proposes about learning and teaching, including (1) learning occurs by experiencing, and the best instruction offers learners opportunities to distill their knowledge and skills from interactive stories in the form of goal-based scenarios, team projects, and understanding stories from experts; and (2) telling does not lead to learning because learning requires action; training environments should emphasize active engagement with stories, cases, and projects. Each training case study includes executive exposure to decision system analysis (DSA). The training case requires the executive to write a "Briefing Report" of a DSA map. Instructions to the executive trainee in writing the briefing report include coverage of (1) the details of the essence of the DSA map and (2) a statement of the warnings and opportunities that the executive map reader interprets within the DSA map. The maximum length for a briefing report is 500 words, an arbitrary rule that works well in executive training programs. Following this introduction, section two of the article briefly summarizes relevant literature on how humans think within contexts in response to problems and opportunities. Section three illustrates the creation and interpretation of DSA maps using a training exercise in pricing a chemical product for different OEM (original equipment manufacturer) customers. Section four presents a training exercise in pricing decisions by a petroleum manufacturing firm. Section five presents a training exercise in marketing strategies by an office furniture distributor along with buying strategies by business customers.
Each of the three training exercises is based on research into the information processing and decision making of executives operating in marketing contexts. Section six concludes the article with suggestions for the use of this training case and for developing additional training cases for honing executives' decision-making skills. Todd and Gigerenzer propose that humans use simple heuristics because they enable adaptive behavior by exploiting the structure of information in natural decision environments. "Simplicity is a virtue, rather than a curse." Bounded rationality theorists emphasize the centrality of Simon's proposition, "Human rational behavior is shaped by a scissors whose blades are the structure of the task environments and the computational capabilities of the actor." Gigerenzer's view is relevant to Simon's environmental blade and to the environmental structures in the three cases in this article: "The term environment, here, does not refer to a description of the total physical and biological environment, but only to that part important to an organism, given its needs and goals." The present article directs attention to research that combines reports on the structure of task environments with the use of the adaptive toolbox heuristics of actors. The DSA mapping approach here concerns the match between strategy and an environment: the development and understanding of ecological rationality theory. Aspiration adaptation theory is central to this approach. Aspiration adaptation theory models decision making as a multi-goal problem without aggregation of the goals into a complete preference order over all decision alternatives. The three case studies in this article permit the learner to apply propositions in aspiration level rules in reaching a decision. Aspiration adaptation takes the form of a sequence of adjustment steps. An adjustment step shifts the current aspiration level to a neighboring point on an aspiration grid by a change in only one goal variable.
An upward adjustment step is an increase and a downward adjustment step is a decrease of a goal variable. Creating and using aspiration adaptation levels is integral to bounded rationality theory. The present article increases understanding and expertise of both aspiration adaptation and bounded rationality theories by providing learner experiences and practice in using propositions in both theories. Practice in ranking CTSs and writing TOP gists from DSA maps serves to clarify and deepen Selten's view, "Clearly, aspiration adaptation must enter the picture as an integrated part of the search for a solution." The body of "direct research" by Mintzberg, Gladwin's ethnographic decision tree modeling, and Huff's work on mapping strategic thought are suggestions on where to look for research that considers both the structure of the environment and the computational capabilities of the actors making decisions in these environments. Such research on bounded rationality permits both further development of theory in how and why decisions are made in real life and the development of learning exercises in the use of heuristics occurring in natural environments. The exercises in the present article encourage learning skills and principles of using fast and frugal heuristics in contexts of their intended use. The exercises respond to Schank's wisdom, "In a deep sense, education isn't about knowledge or getting students to know what has happened. It is about getting them to feel what has happened. This is not easy to do. Education, as it is in schools today, is emotionless. This is a huge problem." The three cases and accompanying set of exercise questions adhere to Schank's view, "Processes are best taught by actually engaging in them, which can often mean, for mental processing, active discussion."


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order. Also, current search tools cannot retrieve, from the gigantic number of documents available, the documents related to a retrieved document. The most important problem for many current search systems is to increase the quality of search: to provide related documents and to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and to provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers do not make, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in a document. A document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of each document using this table and then integrate the graphs of the documents. Comparing the graph of the entire document set with the integrated graphs, we calculate the area of each document and use it to mark the relations among the documents. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, roughly 15% better than the baseline.
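The formal-concept machinery underlying the hierarchical structure described above can be sketched minimally as follows; the toy incidence table (documents vs. extracted subject-predicate features) is invented, and the paper's actual sentence-parsing step is omitted:

```python
# Minimal Formal Concept Analysis sketch: documents as objects, extracted
# (subject, predicate) pairs as attributes. The tiny incidence table is
# invented for illustration.
from itertools import combinations

docs = {   # document -> set of extracted (subject, predicate) features
    "d1": {("web", "grows"), ("search", "retrieves")},
    "d2": {("web", "grows"), ("index", "links")},
    "d3": {("search", "retrieves"), ("index", "links")},
}

def extent(feature_set):
    """Objects possessing all features in the set (the 'prime' operator)."""
    return {d for d, feats in docs.items() if feature_set <= feats}

def intent(doc_set):
    """Features common to all documents in the set."""
    sets = [docs[d] for d in doc_set]
    return set.intersection(*sets) if sets else set()

# Enumerate formal concepts by closing the intent of every object subset;
# (extent(intent(S)), intent(S)) is always a valid concept.
concepts = set()
for r in range(len(docs) + 1):
    for combo in combinations(docs, r):
        feats = intent(set(combo))
        objs = extent(feats)
        concepts.add((frozenset(objs), frozenset(feats)))

# The concepts, ordered by extent size, form the hierarchy (concept lattice).
for objs, feats in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(objs), sorted(feats))
```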

Shielding for Critical Organs and Radiation Exposure Dose Distribution in Patients with High Energy Radiotherapy (고 에너지 방사선치료에서 환자의 피폭선량 분포와 생식선의 차폐)

  • Chu, Sung-Sil;Suh, Chang-Ok;Kim, Gwi-Eon
    • Journal of Radiation Protection and Research
    • /
    • v.27 no.1
    • /
    • pp.1-10
    • /
    • 2002
  • High energy photon beams from medical linear accelerators produce large amounts of scattered radiation from various components of the treatment head, the collimator, and the walls or objects in the treatment room, including the patient. This scattered radiation provides no therapeutic dose and is considered a hazard from the radiation safety perspective: the scattered dose of a therapeutic high energy beam contributes significant unwanted dose to the patient. The ICRP takes the position that a dose of 500 mGy may cause abortion at any stage of pregnancy and that radiation detriment to the fetus includes a risk of mental retardation, with a possible threshold in the dose-response relationship around 100 mGy for the gestational period. The ICRP principle of "as low as reasonably achievable" (ALARA) was recommended for occupational protection based on the linear no-threshold dose-response hypothesis for cancer induction. We suggest that this ALARA principle be applied to the fetus and testicle in therapeutic treatment. Radiation dose outside a photon treatment field is mostly due to scattered photons. This scattered dose is a function of the distance from the beam edge, the treatment geometry, the primary photon energy, and the depth in the patient. The need for effective shielding of the fetus and testicle is reinforced when young patients are treated with external beam radiation therapy, so shielding designed to reduce the scattered photon dose to normal organs has to be considered. Irradiation was performed in a phantom using high energy photon beams produced by a Varian 2100C/D medical linear accelerator (Varian Oncology Systems, Palo Alto, CA) located at the Yonsei Cancer Center. The composite phantom used comprised a commercially available anthropomorphic Rando phantom (Phantom Laboratory Inc., Salem, NY) and a rectangular solid polystyrene phantom of dimensions 30 cm × 30 cm × 20 cm.
The anthropomorphic Rando phantom represents an average man made from tissue-equivalent materials, transected into 36 transverse slices of 2.5 cm thickness. Photon dose was measured using a Capintec PR-06C ionization chamber with a Capintec 192 electrometer (Capintec Inc., Ramsey, NJ), TLD (Victoreen 5000, LiF), and film dosimetry (X-Omat V, Kodak). For the fetus, the dosimeter was placed at a depth of 10 cm in the phantom at 100 cm source-to-axis distance, located centrally 15 cm from the inferior edge of the 30 cm × 30 cm x-ray beam irradiating the Rando phantom chest wall. An acrylic bridge of size 40 cm × 40 cm with a clear space of about 20 cm was fabricated and placed on top of the rectangular polystyrene phantom representing the abdomen of the patient. Lead pots for testicle shielding were made in various shapes, sizes, and thicknesses, with a supporting stand. The scattered photon dose with and without shielding was measured at the representative positions of the fetus and testicle. Measurement of the radiation dose scattered outside the field to critical organs, such as the fetus position and testicle region, from chest or pelvic irradiation with a large field of a high energy radiation beam was performed using an ionization chamber and film dosimetry. The scattered doses outside the field were measured at 5-10% of the maximum in-field dose and decreased exponentially with distance from the field margins. The scattered photon dose received by the fetus and testicle from thorax field irradiation was measured at about 1 mGy per Gy of photon treatment dose. Shielding constructions to reduce this scattered dose were investigated using lead sheets and blocks. A lead pot shield for the testicle reduced the scattered dose to under 10 mGy when a photon beam of 60 Gy was delivered to the abdomen region. The scattered photon dose was reduced when the lead shield was used, and 2-3 mm lead sheets reduced the skin dose to under 80% and removed almost all electron contamination.
These results indicate that it is possible to improve shielding to reduce scattered photons to the fetus and testicle when young patients are treated with a high energy photon beam.
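The exponential falloff of scattered dose from the field margin reported above can be captured by a simple model; the decay length used here is an assumed illustrative value, not a measurement from the study:

```python
import math

# Simple exponential model for photon scatter dose outside the treatment
# field, matching the qualitative finding above: 5-10% of the in-field
# maximum at the field margin, falling off exponentially with distance.
# EDGE_FRACTION and LAMBDA_CM are assumed illustrative values.

EDGE_FRACTION = 0.08   # fraction of the maximum in-field dose at the field edge
LAMBDA_CM = 6.0        # assumed exponential decay length (cm)

def scatter_dose(max_dose_gy, distance_cm):
    """Estimated scattered dose at a given distance outside the field edge."""
    return max_dose_gy * EDGE_FRACTION * math.exp(-distance_cm / LAMBDA_CM)

# e.g. a 60 Gy prescription, point 15 cm from the field edge (fetus/testicle):
print(round(scatter_dose(60.0, 15.0), 3))
```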

Publication Report of the Asian-Australasian Journal of Animal Sciences over its History of 15 Years - A Review

  • Han, In K.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.15 no.1
    • /
    • pp.124-136
    • /
    • 2002
  • As an official journal of the Asian-Australasian Association of Animal Production Societies (AAAP), the Asian-Australasian Journal of Animal Sciences (AJAS) was born in February 1987, and the first issue (Volume 1, Number 1) was published in March 1988 under the editorship of Professor In K. Han (Korea). By the end of 2001, a total of 84 issues in 14 volumes and 1,761 papers in 11,462 pages had been published. In addition to these 14 volumes, a special issue entitled "Recent Advances in Animal Nutrition" (April, 2000) and 3 supplements entitled "Proceedings of the 9th AAAP Animal Science Congress" (July, 2000) were also published. Publication frequency steadily increased from 4 issues in 1988, to 6 issues in 1997, and to 12 issues in 2000. The total number of pages per volume and the number of original or review papers published also increased. Some significant milestones in the history of the AJAS include that (1) it became a Science Citation Index (SCI) journal in 1997, (2) the impact factor of the journal improved from 0.257 in 1999 to 0.446 in 2000, (3) it became a monthly journal (12 issues per volume) in 2000, (4) it adopted an English editing system in 1999, and (5) it has been covered in "Current Contents/Agriculture, Biology and Environmental Science" since 2000. The AJAS has 842 individual or institutional subscribers. Annual subscription fees of US$ 50 (Category B) or US$ 70 (Category A) for individuals and US$ 70 (Category B) or US$ 120 (Category A) for institutions are much less than the actual production cost of US$ 130. A list of the 1,761 papers published in the AJAS, organized by subject area, may be found on the AJAS homepage (http://www.ajas.snu.ac.kr), and a very well prepared "Editorial Policy with Guide for Authors" is available in the Appendix of this paper. With regard to the submission status of manuscripts from AAAP member countries, India (235), Korea (235) and Japan (198) have submitted the most manuscripts.
On the other hand, Mongolia, Nepal, and Papua New Guinea have never submitted any articles. The average time required from submission of a manuscript to printing in the AJAS has been reduced from 11 months in 1997-2000 to 7.8 months in 2001. The average rejection rate of manuscripts was 35.3%, slightly higher than most leading animal science journals. The total number of scientific papers published in the AJAS by AAAP member countries during the 14-year period (1988-2001) was 1,333 papers (75.7%), and that by non-AAAP member countries was 428 papers (24.3%). Japanese animal scientists have published the largest number of papers (397), followed by Korea (275), India (160), Bangladesh (111), Pakistan (85), Australia (71), Malaysia (59), China (53), Thailand (53), and Indonesia (34). It is regrettable that the Philippines (15), Vietnam (10), New Zealand (8), Nepal (2), Mongolia (0) and Papua New Guinea (0) have not actively participated in publishing papers in the AJAS. It is also interesting to note that the top 5 countries (Bangladesh, India, Japan, Korea and Pakistan) have published 1,028 papers in total, accounting for 77% of the papers published by AAAP animal scientists from Vol. 1 to 14 of the AJAS. The largest number of papers was published in the ruminant nutrition section (591 papers, 44.3%), followed by the non-ruminant nutrition section (251 papers, 18.8%), the animal reproduction section (153 papers, 11.5%) and the animal breeding section (115 papers, 8.6%). The largest portion of AJAS manuscripts was reviewed by Korean editors (44.3%), followed by Japanese editors (18.1%), Australian editors (6.0%) and Chinese editors (5.6%). Editors from the rest of the AAAP member countries have reviewed slightly less than 5% of the total AJAS manuscripts.
It was regrettably noticed that editorial members representing Nepal (66.7%), Mongolia (50.0%), India (35.7%), Pakistan (25.0%), Papua New Guinea (25.0%), Malaysia (22.8%) and New Zealand (21.5%) failed to return many of the manuscripts they were asked to review by the Editor-in-Chief. Financial records show that Korea has contributed the largest portion of production costs (68.5%), followed by Japan (17.3%), China (8.3%), and Australia (3.5%). It was found that 6 AAAP member countries have contributed less than 1% of the total production costs (Bangladesh, India, Indonesia, Malaysia, Papua New Guinea and Thailand), and other AAAP member countries (Mongolia, Nepal, Pakistan, the Philippines and Vietnam) have never provided any financial contribution in the form of subscriptions, page charges or reprints. It should be pointed out that most AAAP member countries have published more papers than their financial input, with the exceptions of Korea and China. For example, Japan has published 29.8% of the total papers published in the AJAS by AAAP member countries, but has contributed only 17.3% of total income. Similar trends can also be found for Australia, Bangladesh, India, Indonesia, Malaysia and Thailand. A total of 12 Asian young animal scientists (under 40 years of age) have been awarded the AJAS-Purina Outstanding Research Award, which was initiated in 1990 with a donation of US$ 2,000-3,000 by Mr. K. Y. Kim, President of Agribrands Purina Korea Inc.
In order to improve the impact factor (citation frequency) and the financial structure of the AJAS, (1) submission of more manuscripts of good quality should be encouraged, (2) subscription rate of all AAAP member countries, especially Category B member countries should be dramatically increased, (3) a page charge policy and reprint ordering system should be applied to all AAAP member countries, and (4) all AAAP countries, especially Category A member countries should share more of the financial burden (advertisement revenue or support from public or private sector).

How Enduring Product Involvement and Perceived Risk Affect Consumers' Online Merchant Selection Process: The 'Required Trust Level' Perspective (지속적 관여도 및 인지된 위험이 소비자의 온라인 상인선택 프로세스에 미치는 영향에 관한 연구: 요구신뢰 수준 개념을 중심으로)

  • Hong, Il-Yoo B.;Lee, Jung-Min;Cho, Hwi-Hyung
    • Asia pacific journal of information systems
    • /
    • v.22 no.1
    • /
    • pp.29-52
    • /
    • 2012
  • Consumers differ in the way they make a purchase. An audiophile would willingly make a bold yet serious decision to buy a top-of-the-line home theater system, while showing no interest in replacing his two-decade-old, shabby car. On the contrary, an automobile enthusiast wouldn't mind spending forty thousand dollars on a new Jaguar convertible, yet cares little about his junky component stereo system. It is product involvement that helps us explain such differences among individuals in purchase style. Product involvement refers to the extent to which a product is perceived to be important to a consumer (Zaichkowsky, 2001). Product involvement is an important factor that strongly influences a consumer's purchase decision-making process, and it has thus been of prime interest to consumer behavior researchers. Furthermore, researchers have found that involvement is closely related to perceived risk (Dholakia, 2001). While abundant research exists addressing how product involvement relates to overall perceived risk, little attention has been paid to the relationship between involvement and different types of perceived risk in an electronic commerce setting. Given that perceived risk can be a substantial barrier to online purchasing (Jarvenpaa, 2000), research addressing this issue will offer useful implications regarding the specific types of perceived risk an online firm should focus on mitigating if it is to increase sales to their fullest potential. Meanwhile, past research has focused on consumer responses such as information search and dissemination as consequences of involvement, neglecting other behavioral responses like online merchant selection. For example, will a consumer seriously considering the purchase of a pricey Guzzi bag perceive a great degree of risk associated with online buying and therefore choose to buy it from a digital storefront rather than from an online marketplace to mitigate that risk?
Will a consumer require greater trust on the part of the online merchant when the perceived risk of online buying is high? We intend to find answers to these research questions through an empirical study. This paper explores the impact of enduring product involvement and perceived risks on the required trust level, and further on online merchant choice. For the purpose of the research, five types or components of perceived risk are taken into consideration: financial, performance, delivery, psychological, and social risks. A research model was built around the constructs under consideration, and 12 hypotheses were developed based on the research model to examine the relationships between enduring involvement and the five components of perceived risk, between the five components of perceived risk and the required trust level, between enduring involvement and the required trust level, and finally between the required trust level and preference toward an e-tailer. To attain our research objectives, we conducted an empirical analysis consisting of two phases of data collection: a pilot test and a main survey. The pilot test was conducted with 25 college students to ensure that the questionnaire items were clear and straightforward. The main survey was then conducted with 295 college students at a major university over nine days, between December 13, 2010 and December 21, 2010. The measures employed to test the model included eight constructs: (1) enduring involvement, (2) financial risk, (3) performance risk, (4) delivery risk, (5) psychological risk, (6) social risk, (7) required trust level, and (8) preference toward an e-tailer. The statistical package SPSS 17.0 was used to test the internal consistency among the items within the individual measures. Based on the Cronbach's α coefficients of the individual measures, the reliability of all the variables is supported.
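The internal-consistency check mentioned above can be illustrated outside SPSS as well; the sketch below computes Cronbach's α from its standard formula. The item scores here are hypothetical, not the study's data:

```python
# Cronbach's alpha: internal consistency of a multi-item measure.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical 5-point Likert responses: 3 items, 5 respondents.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
print(round(cronbach_alpha(scores), 3))  # → 0.864
```

An α above roughly 0.7 is the conventional threshold for acceptable reliability of a multi-item construct.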
Meanwhile, the Amos 18.0 package was employed to perform a confirmatory factor analysis designed to assess the unidimensionality of the measures. The goodness of fit for the measurement model was satisfactory. Unidimensionality was tested using convergent, discriminant, and nomological validity, and the statistical evidence showed that all three types of validity were satisfied. The structural equation modeling technique was then used to analyze the individual paths along the relationships among the research constructs. The results indicated that enduring involvement has significant positive relationships with all five components of perceived risk, while only performance risk is significantly related to the trust level required by consumers for purchase. It can be inferred from the findings that product performance problems are most likely to occur when a merchant behaves in an opportunistic manner. Positive relationships were also found between involvement and the required trust level and between the required trust level and online merchant choice. Enduring involvement is concerned with the pleasure a consumer derives from a product class and/or with the desire for knowledge about the product class, and thus is likely to motivate the consumer to look for ways of mitigating perceived risk by requiring a higher level of trust on the part of the online merchant. Likewise, a consumer requiring a high level of trust in the merchant will choose a digital storefront rather than an e-marketplace, since a digital storefront is believed to be more trustworthy than an e-marketplace, as it fulfills orders itself rather than acting as an intermediary. The findings of the present research provide both academic and practical implications. The first academic implication is that enduring product involvement is a strong motivator of consumer responses, especially the selection of a merchant, in the context of electronic shopping.
Secondly, academicians are advised to pay attention to the finding that an individual component or type of perceived risk can be used as an important research construct, since doing so allows one to pinpoint the specific types of risk that are influenced by antecedents or that influence consequents. Meanwhile, our research provides implications useful for online merchants (both digital storefronts and e-marketplaces). Merchants may develop strategies to attract consumers by managing the perceived performance risk involved in purchase decisions, since it was found to have a significant positive relationship with the level of trust a consumer requires of the merchant. One way to manage performance risk would be to thoroughly examine the product before shipping to ensure that it has no deficiencies or flaws. Secondly, digital storefronts are advised to focus on symbolic goods (e.g., cars, cell phones, fashion outfits, and handbags), in which consumers are relatively more involved, whereas e-marketplaces should put their emphasis on non-symbolic goods (e.g., drinks, books, MP3 players, and bike accessories).


Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides accurate, fast and efficient automatic gasometer reading. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, an image contains many types of characters, and optical character recognition technology extracts all character information in the image. But some applications need to ignore characters that are not of interest and focus only on specific types of characters. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users. Character strings that are not of interest, such as the device type, manufacturer, manufacturing date, specification, etc., are not valuable information to the application. Thus, the application has to analyze only the point-of-interest regions and specific types of characters to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest regions for selective character information extraction. We built three neural networks for the application system.
The first is a convolutional neural network that detects the point-of-interest regions containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a point-of-interest region into spatially sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the spatially sequential information into character strings using time-series analysis, mapping feature vectors to characters. In this research, the point-of-interest character strings are the device ID and the gas usage amount. The device ID consists of 12 Arabic numeral characters, and the gas usage amount of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with an Intel Xeon E5-2686 v4 CPU and an NVIDIA Tesla V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three types of deep neural networks that conduct the character recognition process, and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. When a request from the master process appears in the input queue, the slave process converts the image into a device ID character string, a gas usage amount character string and the position information of the strings, returns this information to an output queue, and switches to idle mode to poll the input queue again. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation and testing of the three deep neural networks.
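The master-slave queue flow described above can be sketched as follows. This is a simplified single-machine illustration using Python threads: the recognition step is a stub standing in for the three deep networks, and all names and values are hypothetical (the real system runs the slave on a GPU in AWS):

```python
# Sketch of the master/slave architecture: the master pushes reading requests
# onto a FIFO input queue; a slave polls it, "recognizes" the image, and
# returns results through a FIFO output queue.
import queue
import threading

input_q = queue.Queue()   # FIFO: master -> slave
output_q = queue.Queue()  # FIFO: slave -> master

def recognize(image):
    # Stub for ROI detection + CRNN recognition of device ID and usage amount.
    return {"device_id": "123456789012", "usage": "0421"}

def slave():
    while True:
        req_id, image = input_q.get()   # blocks until a request arrives
        if req_id is None:              # sentinel: shut down
            break
        output_q.put((req_id, recognize(image)))

worker = threading.Thread(target=slave)
worker.start()

# Master side: enqueue two requests from "mobile devices", collect the results.
input_q.put((1, b"raw-image-bytes"))
input_q.put((2, b"raw-image-bytes"))
results = {rid: res for rid, res in (output_q.get() for _ in range(2))}
input_q.put((None, None))               # stop the worker
worker.join()
print(results[1]["device_id"])          # → 123456789012
```

In production, several slave processes could poll the same queue in parallel, which is what lets the system absorb the stated load of about 700,000 requests per day.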
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets, respectively, for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale and slant). Normal data are clean images; noise means images with a noise signal; reflex means images with light reflection in the gasometer region; scale means images with a small object size due to long-distance capture; and slant means images that are not horizontally level. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
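The per-epoch 8:2 re-split described above can be sketched as follows; the file names and the per-epoch seeding scheme are hypothetical illustrations, not the authors' implementation:

```python
# Fresh 8:2 train/validation split of the same pool, redrawn every epoch.
import random

def split_8_2(samples, seed):
    rng = random.Random(seed)      # a different seed each epoch -> a new split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * 0.8)
    return shuffled[:cut], shuffled[cut:]

images = [f"img_{i:05d}.png" for i in range(22985)]  # hypothetical file names
for epoch in range(3):
    train, val = split_8_2(images, seed=epoch)
    assert len(train) + len(val) == 22985            # nothing lost or duplicated
print(len(train), len(val))  # → 18388 4597
```

Redrawing the split each epoch means every image eventually contributes to both training and validation, at the cost of validation scores not being computed on a fixed held-out set; the 4,135 test images remain untouched for the final evaluation.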