• Title/Summary/Keyword: digital information gap

Search Results: 208

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format IF A and B THEN Do E using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip into a VMEbus environment, and high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach, the approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. These instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a few dedicated programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: the ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost. (An illustrative software sketch of the max-min inference described above follows this entry.)

TABLE I. INFERENCE TIME USING 51 RULES

                      MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences   125 s                  49 s                        0.0038 s
    1 inference       20.8 ms                8.2 ms                      6.4 µs
    FLIPS             48                     122                         156,250

  • PDF
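The abstract above describes max-min compositional inference with Mamdani implication, on-chip fuzzification by table lookup, and centroid defuzzification. The following Python sketch illustrates that pipeline in software for the chip's simpler rule format (IF A and B THEN Do E); the membership functions, rule contents, and parameter values are illustrative assumptions, not the chip's actual rule base, though the 64-element fuzzy set representation matches the abstract.

```python
import numpy as np

# Universe discretized into 64 elements, as in the ASIC described above.
N = 64
universe = np.linspace(0.0, 1.0, N)

def tri(u, a, b, c):
    """Triangular membership function on universe u (illustrative only)."""
    return np.clip(np.minimum((u - a) / (b - a + 1e-9), (c - u) / (c - b + 1e-9)), 0.0, 1.0)

# Hypothetical rule base in the simpler format: IF A and B THEN Do E.
rules = [
    {"A": tri(universe, 0.0, 0.2, 0.5), "B": tri(universe, 0.0, 0.3, 0.6), "E": tri(universe, 0.0, 0.2, 0.4)},
    {"A": tri(universe, 0.3, 0.6, 0.9), "B": tri(universe, 0.4, 0.7, 1.0), "E": tri(universe, 0.5, 0.8, 1.0)},
]

def infer(x_a, x_b):
    """Max-min inference with Mamdani implication and centroid defuzzification."""
    aggregated = np.zeros(N)
    for r in rules:
        # Fuzzification by table lookup: membership at the nearest universe index.
        mu_a = r["A"][np.abs(universe - x_a).argmin()]
        mu_b = r["B"][np.abs(universe - x_b).argmin()]
        strength = min(mu_a, mu_b)                    # fuzzy AND (min)
        clipped = np.minimum(strength, r["E"])        # Mamdani implication (clip)
        aggregated = np.maximum(aggregated, clipped)  # max composition across rules
    # Centroid defuzzification, as done on-chip in the UNC/MCNC design.
    return (aggregated * universe).sum() / (aggregated.sum() + 1e-9)

print(infer(0.25, 0.35))
```

For reference, the quoted UNC/MCNC throughput is consistent with its pipeline rate: at a 10 MHz clock with a new inference initiated every 64 cycles, 10,000,000 / 64 = 156,250 FLIPS, the ASIC value in Table I.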

Comparison of Three Kinds of Methods on Estimation of Forest Carbon Stocks Distribution Using National Forest Inventory DB and Forest Type Map (국가산림자원조사 DB와 임상도를 이용한 산림탄소저장량 공간분포 추정방법 비교)

  • Kim, Kyoung-Min;Roh, Young-Hee;Kim, Eun-Sook
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.4
    • /
    • pp.69-85
    • /
    • 2014
  • Carbon stocks at NFI plots can be estimated accurately using field survey information; however, accurately estimating carbon stocks at other, unsurveyed sites is very difficult. In order to fill this gap, various kinds of spatial information can be used as ancillary data. In South Korea, there is a 1:5,000 forest type map produced by digital aerial photo interpretation and field survey. Because this map contains very detailed forest information, it can be used as high-quality spatial data for estimating carbon stocks. In this study, we compared three upscaling methods based on the 1:5,000 forest type map and 5th national forest inventory data: map algebra (method 1), regression kriging (RK, method 2), and geographically weighted regression (GWR, method 3) were applied to estimate forest carbon stocks in Chungcheongnam-do and Daejeon metropolitan city. The ranges of carbon stocks from method 2 (1.39~138.80 tonC/ha) and method 3 (1.28~149.98 tonC/ha) were more similar to that of the previous method (1.56~156.40 tonC/ha) than that of method 1 (0.00~93.37 tonC/ha). This result shows that RK and GWR, which consider spatial autocorrelation, can capture the spatial heterogeneity of carbon stocks. We carried out paired t-tests on carbon stock data from 186 sample points to assess estimation accuracy. The average carbon stocks from method 2 and the field survey method were not significantly different at p=0.05, and method 2 showed the lowest RMSE. Therefore, the regression kriging method is useful for representing spatial variations in the carbon stock distribution in rugged terrain and complex forest stands.
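The entry above compares map algebra, regression kriging (RK), and geographically weighted regression for upscaling plot-level carbon stocks. The sketch below illustrates only the general RK idea: regress carbon stock on ancillary covariates (such as attributes from the forest type map), interpolate the regression residuals over space, and add the two surfaces. scikit-learn's GaussianProcessRegressor is used as a stand-in for ordinary kriging of the residuals; the data, covariates, and kernel are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical NFI plots: coordinates, forest-type covariates, measured carbon (tonC/ha).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))      # plot x, y (km)
covariates = rng.uniform(0, 1, size=(200, 3))    # e.g., age class, crown density, species group
carbon = 30 + 80 * covariates[:, 0] + rng.normal(0, 5, 200)

# Step 1: trend model from ancillary covariates (the "regression" part of RK).
trend = LinearRegression().fit(covariates, carbon)
residuals = carbon - trend.predict(covariates)

# Step 2: spatially interpolate the residuals (kriging stand-in).
gp = GaussianProcessRegressor(kernel=RBF(length_scale=20.0) + WhiteKernel(1.0), normalize_y=True)
gp.fit(coords, residuals)

# Step 3: estimate at an unsurveyed site = trend prediction + interpolated residual.
new_coord = np.array([[55.0, 42.0]])
new_covariates = np.array([[0.6, 0.4, 0.2]])
estimate = trend.predict(new_covariates) + gp.predict(new_coord)
print(estimate[0])
```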

The Comparative Analysis of Exposure Conditions between F/S and C/R System for an Ideal Image in Simple Abdomen (복부 단순촬영의 이상적 영상구현을 위한 F. S system과 C.R system의 촬영조건 비교분석)

  • Son, Sang-Hyuk;Song, Young-Geun;Kim, Je-Bong
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.9 no.1
    • /
    • pp.37-43
    • /
    • 2007
  • 1. Purpose : This study presents effective exposure conditions for acquiring the best image of the simple abdomen in the Film-Screen (F.S) system and the Computed Radiography (C.R) system. 2. Method : In the F.S system, with the tube voltage fixed at 70 kVp, images of a patient's simple abdomen were taken under different mAs settings. Among these images, the best one was chosen by radiologists and radiological technologists. In the C.R system, the best image of the same patient was acquired by the same method as in the F.S system. The characteristic curves of the F.S and C.R systems were then analyzed. 3. Results : In the F.S system, the best exposure condition for the simple abdomen was 70 kVp and 20 mAs. In the C.R system, with the tube voltage fixed at 70 kVp, the image densities of organs such as the liver, kidney, spleen, psoas muscle, lumbar spine body, and iliac crest were almost the same despite the different settings (3.2 mAs, 8 mAs, 12 mAs, 16 mAs, and 20 mAs). However, when the exposure was above or below 12 mAs, the boundary between the abdominal wall and the directly exposed region became blurred because the density gap decreased. In the C.R system, as the mAs decreased, the quantum mottle artifact increased. 4. Conclusion : This study shows that the exposure in the C.R system can be reduced by about 40% compared with the F.S system (the arithmetic behind this figure is sketched after this entry). We conclude that when exposure conditions are set in a C.R environment, after analyzing equipment characteristics such as the image processing system (EDR : Exposure Data Recognition processing) and PACS, a high-quality image with maximum information can be acquired at a minimum exposure dose.

  • PDF
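The conclusion above reports that C.R exposure can be reduced by about 40% relative to F.S. Assuming the figure refers to the mAs reduction from the F.S optimum (20 mAs) to the C.R threshold (12 mAs) at a fixed 70 kVp, the arithmetic is as follows; which quantities enter the percentage is an interpretation, not stated explicitly in the abstract.

```python
# Hedged check: assumes the quoted ~40% reduction refers to the tube current-time
# product (mAs) at a fixed 70 kVp, comparing the F.S optimum with the C.R setting.
fs_mas = 20.0   # best F.S condition (mAs)
cr_mas = 12.0   # C.R setting below/above which blurring appeared (mAs)
reduction = (fs_mas - cr_mas) / fs_mas * 100.0
print(f"Relative mAs reduction: {reduction:.0f}%")   # -> 40%
```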

The Usefulness of Delayed Contrast-Enhanced FLAIR Images in Patients with Brain Metastases (뇌전이 환자의 조영 증강 후 지연 FLAIR 영상의 유용성)

  • Byun, Jae-Hu;Park, Myung-Hwan;Lee, Jin-Wan
    • Korean Journal of Digital Imaging in Medicine
    • /
    • v.16 no.1
    • /
    • pp.13-19
    • /
    • 2014
  • Purpose: FLAIR imaging is useful for the diagnosis of various brain diseases, including ischemic cerebrovascular disease, brain tumors, and infections; however, the border between a brain metastasis lesion and the surrounding edema may not be clear. Therefore, this study investigates the practical benefit of delayed imaging by comparing images of patients with brain metastasis acquired before contrast enhancement and 10 minutes after contrast enhancement. Materials and methods: Of the 92 patients with suspected brain metastases who underwent brain MRI at our hospital between June and December 2013, 37 were confirmed cases. After excluding 13 patients without the target images, 3 with motion artifacts, and 3 with incorrect measurement positions, 18 patients (male: 11, female: 7, mean age: 60 years) were included. The scanner was a 3.0T MR system (ACHIEVA Release, Philips, Best, the Netherlands) with an 8-channel SENSE head coil. The T2 FLAIR parameters were TR 11000 ms, TE 125 ms, TI 2800 ms, slice thickness 5 mm, gap 5 mm, and 21 slices; the 3D FFE parameters were TR 8.1 ms, TE 3.7 ms, and 240 slices. The experiment was conducted by acquiring FLAIR before contrast enhancement (hereafter Pre FLAIR), acquiring the 3D FFE CE five minutes after contrast enhancement and recomposing the images in an axial plane with slice thickness 3 mm and gap 0 mm (hereafter MPR TRA CE), and acquiring FLAIR 10 minutes after contrast enhancement (hereafter Post FLAIR); a retrospective study was then conducted using Pi-View. Using MRIcro on the images of patients with a confirmed diagnosis, the CNR and SNR of the lesion site and a lesion-free site were compared across the pre-contrast, post-contrast, and MPR TRA CE images and analyzed with a one-way analysis of variance. Results: The CNR was 34.35 for Pre FLAIR, 60.13 for Post FLAIR, and 23.77 for MPR TRA CE, a significant difference (p<0.050). Post-hoc analysis showed a difference in CNR between Pre FLAIR and Post FLAIR (p<0.050) but no difference between Post FLAIR and MPR TRA CE (p>0.050), indicating that the contrast medium had an effect only between Pre FLAIR and Post FLAIR. The SNR of Pre FLAIR was 106.43 at the normal site and 140.79 at the lesion; of Post FLAIR, 107.79 at the normal site and 167.91 at the lesion; and of MPR TRA CE, 140.23 at the normal site and 183.19 at the lesion, a significant difference (p<0.050). Post-hoc analysis showed a difference in SNR between Pre FLAIR and Post FLAIR only at the lesion sites (p<0.050); there was no difference in SNR between Post FLAIR and MPR TRA CE at either the normal or the lesion site, indicating no effect from the contrast medium (p>0.050). Conclusions: This experiment shows that Post FLAIR has higher contrast than Pre FLAIR and a higher SNR for lesions; its CNR was also higher than that of MPR TRA CE, although the difference was not statistically significant. Contrast-enhanced 3D T1 imaging is frequently used at high field strengths, but because the signal includes contrast medium within flowing blood, its diagnostic accuracy can be reduced; it is believed that using Post FLAIR in combination can provide additional imaging information for the diagnosis of brain metastases. (A sketch of an ROI-based SNR/CNR calculation follows this entry.)

  • PDF
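The results above compare SNR and CNR across Pre FLAIR, Post FLAIR, and MPR TRA CE. The sketch below shows one common way such ROI-based metrics are computed (SNR as mean ROI signal over background noise SD; CNR as the absolute difference between lesion and normal ROI means over noise SD). The abstract does not state the exact formulas or ROI values used, so the definitions and numbers here are illustrative assumptions.

```python
import numpy as np

def snr(roi_signal, background_noise):
    """SNR as mean ROI signal over the standard deviation of background noise."""
    return np.mean(roi_signal) / np.std(background_noise)

def cnr(roi_lesion, roi_normal, background_noise):
    """CNR as the absolute mean difference between lesion and normal ROIs over noise SD."""
    return abs(np.mean(roi_lesion) - np.mean(roi_normal)) / np.std(background_noise)

# Hypothetical ROI pixel samples from a hypothetical FLAIR image.
rng = np.random.default_rng(1)
lesion = rng.normal(1700, 60, 200)    # lesion ROI
normal = rng.normal(1100, 60, 200)    # normal-appearing tissue ROI
noise = rng.normal(0, 10, 200)        # background (air) ROI

print(f"SNR(lesion) = {snr(lesion, noise):.1f}, CNR = {cnr(lesion, normal, noise):.1f}")
```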

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance firms' long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm competitive advantage. Discovery of promising technology depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on experts' opinion have been widely accepted for predicting the value of technologies. Whereas this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology by using patent information to overcome the limitations of the experts' opinion based approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of technology with uniform structure. Furthermore, it provides information that is not divulged in any other sources. Although the patent information based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations because prediction is made based on past patent information and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules: evaluation of how promising technologies are, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, the promise of technologies is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of technologies refers to their influence on future technology development and improvement, and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies, and represents the breadth of search underlying the technology. The fusion of technologies can be calculated per technology or per patent, so this study measures two fusion indexes: a fusion index per technology and a fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promise index for each technology. Applicability of the proposed methodology is tested using U.S. 
patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of the prediction produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes; however, for the other indexes, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. These unexpected results may be explained, in part, by the small number of patents: since this study only uses patent data in class G06F, the sample is relatively small, leading to learning that is insufficient for the complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study attempts to extend existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial intelligence network. It helps managers engaged in technology development planning, and policy makers who implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex technological forecasting field.
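The second module described above trains a backpropagation network that maps the five index values at earlier times to their values at time t. A minimal sketch of that setup, using scikit-learn's MLPRegressor (a feed-forward network trained by backpropagation) on simulated index series, is given below; the lag depth, network size, and data are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical yearly values of the five indexes (impact, fusion/tech, fusion/patent,
# diffusion/tech, diffusion/patent) for a set of technology classes.
rng = np.random.default_rng(42)
years, n_indexes, n_lags = 9, 5, 3
series = rng.uniform(0, 1, size=(200, years, n_indexes))   # 200 technology classes

# Build (input, output) pairs: indexes at t-3, t-2, t-1 -> indexes at t.
X, y = [], []
for tech in series:
    for t in range(n_lags, years):
        X.append(tech[t - n_lags:t].ravel())   # 3 lags x 5 indexes = 15 inputs
        y.append(tech[t])                      # 5 outputs at time t
X, y = np.array(X), np.array(y)

# Multi-output feed-forward network trained by backpropagation.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)
print("Mean absolute error:", np.abs(model.predict(X) - y).mean())
```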

The Mediating Role of Perceived Risk in the Relationships Between Enduring Product Involvement and Trust Expectation (지속적 제품관여도와 소비자 요구신뢰수준 간의 영향관계: 인지된 위험의 매개 역할에 대한 실증분석을 중심으로)

  • Hong, Ilyoo B.;Kim, Taeha;Cha, Hoon S.
    • Asia pacific journal of information systems
    • /
    • v.23 no.4
    • /
    • pp.103-128
    • /
    • 2013
  • When a consumer needs a product or service and multiple sellers are available online, the process of selecting a seller to buy from is complex, since it involves many behavioral dimensions that have to be taken into account. As part of this selection process, consumers may set a minimum trust expectation that can be used to screen out less trustworthy sellers. In previous research, the level of consumers' trust expectation has been anchored on two important factors: product involvement and perceived risk. Product involvement refers to the extent to which a consumer perceives a specific product as important; thus, higher product involvement may result in a higher trust expectation of sellers. On the other hand, related studies found that when consumers perceived a higher level of risk (e.g., credit card fraud risk), they set a higher trust expectation as well. While abundant research exists addressing the relationship between product involvement and perceived risk, little attention has been paid to an integrative view of the link between the two constructs and their impacts on trust expectation. The present paper is a step toward filling this research gap. The purpose of this paper is to understand the process by which a consumer chooses an online merchant by examining the relationships among product involvement, perceived risk, trust expectation, and intention to buy from an e-tailer. We specifically focus on the mediating role of perceived risk in the relationship between enduring product involvement and trust expectation; that is, we ask whether product involvement affects trust expectation directly, without mediation, or indirectly, mediated by perceived risk. The research model with four hypotheses was initially tested using data gathered from 635 respondents through an online survey. The structural equation modeling technique with partial least squares was used to validate the instrument and the proposed model. The results showed that three of the four hypotheses were supported. First, we found that the intention to buy from a digital storefront is positively and significantly influenced by the trust expectation, providing support for H4 (trust expectation ${\rightarrow}$ purchase intention). Second, perceived risk was found to be a strong predictor of trust expectation, supporting H2 as well (perceived risk ${\rightarrow}$ trust expectation). Third, we did not find any evidence of a direct influence of product involvement, so H3 was rejected (product involvement ${\rightarrow}$ trust expectation). Finally, we found a significant positive relationship between product involvement and perceived risk (H1: product involvement ${\rightarrow}$ perceived risk), which suggests the possibility of complete mediation by perceived risk in the relationship between enduring product involvement and trust expectation. As a result, we conducted an additional test for the mediation effect by comparing the original model with a revised model without the mediator variable of perceived risk. Indeed, we found a strong influence of product involvement on trust expectation (when the variable of perceived risk was intentionally eliminated) that was suppressed (i.e., mediated) by perceived risk in the original model. A Sobel test statistically confirmed the complete mediation effect. Results of this study offer the following key findings. 
First, enduring product involvement is positively related to perceived risk, implying that the more a consumer is enduringly involved with a given product, the greater the risk he or she is likely to perceive with regard to the online purchase of the product. Second, perceived risk is positively related to trust expectation: a consumer with great risk perceptions concerning an online purchase is likely to buy from a highly trustworthy online merchant, thereby mitigating potential risks. Finally, product involvement was found to have no direct influence on trust expectation; the relationship between the two constructs is indirect and mediated by perceived risk. This is perhaps an important theoretical integration of two separate streams of literature on product involvement and perceived risk. The present research also provides useful implications for practitioners as well as academicians. First, one implication for practicing managers of online retail stores is that they should invest in reducing consumers' perceived risk in order to lower the trust expectation and thus increase consumers' intention to purchase products or services. Second, an academic implication is that perceived risk mediates the relationship between enduring product involvement and trust expectation. Further research is needed to elaborate the theoretical relationships among the constructs under consideration.
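The mediation analysis above reports a Sobel test confirming that perceived risk completely mediates the link between enduring product involvement and trust expectation. The sketch below computes the standard Sobel z statistic, z = ab / sqrt(b^2*sa^2 + a^2*sb^2), from the two path estimates; the coefficient values used are hypothetical placeholders, since the abstract does not report them.

```python
import math
from scipy import stats

def sobel_test(a, se_a, b, se_b):
    """Sobel z for the indirect effect a*b.
    a: involvement -> perceived risk path; b: perceived risk -> trust expectation path."""
    z = (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    p = 2 * (1 - stats.norm.cdf(abs(z)))   # two-tailed p-value
    return z, p

# Hypothetical path estimates and standard errors (not the paper's coefficients).
z, p = sobel_test(a=0.42, se_a=0.05, b=0.37, se_b=0.06)
print(f"Sobel z = {z:.2f}, p = {p:.4f}")
```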

An Exploratory Study on the Competition Patterns Between Internet Sites in Korea (한국 인터넷사이트들의 산업별 경쟁유형에 대한 탐색적 연구)

  • Park, Yoonseo;Kim, Yongsik
    • Asia Marketing Journal
    • /
    • v.12 no.4
    • /
    • pp.79-111
    • /
    • 2011
  • The digital economy has grown rapidly, and the new business area called 'Internet business' has expanded dramatically over time. In Internet business, however, the market shares of individual companies fluctuate sharply. Thus, marketing managers who operate Internet sites closely observe the competitive structure of the Internet business market and carefully analyze competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline businesses in management style, because their business circumstances are entirely different from those of existing offline businesses. Therefore, much research is needed on what the features of Internet business are and how the management styles of Internet business companies should change. Most marketing literature related to Internet business has focused on individual business markets; in particular, many researchers have studied Internet portal sites and Internet shopping mall sites, the most common forms of Internet business. In contrast, this study looks at the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible not only to take a broader view of the overall e-business industry, but also to understand the differences in competition structure among Internet business markets. We used time-series data on consumers' Internet connection rates as the basic data to identify the competition patterns in Internet business markets. Specifically, the data for this research were obtained from an Internet ranking site, 'Fian'. The Internet business ranking data are based on the web surfing records of a pre-selected sample group, where double-counting of page views is controlled by a same-IP check. The ranking site offers several kinds of data that are very useful for comparing and analyzing competitive sites. The Fian site divides Internet business into 34 areas and offers daily market shares of the top five sites in each category. We collected daily market share data for the sites in each area from April 22, 2008 to August 5, 2008; some data errors were found, and data for 30 business areas were finally used for our research after data purification. This study performed several empirical analyses focusing on the market shares of each site to understand the competition among sites in Korean Internet business. To look into business fields with similar competitive structures in a statistically more precise way, we applied cluster analysis to the data. The research results are as follows. First, the leading sites in each area were classified into three groups based on the averages and standard deviations of their daily market shares. The first group includes the sites with the lowest market shares, which increase convenience for consumers by offering Internet sites as complementary services to existing offline services. The second group includes sites with a medium level of market share, whose users are limited to a specific small group. The third group includes sites with the highest market shares, which usually require online registration in advance and make switching to another site difficult. 
Second, we analyzed the second-place sites in each business area, because this helps us understand the competitive power of the strongest competitor against the leading site. The second-place sites in each business area were classified into four groups based on the averages and standard deviations of their daily market shares: sites showing consistent inferiority compared with the leading sites; sites with relatively high volatility and a medium level of shares; sites with relatively low volatility and a medium level of shares; and sites with relatively low volatility and a high level of shares whose gaps with the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1 points. Third, we also classified the types of relative strength between the leading sites and the second-place sites by applying cluster analysis to the gaps in market share between the two sites. They were likewise classified into four groups: sites with the relatively lowest gaps, although their standard deviations vary; sites with below-average gaps; sites with above-average gaps; and sites with relatively large gaps and low volatility. We also found that while areas with relatively large gaps usually have small standard deviations, areas with very small differences between the first- and second-place sites show a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results might provide current market participants with useful information for understanding the competitive circumstances of the market and building effective new business strategies for market success. They might also help new potential entrants find a new business area and set up successful competitive strategies. Second, the study might help Internet marketing researchers take a macro view of the overall Internet market, making it possible to begin new studies of the overall Internet market beyond studies of individual markets. (A minimal clustering sketch follows this entry.)

  • PDF
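The study above classifies leading and second-place sites by applying cluster analysis to the average and standard deviation of their daily market shares. A minimal k-means sketch over such (mean, SD) features is given below; the number of clusters (three, matching the grouping of leading sites) and the simulated share data are assumptions for illustration, not the authors' procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical daily market shares (rows: leading sites, columns: days).
rng = np.random.default_rng(7)
daily_shares = rng.beta(a=2, b=5, size=(30, 100))

# Features used in the abstract's grouping: average and standard deviation per site.
features = np.column_stack([daily_shares.mean(axis=1), daily_shares.std(axis=1)])

# Three clusters, matching the three groups of leading sites described above.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
for label in range(3):
    members = features[km.labels_ == label]
    print(f"cluster {label}: {len(members)} sites, mean share {members[:, 0].mean():.3f}")
```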

The Effect of Common Features on Consumer Preference for a No-Choice Option: The Moderating Role of Regulatory Focus (재몰유선택적정황하공동특성대우고객희호적영향(在没有选择的情况下共同特性对于顾客喜好的影响): 조절초점적조절작용(调节焦点的调节作用))

  • Park, Jong-Chul;Kim, Kyung-Jin
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.89-97
    • /
    • 2010
  • This study investigates the effects of common features on the no-choice option with respect to regulatory focus theory. The primary interest is in three factors and their interrelationship: common features, the no-choice option, and regulatory focus. Prior studies have compiled a vast body of research in these areas. First, the "common features effect" has been observed by many noted marketing researchers. Tversky (1972) proposed the seminal theory, the EBA (elimination by aspects) model. According to this theory, consumers are prone to focus only on unique features during comparison processing, thereby dismissing any common features as redundant information. Recently, however, more provocative ideas have challenged the EBA model by asserting that common features really do affect consumer judgment. Chernev (1997) first reported that adding common features mitigates the choice gap because it increases the perceived similarity among alternatives. Later, however, Chernev (2001) published a study critical of his earlier perspective, proposing that common features may be a cognitive load on consumers, who may therefore be prone to prefer heuristic processing to systematic processing. This brings one question to the forefront: do "common features" affect consumer choice, and if so, what are the concrete effects? This study tries to answer the question with respect to the no-choice option and regulatory focus. Second, some researchers hold that the no-choice option is another viable alternative for consumers, who are likely to avoid having to choose in the context of knotty trade-offs or mental conflicts. Hope for the future may also increase the no-choice option, in the context of optimism or the expectation that a more satisfactory alternative will appear later. Other issues reported in this domain are time pressure, consumer confidence, and the number of alternatives (Dhar and Nowlis 1999; Lin and Wu 2005; Zakay and Tsal 1993). This study casts the no-choice option in yet another perspective: the interactive effects between common features and regulatory focus. Third, regulatory focus theory is a very popular theme in recent marketing research. It suggests that consumers have two opposing focal goals: promotion versus prevention. A promotion focus deals with the concepts of hope, inspiration, achievement, or gain, whereas a prevention focus involves duty, responsibility, safety, or loss aversion. Thus, while consumers with a promotion focus tend to take risks for gain, the same does not hold true for a prevention focus. Regulatory focus theory predicts consumers' emotions, creativity, attitudes, memory, performance, and judgment, as documented in a vast body of marketing and psychology articles. Exploring consumer choice and common features from the perspective of regulatory focus is a somewhat creative viewpoint. These reviews inspire this study of a possible interaction between regulatory focus and common features with respect to the no-choice option. Specifically, adding common features rather than omitting them may increase the no-choice ratio only for prevention-focused consumers, and the reverse may hold for promotion-focused consumers. The reasoning is that when prevention-focused consumers encounter common features, they may perceive higher similarity among the alternatives, and this conflict among similar options would increase the no-choice ratio. 
Promotion-focused consumers, however, may perceive common features as a cue for confirmation bias, and their confirmatory processing would make their prior preference more robust, so the no-choice ratio may shrink. This logic is verified in two experiments. The first is a $2{\times}2$ between-subjects design (common features present or absent X regulatory focus) using digital cameras as the stimulus, a product very familiar to young subjects. Specifically, the regulatory focus variable was median-split based on an eleven-item measure. Common features included zoom, weight, memory, and battery, whereas the other two attributes (pixels and price) were unique features. Results supported our hypothesis that adding common features enhanced the no-choice ratio only for prevention-focused consumers, not for those with a promotion focus, confirming the hypothesized interaction between regulatory focus and common features. Prior research had suggested that including common features had an effect on consumer choice, but this study shows that the effect of common features on choice depends on consumer segmentation. The second experiment was used to replicate the results of the first. It was identical to the first except in two respects: a priming manipulation of regulatory focus and a different stimulus. For the promotion focus condition, subjects had to write an essay using words such as profit, inspiration, pleasure, achievement, development, hedonic, change, and pursuit; for prevention, they had to use words such as persistence, safety, protection, aversion, loss, responsibility, and stability. The stimulus, a room for rent, had common features (sunlight, facilities, ventilation) and unique features (commuting time and building condition); these attributes took various levels and valences to replicate the prior experiment. Our hypothesis was supported again, and the interaction effects between regulatory focus and common features were significant. Thus, these studies showed the dual effects of common features on consumer choice with respect to the no-choice option: adding common features may enhance or mitigate no-choice, contradictory as it may sound. Under a prevention focus, adding common features is likely to enhance the no-choice ratio because of increased mental conflict; under a promotion focus, it is prone to shrink the ratio, perhaps because of confirmation bias. The research has practical and theoretical implications for marketers, who may need to consider common features carefully in display contexts according to consumer segmentation (i.e., promotion vs. prevention focus). Theoretically, the results suggest a meaningful moderator variable between common features and no-choice, in that the effect on the no-choice option partly depends on regulatory focus; this variable corresponds not only to a chronic perspective but also to a situational perspective in our hypothesis domain. Finally, in light of some shortcomings of the research, such as overlooked attribute importance, the low ratio of no-choice, and the external validity issue, we hope it motivates future studies to explore the little-known world of the no-choice option.
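The experiments above test a 2x2 interaction between common features (present vs. absent) and regulatory focus (promotion vs. prevention) on whether a participant takes the no-choice option. The sketch below shows one way such an interaction on a binary outcome can be examined, using a logistic model with an interaction term in statsmodels; the data are simulated and the analysis choice is an assumption, not necessarily the authors' test.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated 2x2 between-subjects data: adding common features raises the no-choice
# rate under a prevention focus but lowers it under a promotion focus.
rng = np.random.default_rng(3)
n = 400
common = rng.integers(0, 2, n)        # 0 = unique features only, 1 = common features added
prevention = rng.integers(0, 2, n)    # 0 = promotion focus, 1 = prevention focus
logit = -1.0 + 0.8 * common * prevention - 0.6 * common * (1 - prevention)
no_choice = rng.random(n) < 1 / (1 + np.exp(-logit))

df = pd.DataFrame({"no_choice": no_choice.astype(int), "common": common, "prevention": prevention})
model = smf.logit("no_choice ~ common * prevention", data=df).fit(disp=0)
print(model.summary())    # the common:prevention term tests the predicted crossover
```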