• Title/Summary/Keyword: 결합 학습 (joint learning)

From a Defecation Alert System to a Smart Bottle: Understanding Lean Startup Methodology from the Case of Startup "L" (배변알리미에서 스마트바틀 출시까지: 스타트업 L사 사례로 본 린 스타트업 실천방안)

  • Sunkyung Park;Ju-Young Park
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship / v.18 no.5 / pp.91-107 / 2023
  • Lean startup is a concept that combines the words "lean," meaning an efficient way of running a business, and "startup," meaning a new business. It is often cited as a strategy for minimizing failure in early-stage businesses, especially in software-based startups. By scrutinizing the case of startup L, this study suggests that the lean startup methodology (LSM) can also be useful for hardware and manufacturing companies and identifies ways for early-stage startups to implement LSM successfully. To this end, the study explains the core of LSM, including the concepts of the hypothesis-driven approach, the build-measure-learn (BML) feedback loop, the minimum viable product (MVP), and the pivot. Five criteria for evaluating the successful implementation of LSM were derived from these core concepts and applied to the case of startup L. The early startup L pivoted its main business model from a defecation alert system for patients with limited mobility, to one for infants and toddlers, and finally to a smart bottle for infants. Analyzed from LSM's perspective, in developing the former two products company L neither established a specific customer value proposition for its startup idea nor verified it through MVP experiments, and thus failed to create a BML feedback loop. However, through two rounds of pivots, startup L discovered new target customers and customer needs, and was able to establish a successful business model by repeatedly experimenting with MVPs with minimal effort and time. In other words, company L's case shows that it is essential to go through the customer-market validation stage at the beginning of the business, and that this should be done through an MVP approach that does not waste the startup's time and resources. It also shows that it is necessary to abandon a product or service that customers do not want and pivot, even if it is technically superior and functionally complete. Lastly, the study shows that the lean startup methodology is not limited to the software industry but can also be applied to the technology-based hardware industry. The findings of this study can be used as guidelines and methodologies for early-stage companies to minimize failure and to accelerate the process of establishing a business model, scaling up, and going global.

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM networks. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct language models using three or four LSTM layers. Each model was trained with stochastic gradient descent and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters paired with the following (21st) character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss on the validation set, the perplexity on the test set, and the training time of each model. As a result, all optimization algorithms except plain stochastic gradient descent showed similar validation loss and perplexity, clearly superior to those of stochastic gradient descent, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for Korean language processing in the fields of natural language processing and speech recognition, which are the basis of artificial intelligence systems.
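
  • Illustrative sketch (not the authors' code): the snippet below shows, under stated assumptions, how a phoneme-level LSTM language model of the kind described in this abstract could be set up. It assumes tf.keras rather than the original Keras/Theano environment, a toy placeholder sequence of phoneme symbols instead of the Old Testament corpus decomposed into Korean jamo, and illustrative hyperparameters (embedding size, 256 hidden units); only the 20-character input window predicting the 21st character, the stacked 3-layer LSTM structure, and the perplexity evaluation are taken from the abstract.

```python
# Minimal sketch of a phoneme-level LSTM language model (assumptions noted above).
import numpy as np
from tensorflow.keras import layers, models

SEQ_LEN = 20          # 20 consecutive phonemes as input, predicting the 21st
NUM_LSTM_LAYERS = 3   # the paper compares 3- and 4-layer models
HIDDEN_UNITS = 256    # assumption; the abstract does not state the layer width

# Toy corpus: a real experiment would decompose Korean text into phonemes (jamo);
# here a placeholder sequence of jamo symbols stands in for the pre-processed text.
corpus = "ㄱㅏㄴㅏㄷㅏㄹㅏㅁㅏㅂㅏㅅㅏ" * 50

vocab = sorted(set(corpus))
char2idx = {c: i for i, c in enumerate(vocab)}
encoded = np.array([char2idx[c] for c in corpus])

# Build (20-phoneme input window, next phoneme) pairs.
X = np.stack([encoded[i:i + SEQ_LEN] for i in range(len(encoded) - SEQ_LEN)])
y = encoded[SEQ_LEN:]

model = models.Sequential()
model.add(layers.Embedding(input_dim=len(vocab), output_dim=64))
for i in range(NUM_LSTM_LAYERS):
    # All but the last LSTM layer return full sequences so the layers can be stacked.
    model.add(layers.LSTM(HIDDEN_UNITS, return_sequences=(i < NUM_LSTM_LAYERS - 1)))
model.add(layers.Dense(len(vocab), activation="softmax"))

# The abstract compares SGD with Adagrad, RMSprop, Adadelta, Adam, Adamax, Nadam;
# Adam is used here as one representative choice.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, batch_size=128, epochs=1, validation_split=0.15)

# Perplexity is exp(mean cross-entropy); the paper computed it on a held-out
# test set from the 70:15:15 split, whereas this toy evaluation just reuses data.
loss = model.evaluate(X[:1000], y[:1000], verbose=0)
print("perplexity:", float(np.exp(loss)))
```

  • Swapping the optimizer string (e.g. "sgd", "rmsprop", "adagrad") and setting NUM_LSTM_LAYERS to 4 would reproduce the kind of optimizer and depth comparison the abstract reports.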

Legal Issues on the Collection and Utilization of Infectious Disease Data in the Infectious Disease Crisis (감염병 위기 상황에서 감염병 데이터의 수집 및 활용에 관한 법적 쟁점 -미국 감염병 데이터 수집 및 활용 절차를 참조 사례로 하여-)

  • Kim, Jae Sun
    • The Korean Society of Law and Medicine / v.23 no.4 / pp.29-74 / 2022
  • The rapid and unexpected spread of COVID-19 in 2020 constituted a social disaster under the Disaster Management Act, one capable of damaging the people's "life, body, and property." Information collected through the inspection and reporting of infectious disease pathogens (Article 11), epidemiological investigation (Article 18), and epidemiological investigation for vaccination (Article 29), together with artificial intelligence technologies, served as an important basis for decision-making in the infectious disease crisis, such as setting prevention policy, promoting vaccination, and assessing the current status of damage. In addition, medical policy decisions using infectious disease data contribute to quarantine policy decisions, information provision, drug development, and research and technology development, and interest in the legal scope and limits of using infectious disease data has increased worldwide. The use of infectious disease data can be classified by purpose: blocking the spread of infectious diseases, and the prevention, management, and treatment of infectious diseases; such information is used more widely in an infectious disease crisis. In particular, as the serious stage under the Disaster Management Act continues, the processing of personally identifiable information and sensitive information becomes an important issue, and how to interpret information on "medical records, vaccination drugs, vaccination status, underlying diseases, health rankings, long-term care recognition grades, pregnancy, etc." needs to be clarified. In the case of "prevention, management, and treatment of infectious diseases," it is difficult to clearly define the concept of medical practice. The types of actions are judged based on "legislative purposes, academic principles, expertise, and social norms," but the balancing of legal interests should rest on the need for data use in quarantine policy and on urgent judgment in a public health crisis. Specifically, the speed and extent of transmission in the crisis, whether the purpose can be achieved without processing sensitive information, whether processing unfairly infringes the interests of third parties or data subjects, and the effectiveness of the quarantine policies introduced through such processing can serve as the main evaluation factors. On the other hand, the collection, provision, and use of infectious disease data for research purposes proceed through pseudonymization under the Personal Information Protection Act, consent under the Bioethics Act, and deliberation by the Institutional Bioethics Committee and the data provision deliberation committee. Therefore, use for research purposes is recognized as long as procedural validity is secured through pseudonymization, review by the data review committee, the consent of the data subject, and review by the institutional bioethics committee. However, the burden on research managers should be reduced by clarifying pseudonymization and anonymization procedures; the introduction of, and consent procedures for, a comprehensive consent system and an opt-out system should be clearly prepared; and procedures for addressing re-identification risks and securing data that may arise with technological development should be clearly defined.