• Title/Summary/Keyword: Produced By

The Origin of Records and Archives in the United States and the Formation of the Archival System: Focusing on the Period from the Early 17th Century to the Mid-20th Century (미국의 기록(records) 및 아카이브즈(archives)의 역사적 기원과 관리·보존의 역사 17세기 초부터 20세기 중반까지를 중심으로)

  • Lee, Seon Ok
    • The Korean Journal of Archival Studies
    • /
    • no.80
    • /
    • pp.43-88
    • /
    • 2024
  • The National Archives and Records Administration (NARA) is a relative latecomer among the traditional archives of the Western world. Although the United States lacks a long history of organized public records and archives management, it has developed a modern system optimized for the American historical context, one focused on the systematic management and preservation of the vast body of modern records produced and collected during the tumultuous 20th century. The U.S. public records and archives management system rests on the principle that records and archives are the property of the American people and belong to the public. This idea originated in the British colonial era, when records were used to safeguard the rights of the colonists as self-governing citizens. For Americans, records and archives have long been a symbol of national identity, serving as a means of protecting individual freedoms, rights, and democracy throughout the country's history. It is natural, therefore, that American life and history should be documented, and that the recorded past should be managed and preserved for the nation's present and future. The U.S. system is the result of a convergence of theories, practices, lessons learned, and ideas shaped by the country's history, its philosophies and values concerning records, and its unique experience with records management. This paper traces the origins of records and archives in the United States in their historical context in order to understand the organic relationship between American life and records, and examines how a modern public records management system took shape that reflects the uniqueness of American history while remaining aligned with universal archival traditions.

Establishment of Test Conditions and Interlaboratory Comparison Study of Neuro-2a Assay for Saxitoxin Detection (Saxitoxin 검출을 위한 Neuro-2a 시험법 조건 확립 및 실험실 간 변동성 비교 연구)

  • Youngjin Kim;Jooree Seo;Jun Kim;Jeong-In Park;Jong Hee Kim;Hyun Park;Young-Seok Han;Youn-Jung Kim
    • Journal of Marine Life Science
    • /
    • v.9 no.1
    • /
    • pp.9-21
    • /
    • 2024
  • Paralytic shellfish poisoning (PSP), caused by toxins including saxitoxin (STX) that are produced by harmful algae, occurs when contaminated seafood is consumed. The mouse bioassay (MBA), a standard test method for detecting PSP, is being restricted in many countries owing to concerns over its detection limit and animal welfare. An alternative to the MBA is the Neuro-2a cell-based assay. This study aimed to establish test conditions for the Neuro-2a assay, including cell density, culture conditions, and STX treatment conditions, suited to the domestic laboratory environment. As a result, the initial cell density was set to 40,000 cells/well and the incubation time to 24 hours. The concentration of ouabain and veratridine (O/V) was set to 500/50 μM, at which most cells died. We identified eight concentrations of STX, ranging from 368 to 47,056 fg/μl, which produced an S-shaped dose-response curve when applied together with O/V. Through an inter-laboratory variability comparison of the Neuro-2a assay, we established five Quality Control Criteria to verify the appropriateness of the experiments and six Data Criteria (top and bottom OD, EC50, EC20, Hill slope, and R2 of the fitted curve) to determine the reliability of the experimental data. The Neuro-2a assay conducted under the established conditions showed EC50 values of approximately 1,800~3,500 fg/μl. The intra- and inter-laboratory variability comparison showed that the coefficients of variation (CVs) for the Quality Control and Data values ranged from 1.98% to 29.15%, confirming the reproducibility of the experiments. This study presented Quality Control Criteria and Data Criteria to assess the appropriateness of the experiments and confirmed the excellent repeatability and reproducibility of the Neuro-2a assay. To apply the Neuro-2a assay as an alternative method for detecting PSP in domestic seafood, it will be essential to establish toxin extraction and quantification methods for seafood and to perform correlation analyses against the MBA and instrumental methods. (An illustrative dose-response fitting sketch follows below.)
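A minimal, hypothetical sketch of the kind of analysis the Data Criteria imply: fitting a four-parameter Hill (sigmoidal) dose-response curve to made-up OD readings, deriving EC50/EC20 and the Hill slope, and computing a coefficient of variation. The concentrations mirror the 368-47,056 fg/μl range above; the OD values, initial guesses, and replicate EC50s are assumptions, not the study's data.

```python
# Illustrative sketch (not the authors' code): 4-parameter Hill fit to
# hypothetical Neuro-2a OD readings, plus EC50/EC20 and a CV calculation.
import numpy as np
from scipy.optimize import curve_fit

def hill(x, bottom, top, ec50, slope):
    """Decreasing 4-parameter logistic: OD vs. STX concentration (fg/ul)."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** slope)

# Hypothetical 8-point STX dilution series (fg/ul) and mean OD responses
conc = np.array([368, 736, 1471, 2941, 5882, 11764, 23528, 47056], dtype=float)
od = np.array([1.10, 1.05, 0.95, 0.70, 0.45, 0.30, 0.22, 0.20])

popt, _ = curve_fit(hill, conc, od, p0=[0.2, 1.1, 3000, 1.0], maxfev=10000)
bottom, top, ec50, slope = popt

# EC20: concentration producing 20% of the maximal effect (top -> bottom)
ec20 = ec50 * (0.20 / 0.80) ** (1.0 / slope)

# Coefficient of variation (%) across hypothetical replicate EC50 estimates
replicate_ec50 = np.array([1800.0, 2400.0, 3500.0])
cv = 100.0 * replicate_ec50.std(ddof=1) / replicate_ec50.mean()

print(f"EC50 ~ {ec50:.0f} fg/ul, EC20 ~ {ec20:.0f} fg/ul, "
      f"Hill slope ~ {slope:.2f}, CV ~ {cv:.1f}%")
```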

An Analysis of the Internal Marketing Impact on the Market Capitalization Fluctuation Rate based on the Online Company Reviews from Jobplanet (직원을 위한 내부마케팅이 기업의 시가 총액 변동률에 미치는 영향 분석: 잡플래닛 기업 리뷰를 중심으로)

  • Kichul Choi;Sang-Yong Tom Lee
    • Information Systems Review
    • /
    • v.20 no.2
    • /
    • pp.39-62
    • /
    • 2018
  • Thanks to the growth of computing power and the recent development of data analytics, researchers have started to work on data produced by users through the Internet and social media. This study is in line with these research trends and adopts data-analytic techniques. We focus on the impact of "internal marketing" factors on firm performance, a topic typically studied through survey methodologies. We looked into the job review platform Jobplanet (www.jobplanet.co.kr), a website where current and former employees anonymously review companies and their management. Using web crawling, we collected over 40K data points and performed morphological analysis to classify employees' reviews into internal marketing categories. We then applied econometric analysis to examine the relationship between internal marketing and market capitalization. Contrary to the findings of extant survey studies, internal marketing is positively related to a firm's market capitalization only in limited areas; in most areas the relationships are negative. In particular, a female-friendly environment and human resource development (HRD) are the areas showing positive relations with market capitalization in the manufacturing industry. In the service industry, most areas, such as employee welfare and work-life balance, are negatively related to market capitalization. When firm size is small (or firm history is short), a female-friendly environment positively affects firm performance; by contrast, when firm size is large (or firm history is long), most internal marketing factors are either negative or insignificant. We discuss the theoretical contributions and managerial implications of these results. (An illustrative regression sketch follows below.)
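A minimal sketch, with entirely hypothetical data, of an econometric step in this spirit: an OLS regression of a market-capitalization change rate on review-derived internal-marketing scores. The column names (female_friendly, hrd, welfare, work_life_balance, firm_size_log) and the simulated values are assumptions, not the study's variables or data.

```python
# Illustrative sketch (hypothetical panel, not the study's data or code):
# regress market-cap change rate on internal-marketing scores from reviews.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # hypothetical firm-period observations

df = pd.DataFrame({
    "female_friendly":   rng.normal(3.5, 0.5, n),   # review-derived topic scores
    "hrd":               rng.normal(3.3, 0.5, n),
    "welfare":           rng.normal(3.6, 0.5, n),
    "work_life_balance": rng.normal(3.2, 0.5, n),
    "firm_size_log":     rng.normal(9.0, 1.0, n),   # control variable
})
# Simulated outcome: market-cap fluctuation rate with noise
df["mcap_change"] = (0.01 * df["female_friendly"]
                     - 0.005 * df["welfare"]
                     + rng.normal(0, 0.02, n))

model = smf.ols(
    "mcap_change ~ female_friendly + hrd + welfare + work_life_balance + firm_size_log",
    data=df,
).fit()
print(model.summary())
```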

Development and validation of the Kkondae tendency scale (꼰대경향성 척도 개발 및 타당화)

  • Ji Hyun Jung;Jin Kook Tak
    • The Korean Journal of Coaching Psychology
    • /
    • v.7 no.3
    • /
    • pp.153-196
    • /
    • 2023
  • The purpose of this study is to develop and validate a kkondae tendency scale. Kkondae tendency is defined as "a response pattern toward others that values authority in social relationships, is self-centered, and does not accept other people's opinions," and the subjects of the study are workers aged 19 or older who act as elders, seniors, or bosses in the workplace. In Study 1, 65 preliminary items covering 7 factors of the kkondae tendency construct were produced through a literature review, expert interviews, and an open-ended questionnaire survey. In Study 2, a preliminary survey was conducted with the 65 items derived from Study 1; exploratory factor analysis based on the responses of 395 people yielded 22 items across 4 factors. In Study 3, the main survey was conducted with the 22 items derived from Study 2. A total of 880 responses were analyzed, and cross-validation was performed by dividing the data into two groups (Group 1 and Group 2). Exploratory factor analysis on Group 1 (N=429) yielded 19 items across 4 factors: authoritarianism (3 items), egocentrism (5 items), inertial thinking (5 items), and one-sided communication (6 items). Confirmatory factor analysis of the 19 items was then conducted on Group 2 (N=451), and the four-factor, 19-item model was accepted based on its good fit. To verify convergent validity, the correlation with the existing Kkondae scale was examined, and to verify criterion-related validity, the relationships with self-reflection, relationship conflict, and social connectedness were examined. All were statistically significant, so convergent and criterion-related validity were supported. Finally, the process and results of this study, differences from related measures, academic significance, practical implications, limitations, and future research directions are discussed. (An illustrative factor-analysis sketch follows below.)
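A minimal sketch of the exploratory factor analysis step, using randomly generated Likert-type responses rather than the study's data. The four-factor extraction with oblique rotation mirrors the procedure described above; the `factor_analyzer` package and the item names are assumed tooling choices, not necessarily what the authors used.

```python
# Illustrative sketch (hypothetical responses, not the study's data):
# exploratory factor analysis of 19 Likert-type items into 4 factors.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
n_respondents, n_items = 429, 19

# Hypothetical 1-5 Likert responses for 19 kkondae-tendency items
items = pd.DataFrame(
    rng.integers(1, 6, size=(n_respondents, n_items)),
    columns=[f"item{i+1}" for i in range(n_items)],
)

fa = FactorAnalyzer(n_factors=4, rotation="oblimin")  # oblique rotation
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))            # pattern matrix: item-factor loadings
print(fa.get_factor_variance()[1])  # proportion of variance per factor
```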

Contactless Data Society and Reterritorialization of the Archive (비접촉 데이터 사회와 아카이브 재영토화)

  • Jo, Min-ji
    • The Korean Journal of Archival Studies
    • /
    • no.79
    • /
    • pp.5-32
    • /
    • 2024
  • The Korean government ranked 3rd among 193 UN member countries in the UN's 2022 e-Government Development Index. Korea, which has consistently been rated among the top countries, can clearly be called a world leader in e-government. The lubricant of e-government is data. Data itself is neither information nor a record, but it is a source of information and records and a resource for knowledge. As administrative actions through electronic systems have become widespread, the production and technology of data-based records have naturally expanded and evolved. Technology may seem value-neutral, but in fact technology itself reflects a specific worldview. The digital order of new technologies, armed with hyper-connectivity and super-intelligence, has a profound influence not only on traditional power structures but also on existing media for transmitting information and knowledge. Moreover, new technologies and media, including data-based generative artificial intelligence, are currently the dominant topic. The all-round growth and spread of digital technology has led to the augmentation of human capabilities and the outsourcing of thinking. This also involves a variety of problems, ranging from deepfakes and other fabricated images, automated profiling, and AI hallucinations that present falsehoods as if they were real, to copyright infringement involving machine-learning data. Radical connectivity enables the instantaneous sharing of vast amounts of data and relies on the technological unconscious to generate actions without awareness. Another irony of the digital world and online networks, based on immaterial distribution and logical existence, is that access and contact can only be made through physical tools: digital information is a logical object, but digital resources cannot be read or utilized without some type of device to relay them. In that respect, machines in today's technological society have gone beyond simple assistance, and it is difficult to regard the entry of machines into human society as merely a natural pattern of change driven by advanced technology, because perspectives on machines themselves change over time. What matters are the social and cultural implications of changes in how records are produced as communication and action increasingly occur through machines. The archival field, too, must ask what problems a data-based archive society will face amid technological change toward a hyper-intelligent, hyper-connected society, who will attest to the continued activity of records and data, and what will be the main drivers of media change; it is time to research these questions. This study began from the need to recognize that archives are not only records resulting from actions but also data as strategic assets. On this basis, the author considers how archives can expand their traditional boundaries and achieve reterritorialization in a data-driven society.

Satisfaction Evaluation of Diabetic Foot Disease Measurement using AI-based Application (AI기반 에플리케이션을 활용한 당뇨병성 족부질환 측정의 만족도 평가)

  • Hyeun-Woo Choi;Hyo-jin Lee;Min-jeong Kim;Jong-Min Lee;Dong-hyun Kim
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.4
    • /
    • pp.327-334
    • /
    • 2024
  • The purpose of this study is to develop a customized foot disease analysis and management system for diabetic patients in order to prevent foot ulcers in patients with diabetic foot disease. The system uses image analysis technology to measure not only plantar pressure but also ankle deformation, body balance, and foot wounds. These data make it possible to analyze the state of foot deformation accurately; on this basis, the exact deformation of each patient's foot was identified and a customized insole was produced. The study examined user satisfaction with an application that checks the status of diabetic foot wounds, assessed foot deformation in patients with diabetic foot disease and other foot conditions who wore the customized insoles, and surveyed satisfaction with wearing the insoles. As a result, the knee angle measured during the plantar-pressure assessment was -0.8 ± 1.3 degrees, ranging from -2.4 to 1.1 degrees, and there was no significant difference in knee valgus between the two lower extremities (p = 0.534). There was a significant difference in tibial angle between the two lower extremities (p < 0.001). The ankle angle was 2.6 ± 2.0 degrees on the left (range 0 to 6.3 degrees) and 4.5 ± 2.1 degrees on the right (range 1.5 to 9.1 degrees), a significant difference between the two lower extremities (p = 0.011). Respondents reported an average satisfaction of 4.3 points with the plantar-pressure measurement application and 3.9 points with the customized insoles. (An illustrative paired-comparison sketch follows below.)
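A minimal sketch of a left-versus-right comparison like the ankle-angle test reported above, using hypothetical paired measurements. The paired t-test is an assumed choice of test and the numbers are illustrative only; the study may have used a different statistical procedure.

```python
# Illustrative sketch (hypothetical angles, not the study's data):
# paired comparison of left vs. right ankle angles across patients.
import numpy as np
from scipy import stats

left_ankle = np.array([0.8, 2.1, 3.0, 1.5, 4.2, 2.9, 0.0, 5.6, 3.3, 2.6])
right_ankle = np.array([2.0, 4.4, 5.1, 3.0, 6.3, 4.8, 1.5, 9.1, 5.2, 4.5])

t_stat, p_value = stats.ttest_rel(left_ankle, right_ankle)
print(f"mean left = {left_ankle.mean():.1f} deg, "
      f"mean right = {right_ankle.mean():.1f} deg")
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```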

A Study on the Optimal Process Parameters for Recycling of Electric Arc Furnace Dust (EAFD) by Rotary Kiln (Rotary Kiln에 의한 전기로 제강분진(EAFD)의 재활용을 위한 최적의 공정변수에 관한 연구)

  • Jae-hong Yoon;Chi-hyun Yoon;Myoung-won Lee
    • Resources Recycling
    • /
    • v.33 no.4
    • /
    • pp.47-61
    • /
    • 2024
  • As a recycling technology for recovering the zinc contained in large amounts in electric arc furnace dust (EAFD), the most widely commercialized process in the world is the Waelz kiln process. In the Waelz kiln process, components such as Zn and Pb in EAFD are reduced and volatilized (endothermic reaction) in the high-temperature kiln, then re-oxidized (exothermic reaction) in the gas phase and recovered as crude zinc oxide (about 60 wt% Zn) in the bag filter installed at the rear end of the kiln. In this study, an experimental Waelz kiln was built to investigate the optimal process variable values for practical application to commercial-scale recycling kilns. Pellets containing EAFD, reducing agents, and limestone were continuously charged into the kiln, and the feed rate, heating temperature, and residence time were examined to obtain the optimal crude zinc oxide recovery rate. The optimal pellet manufacturing conditions (drum tilt angle, moisture addition, mixing time, etc.) were also investigated. Referring to the SiO2-CaO-FeO ternary phase diagram, the formation behavior of low-melting-point compounds, reaction products inside the kiln that vary with pellet basicity, and their reactivity with (adhesion to) the castable lining on the inner wall of the kiln were investigated. Finally, to quantitatively assess the possibility of using anthracite as a substitute for coke as the reducing agent, the changes in the temperature distribution inside the kiln, where the oxidation/reduction reactions occur, with increasing anthracite addition, the quality of the crude zinc oxide, and the behavior of tar contained in the anthracite were also examined. (A short basicity-calculation sketch follows below.)
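A small sketch of the basicity figure referred to above, using the common binary definition (CaO/SiO2 mass ratio). The compositions are hypothetical and the definition is the textbook one, not necessarily the exact index the authors used.

```python
# Illustrative sketch (textbook definition, hypothetical compositions):
# binary basicity of a pellet charge as the CaO/SiO2 mass ratio, the quantity
# the abstract relates to low-melting-point compound formation and adhesion.
def binary_basicity(cao_wt_pct: float, sio2_wt_pct: float) -> float:
    """Return CaO/SiO2 mass-ratio basicity for a pellet composition (wt%)."""
    return cao_wt_pct / sio2_wt_pct

# Hypothetical pellet compositions before/after extra limestone addition
print(binary_basicity(8.0, 10.0))   # 0.8: acidic side
print(binary_basicity(15.0, 10.0))  # 1.5: more basic after added limestone
```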

A Study of Microbial Contamination in Fresh-Cut and Ready-to-Eat Foods Purchased from Online Markets (온라인 판매 신선편의식품 및 즉석섭취식품의 미생물 오염도 연구)

  • Hye-Sun Hwang;Jae-Hoon Jeong;Young-Hee Kwon;Ye-Jee Byun;Ji-Young Park;Ho-Cheol Yun
    • Journal of Food Hygiene and Safety
    • /
    • v.39 no.4
    • /
    • pp.335-342
    • /
    • 2024
  • This study examined the delivery conditions and microbial contamination of fresh-cut and ready-to-eat foods purchased from online markets between February and November 2023. Upon arrival, the average surface temperature of the products was 11.3℃. In the fresh-cut foods, the average counts of total aerobic bacteria and coliforms were 4.5 and 1.2 log colony-forming units (CFU)/g, respectively, whereas in the ready-to-eat foods they were 10.6 and 1.2 log CFU/g, respectively. Pathogens such as Staphylococcus aureus, Salmonella spp., Clostridium perfringens, Listeria monocytogenes, and pathogenic Escherichia coli were absent from all samples. Bacillus cereus was found in 2.7% of the fresh-cut foods and 0.9% of the ready-to-eat foods, with contamination levels averaging 0.05 and 0.01 log CFU/g, respectively. In the four samples in which B. cereus was detected, testing for the six toxin genes of B. cereus revealed at least one enterotoxin gene in each, while the emetic toxin gene was absent. L. monocytogenes was absent from the ready-to-eat foods but was detected in 0.9% of the fresh-cut foods. Analysis of the isolated L. monocytogenes confirmed the presence of six pathogenicity-related genes, including iap, indicating a potential risk of foodborne disease. (A short log CFU/g calculation sketch follows below.)
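A short sketch of how a plate count is converted into the log CFU/g unit used above; the colony count, dilution, and plated volume are hypothetical examples, not the study's measurements.

```python
# Illustrative sketch (hypothetical plate counts, not the study's data):
# converting a colony count to log CFU/g.
import math

def log_cfu_per_g(colonies: int, total_dilution: float, plated_ml: float) -> float:
    """total_dilution: dilution of the plated suspension relative to 1 g of food
    (e.g. 1e-4 for a 1:10 homogenate further diluted 10^-3);
    plated_ml: volume plated on the agar (mL)."""
    cfu_per_g = colonies / (plated_ml * total_dilution)
    return math.log10(cfu_per_g)

# e.g. 45 colonies from a 1e-4 overall dilution, 0.1 mL plated
print(round(log_cfu_per_g(45, 1e-4, 0.1), 2))  # ~6.65 log CFU/g
```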

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Model simulation also makes it possible to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal cropping patterns, and plan new greenhouse ranges. In this study, control of the greenhouse microclimate is categorized into cooling and heating. A dynamic model was adopted to simulate heating requirements and energy-conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of heating degree-hours (HDH), and long-term prediction of greenhouse thermal behavior, while the cooling effects of ventilation, shading, and a pad & fan system were partly analyzed with a static model. Experiments with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for summer cooling. The mathematical model developed for the simulation is highly applicable because it reflects various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity, and it was closely verified against weather data obtained through long-term greenhouse experiments. Most of the results on greenhouse heating and cooling components were obtained from the model greenhouse simulated with typical-year (1987) data for Jinju, Gyeongnam, while some of the cooling results were obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1. The heating requirements of the model greenhouse were strongly related to the minimum temperature set for the greenhouse. The night-time set temperature is much more influential on heating energy requirements than the day-time set temperature, so the night-time set temperature should be carefully determined and controlled. 2. HDH data obtained by the conventional method are estimated from long-term average temperatures together with a standard base temperature (usually 18.3°C). Such data can be used only for relative comparison of heating loads and are not suitable for calculating greenhouse heating requirements, because of the limited climatic factors considered and the inappropriate base temperature. Comparison of HDH data with the simulation results indicates that a heating system designed from HDH data would probably overshoot the actual heating requirement. 3. The energy-saving effect of the night-time thermal curtain, like the estimated heating requirement, is sensitive to weather conditions: the thermal curtain adopted in the simulation saved more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is mainly governed by the air-exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions. For air-exchange rates above 1 volume per minute, the reduction in temperature rise in both greenhouse types considered becomes modest with further increases in ventilation capacity; the desirable ventilation capacity is therefore taken as 1 air change per minute, the rate commonly recommended for greenhouses. 5. In a fully cropped, glass-covered greenhouse under clear weather at 50% RH with a continuous 1 air change per minute, the temperature drop in a 50%-shaded greenhouse and in a pad & fan greenhouse was 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air change was 36.6°C, which was 5.3°C above ambient; as a result, the greenhouse temperature could be maintained about 3°C below ambient. At 80% RH, however, it was impossible to bring the greenhouse temperature below ambient, because the possible temperature reduction by the pad & fan system was then no more than 2.4°C. 6. During the three months of the hot summer season, assuming the greenhouse is cooled only when its temperature rises above 27°C, the relationship between ambient RH and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077·RH + 7.7. 7. Time-dependent cooling effects of ventilation, 50% shading, and a pad & fan system of 80% efficiency, operated alone or in combination, were predicted over a typical summer day. With cooling by 1 air change per minute alone, greenhouse air temperature was 5°C above the outdoor temperature, and no single method could bring the greenhouse air temperature below the outdoor temperature under fully cropped conditions; when the systems were operated together, the greenhouse air temperature could be kept about 2.0-2.3°C below ambient. 8. When cool water at 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 L/min per unit greenhouse floor area, the greenhouse air temperature could be lowered to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at the time. The most important requirement for cooling greenhouse air effectively with water spray is securing a plentiful source of cool water, such as groundwater or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat-pump operation but also on finding the relationships between greenhouse air temperature (Tg), spraying water temperature (Tw), water flow rate (Q), and ambient temperature (To). (An illustrative sketch of the ΔT relation and HDH calculation follows below.)
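A small sketch grounded in two quantities stated in the abstract: the empirical relation ΔT = -0.077·RH + 7.7 for the greenhouse temperature drop versus ambient relative humidity, and a simple heating degree-hour accumulation against the 18.3°C base temperature. The hourly temperatures in the example are hypothetical.

```python
# Illustrative sketch based on relations stated in the abstract.
def cooling_temperature_drop(rh_percent: float) -> float:
    """Greenhouse temperature drop (deg C) vs. ambient RH (%), per the abstract."""
    return -0.077 * rh_percent + 7.7

def heating_degree_hours(hourly_temps_c, base_c: float = 18.3) -> float:
    """Sum of (base - T) over hours when T is below the base temperature."""
    return sum(max(base_c - t, 0.0) for t in hourly_temps_c)

print(cooling_temperature_drop(50))  # ~3.85 deg C drop at 50% RH
print(cooling_temperature_drop(80))  # ~1.54 deg C drop at 80% RH
print(heating_degree_hours([2.0, 5.5, 10.0, 16.0, 20.0]))  # hypothetical hourly temps
```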

A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.67-88
    • /
    • 2018
  • Until recently, recognizing the significance of sustainable growth and competitiveness for small- and medium-sized enterprises (SMEs), government support has mainly been provided for tangible resources such as R&D, manpower, and funds. However, inefficiencies in the support system, such as underestimated or redundant support, have been pointed out because policies conflict in terms of the appropriateness, effectiveness, and efficiency of business support. From the perspective of both the government and the company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources form the basis for creating competitive advantage, and we emphasize value creation activities to that end. This is why value chain network analysis is needed: to analyze inter-company deal relationships along a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There is the Technology Opportunity Discovery (TOD) system, which provides information on relevant products or the technology status of companies with patents through retrieval by patent, product, or company name, and CRETOP and KISLINE, which allow viewing of company (financial) information and credit information; however, no online system provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders with whom business deals might be made in the future. We therefore focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently. We further explore the network visualization function of the intelligent value chain analysis system, which provides the core information for understanding industrial structure and supporting a company's new product development. For a company to gain competitive superiority, it must identify which competitors hold relevant patents or currently produce relevant products, and searching for similar companies or competitors by industry type is key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in identifying potential customers when both parties enter similar fields. Identifying competitors at the enterprise or industry level using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The VCNS combines the concepts of value chain and industrial structure analysis with corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry. In particular, it can be useful as a firm-level information analysis tool for identifying industry structure, tracking competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), following industry trends by item, finding promising items, finding new entrants, finding core companies and items along the value chain, and recognizing patents and the corresponding companies. Based on the objectivity and reliability of analysis results derived from transaction data and financial data, the value chain network system is expected to be used for various purposes, such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular for more than 15,000 member companies in Korea and for staff in R&D service sectors, government-funded research institutes, and public organizations. To strengthen the business competitiveness of companies, technology, patent, and market information has so far been provided mainly by government agencies and private R&D service companies, framed as patent analysis (mainly rating and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, this has not fully resolved the lack of information, one of the difficulties that Korean firms often face at the commercialization stage, and it is especially difficult to obtain information about competitors and potential candidates. In this study, a real-time value chain analysis and visualization service module based on the proposed network map and the data at hand is presented together with expected market share, estimated sales volume, and contact information (implying potential suppliers of raw materials/parts and potential demanders of finished products/modules). In future research, we intend to investigate the indices of competitive factors in greater depth through the participation of research subjects, to develop new competitiveness indices for competitors and substitute items, and to apply data mining techniques and algorithms to improve the performance of VCNS. (An illustrative network-analysis sketch follows below.)
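A minimal sketch, with hypothetical firms and deals, of the kind of buyer-supplier network a value chain map encodes: shared customers suggest similar (competitive) companies, predecessors are suppliers, and successors are demanders. This is not VCNS code; the `networkx` library and all names are assumptions for illustration.

```python
# Illustrative sketch (hypothetical firms and deals, not VCNS internals):
# a directed supplier -> buyer network and a simple competitor heuristic.
import networkx as nx

deals = [  # (supplier, buyer) transaction pairs
    ("A-Parts", "X-Motors"), ("A-Parts", "Y-Motors"),
    ("B-Parts", "X-Motors"), ("B-Parts", "Z-Trucks"),
    ("C-Steel", "A-Parts"),  ("C-Steel", "B-Parts"),
]
g = nx.DiGraph()
g.add_edges_from(deals)

def shared_customers(firm_a: str, firm_b: str) -> set:
    """Customers (successors) that both firms sell to."""
    return set(g.successors(firm_a)) & set(g.successors(firm_b))

print(shared_customers("A-Parts", "B-Parts"))  # {'X-Motors'}: overlapping market
print(sorted(g.predecessors("X-Motors")))      # suppliers (sellers) to X-Motors
print(sorted(g.successors("C-Steel")))         # demanders (buyers) of C-Steel
```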