Do Artificial Intelligence Algorithms Discriminate Against Certain Groups of Humans?

  • Yohan Oh (Department of Science and Technology Studies, Rensselaer Polytechnic Institute)
  • Sungook Hong (Program in History and Philosophy of Science / School of Biological Sciences, Seoul National University)
  • Received: 2018.10.14
  • Reviewed: 2018.11.19
  • Published: 2018.11.30

Abstract

Algorithms that make automated decisions on the basis of big data are spreading through ever more areas of social life, driven not only by the expectation that algorithmic decision making will distribute social resources more efficiently, but also by the hope that it will be fairer than human decision making, which is open to prejudice, bias, and arbitrary judgment. However, as claim after claim has emerged, with concrete cases, that algorithmic decision making does not treat those affected by its outcomes fairly, fundamental questions are being raised anew: how was the decision proceduralized, and which factors enter into judging a particular decision fair? This paper reviews studies that closely analyze situations in which charges of discrimination have arisen in three areas of algorithmic application, namely criminal justice, law enforcement, and national security, in order to examine whether artificial intelligence algorithms in fact discriminate against particular groups of people and what the criteria for identifying fair decision making are. Before the main review, we survey the factors at each stage of data mining that can, deliberately or unintentionally, produce discriminatory results. The conclusion summarizes what this theoretical and practical review implies for contemporary Korean society.
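The paper's central question, namely what criteria distinguish a fair decision-making process, can be made concrete with a small numerical sketch. The Python snippet below is purely illustrative: the confusion-matrix counts are invented, not drawn from this paper, from COMPAS, or from any real risk-assessment data. It shows how a binary risk classifier can satisfy predictive parity (equal positive predictive value across two groups) while still producing sharply different false positive rates whenever the groups' underlying base rates differ, which is the mathematical tension at the heart of the recidivism-score debate the paper reviews.

```python
# Illustrative only: invented confusion-matrix counts for two groups
# scored by a hypothetical binary risk classifier.

from dataclasses import dataclass

@dataclass
class GroupOutcomes:
    tp: int  # flagged high-risk, outcome occurred
    fp: int  # flagged high-risk, outcome did not occur
    fn: int  # flagged low-risk, outcome occurred
    tn: int  # flagged low-risk, outcome did not occur

    @property
    def base_rate(self) -> float:
        # Share of the group for which the predicted outcome occurs.
        return (self.tp + self.fn) / (self.tp + self.fp + self.fn + self.tn)

    @property
    def ppv(self) -> float:
        # Positive predictive value: share of high-risk flags that were
        # correct. "Predictive parity" asks this to be equal across groups.
        return self.tp / (self.tp + self.fp)

    @property
    def fpr(self) -> float:
        # False positive rate: share of non-occurrences wrongly flagged.
        # "Error-rate balance" asks this to be equal across groups.
        return self.fp / (self.fp + self.tn)

# Base rates of 50% vs. 20%; PPV is equal (0.60) in both groups by construction.
groups = {
    "A": GroupOutcomes(tp=300, fp=200, fn=200, tn=300),
    "B": GroupOutcomes(tp=120, fp=80, fn=80, tn=720),
}

for name, g in groups.items():
    print(f"group {name}: base rate={g.base_rate:.2f}, "
          f"PPV={g.ppv:.2f}, FPR={g.fpr:.2f}")
# group A: base rate=0.50, PPV=0.60, FPR=0.40
# group B: base rate=0.20, PPV=0.60, FPR=0.10
```

Because the base rates differ, equalizing PPV forces the false positive rates apart (0.40 vs. 0.10 here): a tool judged fair under one criterion is necessarily unfair under the other, so the choice among fairness criteria is a substantive decision rather than a merely technical one.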
