Acknowledgement
This work was supported by Dongseo University, "Dongseo Cluster Project" Research Fund of 2023 (DSU-20230004).
References
- Kevin Eykholt et al., "Robust Physical-World Attacks on Deep Learning Visual Classification," IEEE CVPR 2018, pp. 1625-1634. DOI: 10.1109/CVPR.2018.00175.
- Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, "Physical Adversarial Examples for Object Detectors," 12th USENIX Workshop on Offensive Technologies (WOOT '18), 2018. https://doi.org/10.48550/arXiv.1807.07769
- Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer, "Adversarial Patch," NIPS 2017. https://doi.org/10.48550/arXiv.1712.09665
- Junsik Hwang, "Adversarial Attack," blog post, 2020. https://jsideas.net/Adversarial_Attack/
- Ian Goodfellow, Jonathan Shlens, and Christian Szegedy, "Explaining and Harnessing Adversarial Examples," ICLR 2015. https://doi.org/10.48550/arXiv.1412.6572
- YouTube video: "This AI-generated Joe Rogan fake has to be heard to be believed." https://www.youtube.com/watch?time_continue=35&v=i7QNUZWS6VE&feature=emb_title
- YouTube video: "This AI lets you deepfake your voice to speak like Barack Obama." https://www.youtube.com/watch?v=i7QNUZWS6VE
- Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Pascal Frossard, "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks," IEEE CVPR 2016. https://doi.org/10.48550/arXiv.1511.04599
- Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh, "ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models," ACM AISec 2017. https://doi.org/10.48550/arXiv.1708.03999
- Wieland Brendel, Jonas Rauber, and Matthias Bethge, "Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models," ICLR 2018. https://doi.org/10.48550/arXiv.1712.04248
- Alexey Kurakin, Ian J. Goodfellow, Samy Bengio, "Adversarial Examples in the Physical World," ICLR 2017. https://doi.org/10.48550/arXiv.1607.02533
- Anish Athalye, Logan Engstrom, Andrew Ilyas, and Kevin Kwok, "Synthesizing Robust Adversarial Examples," ICML 2018. https://doi.org/10.48550/arXiv.1707.07397
- Nicholas Carlini, David Wagner, "MagNet and 'Efficient Defenses Against Adversarial Attacks' Are Not Robust to Adversarial Examples," arXiv, 2017. https://doi.org/10.48550/arXiv.1707.06728
- Yash Sharma and Pin-Yu Chen, "Attacking the Madry Defense Model with L1-based Adversarial Examples," ICLR 2018. https://doi.org/10.48550/arXiv.1710.10733
- Jonathan Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli, "Adversarial Risk and the Dangers of Evaluating Against Weak Attacks," ICML 2018. https://doi.org/10.48550/arXiv.1802.05666
- Anish Athalye, Nicholas Carlini, and David Wagner, "Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples," ICML 2018. https://doi.org/10.48550/arXiv.1802.00420
- Matt Fredrikson et al., "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures," ACM CCS '15. https://dl.acm.org/doi/10.1145/2810103.2813677
- Nicolas Papernot, Patrick McDaniel, et al., "The Limitations of Deep Learning in Adversarial Settings," IEEE S&P 2016. https://doi.org/10.48550/arXiv.1511.07528
- Tramèr, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016), "Stealing Machine Learning Models via Prediction APIs," 25th USENIX Security Symposium (USENIX Security 16), pp. 601-618. https://doi.org/10.48550/arXiv.1609.02943
- Shokri, R., Stronati, M., Song, C., & Shmatikov, V. (2017), "Membership Inference Attacks Against Machine Learning Models," 2017 IEEE Symposium on Security and Privacy (SP), pp. 3-18. https://doi.org/10.48550/arXiv.1610.05820
- Takemura, T., Yanai, N., & Fujiwara, T. (2020), "Model Extraction Attacks against Recurrent Neural Networks," arXiv preprint arXiv:2002.00123. https://doi.org/10.48550/arXiv.2002.00123
- Atli, B. G., Szyller, S., Juuti, M., Marchal, S., & Asokan, N. (2019), "Extraction of Complex DNN Models: Real Threat or Boogeyman?," arXiv preprint arXiv:1910.05429. https://doi.org/10.48550/arXiv.1910.05429
- Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A., "Practical Black-Box Attacks against Machine Learning," Proceedings of the 2017 ACM Asia Conference on Computer and Communications Security, pp. 506-519, ACM, 2017. https://doi.org/10.48550/arXiv.1602.02697
- Orekondy, T., Schiele, B., & Fritz, M., "Prediction Poisoning: Utility-Constrained Defenses against Model Stealing Attacks," International Conference on Learning Representations (ICLR), 2020. https://doi.org/10.48550/arXiv.1906.10908
- Lee, T., Edwards, B., Molloy, I., & Su, D., "Defending Against Model Stealing Attacks Using Deceptive Perturbations," arXiv preprint arXiv:1806.00054, 2018. https://doi.org/10.48550/arXiv.1806.00054
- Orekondy, T., Schiele, B., & Fritz, M., "Knockoff Nets: Stealing Functionality of Black-Box Models," CVPR 2019. https://doi.org/10.48550/arXiv.1812.02766
- The Verge, news report on Microsoft's Tay chatbot, Mar. 24, 2016. http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
- Dan Iter, Jade Huang, Mike Jermann, "Generating Adversarial Examples for Speech Recognition," Technical Report, 2017.
- Guoming Zhang, Chen Yan, Xiaoyu Ji, Taimin Zhang, Tianchen Zhang, Wenyuan Xu, "DolphinAttack: Inaudible Voice Commands," ACM Conference on Computer and Communications Security (CCS) 2017. https://dl.acm.org/doi/10.1145/3133956.3134052
- Nirupam Roy, Sheng Shen, Haitham Hassanieh, and Romit Roy Choudhury, "Inaudible Voice Commands: The Long-Range Attack and Defense," USENIX NSDI '18, 2018.
- Michael Kan, "Lasers Can Actually Hack Your Smart Speaker," PCMag, Nov. 4, 2019. https://www.pcmag.com/news/371757/lasers-can-actually-hack-your-smart-speaker
- Takeshi Sugawara et al., "Light Commands: Laser-Based Audio Injection Attacks on Voice-Controllable Systems," USENIX Security Symposium, Aug. 12-14, 2020.
- Chen Wang, Cong Shi, Yingying Chen, Yan Wang, Nitesh Saxena, "WearID: Wearable-Assisted Low-Effort Authentication to Voice Assistants using Cross-Domain Speech Similarity," CCS 2019. https://doi.org/10.48550/arXiv.2003.09083
- Yunmok Son, Hocheol Shin, Dongkwan Kim, Youngseok Park, Juhwan Noh, Kibum Choi, Jungwoo Choi, and Yongdae Kim, "Rocking Drones with Intentional Sound Noise on Gyroscopic Sensors," 24th USENIX Security Symposium (USENIX Security 15), 2015.
- Man Zhou et al., "PatternListener: Cracking Android Pattern Lock Using Acoustic Signals," ACM CCS 2018. https://doi.org/10.48550/arXiv.1810.02242
- Peng Cheng et al., "SonarSnoop: Active Acoustic Side-Channel Attacks," International Journal of Information Security, Vol. 19, pp. 213-228, 2020. https://doi.org/10.48550/arXiv.1808.10250
- Ye Jia, Yu Zhang, Ron J. Weiss, Quan Wang, Jonathan Shen, Fei Ren, Zhifeng Chen, Patrick Nguyen, Ruoming Pang, Ignacio Lopez Moreno, Yonghui Wu, "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis," Advances in Neural Information Processing Systems 31 (2018), pp. 4485-4495.
- Mordechai Guri , " AiR -ViBeR : Exfiltrating Data from Air-Gapped Computers via Covert Surface ViBrAtIoNs ," arxiv.org (2020).
- Mordechai Guri , "POWER - SUPPLaY : Leaking Data from Air-Gapped Systems by Turning the Power-Supplies Into Speakers," arxiv.org, 2020. https://doi.org/10.48550/arXiv.2005.00395
- Mordechai Guri , Yosef Solewicz , Andrey Daidakulov , Yuval Elovici, "MOSQUITO : Covert Ultrasonic Transmissions between Two Air-Gapped Computers using Speaker-to-Speaker Communication," arxiv.org, 2018. https://doi.org/10.48550/arXiv.1803.03422
- Ilia Shumailov Laurent Simon Jeff Yan Ross Anderson , "Hearing your touch: A new acoustic side channel on smartphones," ArXiv.org, 2019. https://doi.org/10.48550/arXiv.1903.11137
- Abe Davis, Michael Rubinstein, Neal Wadhwa , Gautham J. Mysore, Fredo Durand, William T. Freeman , "The Visual Microphone: Passive Recovery of Sound from Video," ACM Transcations on Graphics, Vol.33, No.4 , 2014. https://dl.acm.org/doi/10.1145/2601097.2601119
- Qiben Yan et al., "SurfingAttack: Interactive Hidden Attack on Voice Assistants Using Ultrasonic Guided Waves," NDSS 2020.
- Yulong Cao, Chaowei Xiao, Benjamin Cyr, Yimeng Zhou, Won Park, Sara Rampazzi, Qi Alfred Chen, Kevin Fu, and Z. Morley Mao, "Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving," Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019. https://doi.org/10.48550/arXiv.1907.06826
- Kubiak, I., Przybysz, A., & Musial, S. (2020), "Possibilities of Electromagnetic Penetration of Displays of Multifunction Devices," Computers, 9(3), 62. https://doi.org/10.3390/computers9030062