CC BY-NC-ND 4.0 · Yearb Med Inform 2023; 32(01): 269-281
DOI: 10.1055/s-0043-1768731
Section 12: Sensor, Signal and Imaging Informatics
Survey

Security and Privacy in Machine Learning for Health Systems: Strategies and Challenges

Erikson J. de Aguiar, Caetano Traina Jr., Agma J. M. Traina
Institute of Mathematics and Computer Science, University of São Paulo, Brazil

Summary

Objectives: Machine learning (ML) is a powerful asset to support physicians in decision-making procedures, providing timely answers. However, ML systems for health are exposed to security attacks and privacy violations. This paper surveys studies of security and privacy in ML for health.

Methods: We examined attacks, defenses, and privacy-preserving strategies, and discussed their challenges. Our research protocol was as follows: we started with a manual search, defined the search string, removed duplicate papers, filtered papers first by title and abstract and then by full text, and analyzed the contributions of each, including their strategies and challenges. In the end, we collected and discussed 40 papers on attacks, defenses, and privacy.

Results: Our findings identify the most employed strategies in each domain. Trends in attacks include universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes for generating malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). Privacy-preserving strategies include federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Open challenges comprise the development of attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that strengthen FL.
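To make the attack and defense trends above concrete, the minimal PyTorch sketch below illustrates the fast gradient sign method (FGSM), the simplest member of the gradient-based attack family underlying many of the attacks surveyed here, together with a single adversarial-training step as a defense. This is an illustration only: the tiny stand-in classifier, the input shapes, and the hyperparameters (epsilon, learning rate) are our own assumptions, not drawn from any of the surveyed systems.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in classifier for illustration; any differentiable model works.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    def fgsm_attack(model, x, y, epsilon=0.03):
        # FGSM: perturb the input one step along the sign of the loss gradient.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

    def adversarial_training_step(model, x, y, epsilon=0.03):
        # Adversarial training: fit the model on adversarial examples crafted
        # from the current batch, improving robustness to FGSM-like attacks.
        x_adv = fgsm_attack(model, x, y, epsilon)
        optimizer.zero_grad()  # discard gradients accumulated while crafting x_adv
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy batch standing in for medical images scaled to [0, 1].
    x = torch.rand(4, 1, 28, 28)
    y = torch.randint(0, 2, (4,))
    print(adversarial_training_step(model, x, y))

In practice, stronger iterative attacks (e.g., projected gradient descent) are typically used inside the training loop; FGSM is shown only because it is the simplest instance of the family.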

Conclusions: It is critical to explore security and privacy in ML for health, as the risks keep growing and vulnerabilities remain open. Our study presents strategies and challenges to guide future research on security and privacy issues in ML applied to health systems.
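As a complementary illustration of the privacy-preserving strategies identified in the Results, the sketch below simulates federated averaging in which each client clips and noises its local update before sharing it, in the spirit of differentially private SGD. The client count, model, clipping norm, and noise scale are illustrative assumptions, and the noise here is not calibrated to a formal (epsilon, delta) privacy guarantee.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical global model standing in for a clinical classifier.
    model = nn.Sequential(nn.Linear(10, 2))

    def client_delta(model, x, y, lr=0.1):
        # One local gradient step on a client's private data; only the
        # resulting parameter delta ever leaves the site.
        loss = F.cross_entropy(model(x), y)
        grads = torch.autograd.grad(loss, list(model.parameters()))
        return [(-lr * g).detach() for g in grads]

    def privatize(delta, clip_norm=1.0, noise_std=0.05):
        # Clip the update's global L2 norm, then add Gaussian noise,
        # so a single record cannot dominate the shared update.
        flat = torch.cat([d.reshape(-1) for d in delta])
        scale = min(1.0, clip_norm / (flat.norm().item() + 1e-12))
        return [d * scale + noise_std * torch.randn_like(d) for d in delta]

    # Simulate three clients whose raw data never leaves the site.
    client_updates = []
    for _ in range(3):
        x = torch.randn(8, 10)            # stand-in for private patient features
        y = torch.randint(0, 2, (8,))
        client_updates.append(privatize(client_delta(model, x, y)))

    # Server-side federated averaging of the privatized updates.
    with torch.no_grad():
        for p, *deltas in zip(model.parameters(), *client_updates):
            p.add_(torch.stack(deltas).mean(dim=0))

Production systems calibrate the noise standard deviation to the clipping norm and an explicit privacy budget; the fixed value above is only a placeholder for that calibration.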



Publication History

Article published online:
26 December 2023

© 2023. IMIA and Thieme. This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed, or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)

Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany

 