https://doi.org/10.55640/ijdsml-05-01-08

Smooth Perturbations in Time Series Adversarial Attacks: Challenges and Defense Strategies

Christian Sorensen, Department of Computer Science, Aarhus University, Denmark
Mikkel Jensen, Department of Computer Science, Aarhus University, Denmark

Abstract

Adversarial attacks on time series data have attracted growing attention because of their potential to undermine the robustness of machine learning models. These attacks perturb input data to cause misclassification, erroneous predictions, or a general degradation of model performance. This paper investigates adversarial attacks on time series, focusing on smooth perturbations, which are hard to detect because they blend into the natural dynamics of the signal rather than introducing the high-frequency noise typical of standard gradient-based attacks. We characterize these smooth perturbations and review defense approaches designed to mitigate their impact. Our analysis highlights the challenges of, and potential solutions for, enhancing the robustness of time series models against adversarial threats.
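To make the setting concrete, the following minimal sketch (illustrative only, not the specific method analyzed in this paper) shows how a standard gradient-sign attack can be made smooth by low-pass filtering the perturbation before applying it. The linear scorer and the helper smooth_fgsm are hypothetical stand-ins for any differentiable time series classifier; with a real network the gradient would come from automatic differentiation.

# Illustrative sketch: a "smooth" FGSM-style attack on a univariate
# time series. The toy linear classifier (score = w @ x) is a
# hypothetical stand-in for a trained model.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_fgsm(x, w, y, eps=0.3, sigma=5.0):
    """Craft a smooth adversarial example for a linear scorer w @ x.

    x     : (T,) clean input series
    w     : (T,) weights of a toy linear classifier
    y     : +1 / -1 true label
    eps   : magnitude budget for the perturbation
    sigma : Gaussian smoothing width; larger => smoother perturbation
    """
    # For a hinge-style loss max(0, 1 - y * (w @ x)), the gradient
    # w.r.t. x is -y * w where the margin is violated; FGSM keeps
    # only its sign.
    delta = eps * np.sign(-y * w)
    # Low-pass filter the sawtooth-like raw perturbation, then rescale
    # so the smoothed version still uses the full eps budget.
    delta = gaussian_filter1d(delta, sigma=sigma)
    delta = eps * delta / (np.abs(delta).max() + 1e-12)
    return x + delta

rng = np.random.default_rng(0)
T = 200
x = np.sin(np.linspace(0, 6 * np.pi, T))  # clean series
w = rng.normal(size=T)                    # toy classifier weights
x_adv = smooth_fgsm(x, w, y=+1)
print("clean score:", w @ x, "adversarial score:", w @ x_adv)

Filtering shrinks the perturbation's amplitude, so it is rescaled back to the eps budget; increasing sigma trades attack strength for a perturbation that blends more smoothly into the signal.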

Keywords

Time Series, Adversarial Attacks, Smooth Perturbations, Adversarial Machine Learning, Robustness



How to Cite

Sorensen, C., & Jensen, M. (2025). Smooth Perturbations in Time Series Adversarial Attacks: Challenges and Defense Strategies. International Journal of Data Science and Machine Learning, 5(01), 42-48. https://doi.org/10.55640/ijdsml-05-01-08