
Although existing machine learning-based intrusion detection systems for the Internet of Things (IoT) usually perform well in static environments, they struggle to sustain that performance over time in dynamic ones. The IoT, however, is a highly dynamic and heterogeneous environment, which gives rise to what are known as data drift and concept drift. Data drift refers to changes in the relationships among the independent features, mainly caused by changes in data quality over time. Concept drift refers to changes over time in the relationship between the input and output data of the machine learning model. To detect data and concept drifts, we first propose a drift detection technique that capitalizes on Principal Component Analysis (PCA) to study the change in the variance of the features across the intrusion detection data streams. We also discuss an online outlier detection technique that identifies outliers diverging from both historical and temporally close data points. To counter these drifts, we discuss an online deep neural network that dynamically adjusts the sizes of its hidden layers based on the Hedge weighting mechanism, enabling the model to steadily learn and adapt as new intrusion data arrive. Experiments conducted on an IoT-based intrusion detection dataset suggest that our solution stabilizes intrusion detection performance on both the training and testing data compared to the static deep neural network model that is widely used for intrusion detection.
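To make the PCA-based drift detection idea more concrete, the sketch below illustrates one way such a check could be implemented: fit PCA on a reference window of the stream, then flag drift when the per-component variance profile of a new window diverges from the reference profile. This is a minimal illustration under stated assumptions, not the authors' exact algorithm; the window sizes, the `drift_score` distance, and the `threshold` value are all hypothetical choices for demonstration.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative sketch only: compare the variance profile of new stream
# windows (projected onto reference principal components) against the
# profile of an initial, assumed drift-free reference window.

def fit_reference(window, n_components=5):
    """Fit PCA on the reference window and return its variance profile."""
    pca = PCA(n_components=n_components)
    pca.fit(window)
    return pca, pca.explained_variance_ratio_

def window_variance_profile(pca, window):
    """Normalized variance of a new window along the reference components."""
    projected = pca.transform(window)
    var = projected.var(axis=0)
    return var / var.sum()

def drift_score(ref_profile, new_profile):
    """Total variation distance between the two variance profiles."""
    return 0.5 * np.abs(ref_profile - new_profile).sum()

# Hypothetical usage on a simulated feature stream.
rng = np.random.default_rng(0)
reference = rng.normal(size=(500, 20))          # assumed drift-free window
pca, ref_profile = fit_reference(reference)

threshold = 0.2  # assumed sensitivity; would be tuned on validation data
for t in range(10):
    window = rng.normal(size=(200, 20))
    if t >= 5:                                  # inject drift halfway through
        window[:, :5] *= 3.0                    # some feature variances change
    score = drift_score(ref_profile, window_variance_profile(pca, window))
    print(f"window {t}: drift score = {score:.3f}, drift = {score > threshold}")
```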