Chintan Maniyar et al.

Deep learning techniques are increasingly used in earth science applications, from climate change modelling to feature extraction from remote sensing imagery, given their advantage of richer contextual and hierarchical feature representation. However, deep learning comes at the expense of extensive computational resources and long training times to achieve benchmark results. This study proposes time-optimized deep learning techniques for training deep convolutional networks for one of the most sought-after feature extraction tasks: building extraction from satellite/aerial imagery. Building extraction is one of the most important tasks in the dynamic pipeline of urban applications such as urban planning and management, disaster management and urban mapping, among other geospatial applications. Automatically extracting buildings from remotely sensed imagery has always been a challenging task, given the spectral homogeneity of buildings with non-building features as well as the complex structural diversity within the image. With the availability of high-resolution open-source satellite and UAV data, deep learning techniques have greatly improved building extraction. However, training on such high-resolution data requires the networks to be significantly deeper, resulting in long model training and inference times. This study proposes a combination of two time-efficient methods to train a Dynamic Res-U-Net for building extraction in less time without reducing the number of training parameters: 1) using cyclical learning rates and the super-convergence concept, dynamically changing the learning rate during training to reach very high accuracy in far less time, and 2) training the layers of the network(s) in a specific order so that the last layers in particular perform better, leading to improved overall network performance in less time.
Building extraction results are gauged using the metrics of accuracy, Dice score, Intersection over Union (IoU) and F1-score. Comparing these metrics for the Res-U-Net trained conventionally versus with the proposed techniques shows a clear optimisation in terms of time: better results are achieved in fewer training epochs using the proposed time-optimised training techniques.
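The first technique, a cyclical learning rate, can be sketched with the standard triangular schedule: the rate oscillates linearly between a lower and an upper bound, with each half-cycle lasting a fixed number of iterations. A minimal illustration (the `base_lr`, `max_lr` and `step_size` values below are illustrative assumptions, not the study's settings):

```python
import math

def triangular_clr(iteration, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate schedule.

    The rate climbs linearly from base_lr to max_lr over step_size
    iterations, then descends back to base_lr, repeating each cycle.
    """
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

# One full cycle with step_size=4: the rate rises to the peak at
# iteration 4, then falls back to the base by iteration 8.
lrs = [triangular_clr(i, 1e-3, 6e-3, 4) for i in range(9)]
```

In practice the schedule is queried once per training iteration and the returned value is assigned to the optimizer before each update step.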

Chintan Maniyar et al.

Automatically extracting buildings from remotely sensed imagery has always been a challenging task, given the spectral homogeneity of buildings with non-building features as well as the complex structural diversity within the image. Traditional machine learning (ML) methods rely heavily on a large number of samples and are best suited for medium-resolution images. Unmanned aerial vehicle (UAV) imagery offers the distinct advantage of very high spatial resolution, which helps improve building extraction by characterizing patterns and structures. However, with finer detail, the number of images in a UAV dataset also increases manyfold, requiring robust processing algorithms. Deep learning algorithms, specifically Fully Convolutional Networks (FCNs), have greatly improved building extraction from such high-resolution remotely sensed imagery compared to traditional methods. This study proposes a deep learning-based segmentation approach that extracts buildings by transferring the learning of a deep Residual Network (ResNet) to the segmentation-oriented FCN U-Net. This combined dense architecture of ResNet and U-Net (Res-U-Net) is trained and tested for building extraction on the open-source Inria Aerial Image Labelling (IAIL) dataset. The dataset contains 360 orthorectified images with a tile size of 1500 m × 1500 m each, at 30 cm spatial resolution with red, green and blue bands, covering a total area of 805 km² across select US and Austrian cities. Quantitative assessments show that the proposed methodology outperforms current deep learning-based building extraction methods. Compared with a singular U-Net model for building extraction on the IAIL dataset, the proposed Res-U-Net model improves the overall accuracy from 92.85% to 96.5%, the mean F1-score from 0.83 to 0.88 and the mean IoU from 0.71 to 0.80.
Results show that combining two deep learning architectures in this way greatly improves building extraction accuracy compared to a singular architecture.
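The reported metrics are straightforward to compute from pixel-wise confusion counts; for binary masks the F1-score and Dice score coincide. A minimal sketch, not the study's actual evaluation code:

```python
def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, F1 (equivalently Dice) and IoU for binary masks.

    pred and truth are flat sequences of 0/1 pixel labels of equal length.
    """
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    tn = sum(p == 0 and t == 0 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    accuracy = (tp + tn) / len(pred)
    f1 = 2 * tp / (2 * tp + fp + fn)   # identical to the Dice score
    iou = tp / (tp + fp + fn)          # Intersection over Union
    return accuracy, f1, iou

# Toy 4-pixel example: one true positive, one false positive.
acc, f1, iou = segmentation_metrics([1, 1, 0, 0], [1, 0, 0, 0])
```

Note that IoU is always the strictest of the three, which is why the IoU gains reported above (0.71 to 0.80) are the most telling.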

Chintan Maniyar et al.

Cyanobacterial Harmful Algal Blooms (CyanoHABs) are increasingly becoming a major water quality and public health hazard worldwide. Untreated CyanoHABs can severely affect human health through their toxin-producing ability, causing physiological and neurological disorders such as non-alcoholic liver disease and dementia. Transfer of these cyanotoxins via the food chain only accelerates the public health hazard. CyanoHABs can also lead to a decline in aquatic and animal life, hamper recreational activities at waterbodies and ultimately affect a country's economy gravely. CyanoHABs require nutrient-rich, warm aquatic environments to bloom, and their proliferation in increasingly warmer areas of the world can be an indirect indicator of global climate change. Many lakes in the United States have been experiencing such CyanoHABs in the summers, which grow more severe every year, consistently leading to increased public health implications. A recent study (September 2021) by the Centers for Disease Control and Prevention quantified hospital visits against the trend of such CyanoHABs and indeed observed a strong correlation between the two. This necessitates a user-friendly and accessible infrastructure to monitor inland and coastal waterbodies throughout the U.S. for such blooms. We present a remote sensing-based approach wrapped in an intuitive web app, "CyanoTRACKER", which can help detect CyanoHABs at a global level and act as an early warning system, potentially preventing or lessening public health implications. CyanoHABs are dominated by the phycocyanin pigment, which absorbs sunlight strongly around the 620 nm wavelength. Owing to this specific absorption characteristic and the availability of a satellite band at exactly 620 nm, we use open-source Sentinel-3 OLCI satellite data to detect the presence of CyanoHABs.
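One common way to exploit an absorption feature like the one at 620 nm is a spectral-shape (line-height) index: a straight baseline is drawn between two neighbouring bands, and reflectance at 620 nm falling below that baseline indicates absorption. A minimal sketch under stated assumptions: the band centres (560, 620 and 665 nm, matching Sentinel-3 OLCI bands 6, 7 and 8) are real, but the sample reflectance values are illustrative and this is not the CyanoTRACKER detection algorithm itself:

```python
def spectral_shape_620(r560, r620, r665):
    """Line-height index at 620 nm from three reflectance values.

    A negative value means reflectance at 620 nm dips below the straight
    baseline between 560 nm and 665 nm, consistent with absorption by a
    pigment such as phycocyanin.
    """
    baseline = r560 + (r665 - r560) * (620.0 - 560.0) / (665.0 - 560.0)
    return r620 - baseline

# Illustrative reflectances only: a dip at 620 nm yields a negative index,
# while a flat spectrum stays at or above the baseline.
bloom_like = spectral_shape_620(0.030, 0.020, 0.028)
clear_like = spectral_shape_620(0.030, 0.029, 0.028)
```

Applied per pixel to an OLCI scene, an index like this supports the click-to-detect behaviour described below.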
CyanoTRACKER is a user-friendly Google Earth Engine dashboard, accessible with only a browser and an internet connection, that allows for a variety of near-daily analysis options: a) select any location in the world and view the satellite image for a date range of choice, b) click on any pixel in the satellite image to detect the presence or absence of cyanobacteria, and c) visualize the spatial spread as well as the temporal phenology of an ongoing or potential incoming bloom. The dashboard is usable with minimal training by water managers, and indeed by anyone who wishes to use it, and can effectively serve as an early warning system for CyanoHAB-induced disease outbreaks.