[Incomplete] "Current ML techniques in Landslide Detection & Prediction"


Category : Notes


Notes from ‘A dual-encoder U-Net for landslide detection using Sentinel-2 and DEM data’.

What are they doing:

  • Semantic segmentation (per-pixel classification) for landslide prediction.
  • A dual-encoder architecture that uses both Sentinel-2 imagery and DEM data (see the sketch after this list).
  • A workflow to construct the landslide dataset (details still unclear to me).
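
The paper's exact architecture isn't reproduced here, but a minimal toy PyTorch sketch of the general idea (my own layer sizes and channel counts, not theirs) would look like this: one encoder branch for the optical bands, one for the DEM, concatenated at the bottleneck and decoded back to a per-pixel landslide logit.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # two 3x3 convs with BatchNorm + ReLU -- the usual U-Net building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualEncoderUNet(nn.Module):
    """Toy dual-encoder U-Net: one encoder for optical bands, one for the DEM."""
    def __init__(self, optical_ch=4, dem_ch=1, base=32):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.opt1, self.opt2 = conv_block(optical_ch, base), conv_block(base, base * 2)
        self.dem1, self.dem2 = conv_block(dem_ch, base), conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 4, base * 4)           # fuse the two encoders
        self.up = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec = conv_block(base * 4, base * 2)                  # upsampled (base*2) + two skips (base each)
        self.head = nn.Conv2d(base * 2, 1, 1)                      # one logit per pixel: landslide / not

    def forward(self, optical, dem):
        o1, d1 = self.opt1(optical), self.dem1(dem)
        o2, d2 = self.opt2(self.pool(o1)), self.dem2(self.pool(d1))
        x = self.bottleneck(torch.cat([o2, d2], dim=1))            # concatenate encoder features
        x = self.up(x)
        x = self.dec(torch.cat([x, o1, d1], dim=1))                # skip connections from both branches
        return self.head(x)                                        # (B, 1, H, W) landslide logits

# quick shape check on random data
model = DualEncoderUNet()
logits = model(torch.randn(2, 4, 64, 64), torch.randn(2, 1, 64, 64))
print(logits.shape)  # torch.Size([2, 1, 64, 64])
```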

Good for us:

  • They claim that the commonly used models are U-Net, Attention U-Net, and SegNet (which are all a little dated by now), so we can definitely develop a model that is more grounded in landslide science.

Bad for us:

Datasets Available:

  • Semantic segmentation dataset: Google Earth images at 2.39 m spatial resolution over the Jinsha River basin, with corresponding per-pixel annotations.
  • Iburi Landslide Dataset: check out the original paper for the data description.

Take-aways:

  • Since this paper is very recent (published last week lol), we can rely on its related-work section a bit to understand the background work so far.

There are mainly five categories of methods: ‘visual interpretation-based’, ‘change detection-based’, ‘knowledge-based’, ‘ML-based’, and ‘DL-based’.

Most classical ML methods (SVMs, random forests, logistic regression, Bayesian classifiers, and DST) have already been tried out; the issue is that they require heavy data preprocessing and hand-crafted feature engineering.
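
For reference, this classical pipeline boils down to hand-engineering per-pixel features (band values, spectral indices, slope from the DEM, etc.) and feeding them to an off-the-shelf classifier. A minimal scikit-learn sketch, with purely illustrative stand-in features and labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical hand-engineered per-pixel features (e.g. red, NIR, NDVI, slope).
# In practice these come from the imagery + DEM; random stand-ins are used here.
n_pixels = 10_000
X = np.random.rand(n_pixels, 4)                     # (pixels, features)
y = (np.random.rand(n_pixels) > 0.9).astype(int)    # 1 = landslide pixel, 0 = background

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X, y)

# Classify every pixel of a new 256x256 tile (again a random stand-in).
tile_features = np.random.rand(256 * 256, 4)
pred_mask = clf.predict(tile_features).reshape(256, 256)
print(pred_mask.sum(), "pixels flagged as landslide")
```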

DL methods are mainly object-detection-based (using bounding boxes to locate landslides in a given image) and segmentation-based (classifying each pixel as landslide or not-landslide).

DL object detection (R-CNNs, Mask R-CNNs, YOLOs + attention). Drawback: these only give a rectangular box around the landslide, not its exact boundary.
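
To make that drawback concrete, here is a toy numpy illustration (synthetic mask, not real data) of how much of the tightest bounding box around an irregular landslide-shaped blob is actually background:

```python
import numpy as np

# Synthetic binary mask with a thin elliptical "landslide" blob.
mask = np.zeros((100, 100), dtype=bool)
rows, cols = np.ogrid[:100, :100]
mask[((rows - 50) ** 2 / 900 + (cols - 50) ** 2 / 100) <= 1] = True

# Tightest axis-aligned bounding box around the mask.
ys, xs = np.where(mask)
box_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)

print(f"mask pixels: {mask.sum()}, box pixels: {box_area}")
print(f"fraction of the box that is background: {1 - mask.sum() / box_area:.2f}")
```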

DL segmentation (FCN, U-Net, GCN, DeepLab on the River Landslide dataset & LandsNet for earthquake-triggered landslide detection).

Next Steps:

  • Run an R-CNN-based model for instance segmentation.
  • Define variations on the R-CNN architecture for instance segmentation.
  • Pull the YOLOv8 model (and more YOLO variants too) from the official package and run it on our data (with 7 or more bands).
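
A rough sketch of that last YOLO step, assuming the official Ultralytics package: load a pretrained YOLOv8 segmentation model and widen its first convolution so it accepts 7 input bands instead of 3. The attribute path into the model internals (model.model.model[0].conv) is an assumption and may differ across Ultralytics versions; a custom dataloader that yields 7-band tensors would still be needed on top of this.

```python
import torch
import torch.nn as nn
from ultralytics import YOLO  # official YOLOv8 package

model = YOLO("yolov8n-seg.pt")        # pretrained segmentation variant
first = model.model.model[0].conv     # first Conv2d (3 input channels); path may vary by version

# Build a 7-channel replacement; keep the pretrained RGB filters and initialise
# the 4 extra input channels with their mean so no pretraining is thrown away.
new_conv = nn.Conv2d(7, first.out_channels, first.kernel_size,
                     stride=first.stride, padding=first.padding,
                     bias=first.bias is not None)
with torch.no_grad():
    new_conv.weight[:, :3] = first.weight
    new_conv.weight[:, 3:] = first.weight.mean(dim=1, keepdim=True)
model.model.model[0].conv = new_conv

# Forward pass with a 7-band dummy tile to check the surgery worked.
# (The stock model.train(...) pipeline assumes 3-channel images, so real
# training would need a custom dataloader for 7-band inputs.)
_ = model.model(torch.randn(1, 7, 640, 640))
```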

