Laboratory_2



                                           

IMAGE SEGMENTATION AND NEAREST NEIGHBOR CLASSIFICATION



Introduction

There are numerous land cover and land use maps in the world, and most of them are produced with remote sensing techniques. From the many images generated throughout the remote sensing process, we interpret the map and extract useful information about the condition of the land and the efficiency of its use. Today I would like to introduce one of the handiest remote sensing analysis methods, which allows us to implement complex processes with simple, ready-made algorithms.

Object-Based Image Analysis

There are two different methods for classifying land cover from a satellite image. Pixel-based classification, typically performed with the maximum likelihood and ISODATA methods, tends to produce many misclassifications because of vegetation canopies and shadows. As a result, pixel-based classification often suffers from a "salt and pepper" effect that is difficult to clean up.
Fortunately, there is an alternative called object-based image analysis (or classification). It is a computational approach modeled on how humans perceive objects. We can use convenient algorithms whose parameters we adjust, and then execute those algorithms to derive land cover data based on the parameters we set. Image objects are built from the pixel information and correspond to homogeneous regions; a new region is delineated wherever adjacent pixels differ significantly.
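To make the idea concrete, below is a minimal sketch of the object-based workflow in Python, using scikit-image's SLIC superpixels as a stand-in segmenter. It only illustrates the concept of classifying homogeneous regions instead of single pixels; it is not how eCognition implements multiresolution segmentation, and the image array and parameter values are placeholders.

```python
import numpy as np
from skimage.segmentation import slic

# Placeholder for a stacked multispectral image: rows x cols x bands.
image = np.random.rand(200, 200, 4).astype(np.float32)

# Group adjacent, spectrally similar pixels into segments (image objects).
segments = slic(image, n_segments=500, compactness=0.1,
                channel_axis=-1, start_label=1)

# The unit of analysis is now the segment, not the pixel: a per-object mean
# spectrum suppresses the pixel-level "salt and pepper" noise.
object_means = np.array([image[segments == label].mean(axis=0)
                         for label in np.unique(segments)])
print(object_means.shape)  # (number of objects, number of bands)
```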

Objective

·         To be able to use the segmentation algorithm and apply it to your image.
·         To learn the parameters of the multiresolution segmentation algorithm and be able to distinguish different types of objects by size and shape.
·         To know how to build and execute the steps in your process tree to complete your classification.
·         To be able to use nearest neighbor classification.
·         To be able to add parameters and perform feature optimization.
·         Finally, to be able to run the classification, merge, and export algorithms.

Methodology

1.       Open eCognition Developer, import the panchromatic satellite image first, and then add the rest of the layers.
2.       Rename the layers in the following order: PAN, BLUE, GREEN, RED, NIR, SWIR1 and SWIR2.
3.       Create a subset with Subset Selection and enter the parameters you are given.
 
Figure 1


4.       Set up the data display you want to work with. In this case we use the multi-spectral display, because colors are typically hard to distinguish in the panchromatic display.
5.       Set up your process tree in the following order: segmentation, classification, merge, and export.
6.       For each step, use Insert Child to add the corresponding algorithm: multiresolution segmentation, classification, merge region, and export vector layer.
7.       Create the classes you would like to map. In this case the classes are forest, non-vegetation, other vegetation, and water.
8.       Set up your class hierarchy by clicking Insert Class. Then give each class you are going to classify a standard nearest neighbor expression based on the mean of all the layers (see the first sketch after this list).
9.       Execute the segmentation algorithm ONLY.
10.     Select your samples by double-clicking the segments you would like to assign to each class (pick several of them in different parts of the map).
11.     Choose your features in Feature Space Optimization and add the mean values; you can also add the standard deviations. Experiment with different feature combinations until you find a preferable one, for example by checking the graph and confirming that the separation distance is high enough (see the second sketch after this list).
12.     Finally, execute the REST OF the algorithms in the process tree in order: classification, merge, and export.
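As a rough illustration of steps 8 and 12, the sketch below classifies every segment by assigning it the class of the nearest sample object in feature space (1-nearest neighbor on the band means). It approximates the idea behind eCognition's standard nearest neighbor classifier rather than reproducing it, and all arrays here are hypothetical placeholders.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# One row per segment; columns are, e.g., the mean of PAN, BLUE, GREEN,
# RED, NIR, SWIR1 and SWIR2 within that segment (placeholder values).
object_features = np.random.rand(500, 7)

# Indices and labels of the segments picked as samples by clicking the map.
sample_idx = np.array([3, 42, 117, 256, 310, 480])
sample_labels = np.array(["forest", "water", "other vegetation",
                          "non-vegetation", "forest", "water"])

# 1-nearest neighbor: each unclassified object takes the class of the
# sample whose feature vector lies closest to it in feature space.
nn = KNeighborsClassifier(n_neighbors=1)
nn.fit(object_features[sample_idx], sample_labels)
predicted = nn.predict(object_features)
```

In the eCognition process tree, the classification algorithm produces the equivalent result; the merge and export steps then dissolve adjacent objects of the same class and write them to a vector layer.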
    
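Step 11's Feature Space Optimization ranks feature combinations by how well they separate the classes; the reported separation distance is, roughly, the smallest distance between samples of different classes. The brute-force sketch below, built on small hypothetical sample arrays, conveys that idea but is not eCognition's actual search procedure.

```python
import itertools
import numpy as np
from scipy.spatial.distance import cdist

# Placeholder sample data: rows = sample objects, columns = candidate
# features (e.g. band means and standard deviations).
sample_features = np.random.rand(30, 14)
sample_labels = np.random.choice(
    ["forest", "water", "other vegetation", "non-vegetation"], size=30)

def min_class_separation(features, labels):
    """Smallest distance between two samples that belong to different classes."""
    dists = cdist(features, features)
    different = labels[:, None] != labels[None, :]
    return dists[different].min()

# Try every subset of up to three candidate features and keep the subset
# whose closest between-class samples are farthest apart.
candidates = (combo
              for r in range(1, 4)
              for combo in itertools.combinations(range(sample_features.shape[1]), r))
best = max(candidates,
           key=lambda combo: min_class_separation(sample_features[:, list(combo)],
                                                  sample_labels))
print(best)
```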

Results

Multiresolution segmentation is the algorithm we need in the segmentation step. Within this algorithm we can set parameters such as scale, shape, and compactness, and depending on the thresholds you choose you will get differently sized and shaped segments. You can also weight the input layers, giving more influence to whichever layer matters most for the objects you want. For scale, a large value produces large segments, which is useful when analyzing a large study area; if your study area is small and you want to analyze features in a specific region, a small scale value is better. For shape, weighting shape heavily, say 0.9, automatically leaves only 10 percent of the weight on color. For a land cover analysis like this lab, a low shape value is appropriate because it puts most of the weight on color. Compactness trades off against smoothness in the same way: the more weight you put on compactness, the more rigid the segments become, and vice versa for smoothness.
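The way these parameters interact can be written down explicitly. The sketch below follows the published multiresolution segmentation criterion (Baatz and Schäpe): color and shape heterogeneity are mixed by the shape weight, shape heterogeneity mixes compactness and smoothness, and two neighboring objects are merged only while the combined cost stays below the square of the scale parameter. The numbers are placeholders, and the function is an illustrative reading of the eCognition dialog, not its internal code.

```python
# Fusion criterion for multiresolution segmentation (after Baatz & Schaepe).
# 'shape' and 'compactness' mirror the weights in the eCognition dialog.
def merge_cost(h_color, h_compact, h_smooth, shape=0.1, compactness=0.5):
    h_shape = compactness * h_compact + (1 - compactness) * h_smooth
    return (1 - shape) * h_color + shape * h_shape

# Two neighboring objects are merged only while the cost stays below the
# square of the scale parameter, so a larger scale yields larger segments.
scale = 10
cost = merge_cost(h_color=30.0, h_compact=40.0, h_smooth=20.0)
print(cost < scale ** 2)  # True -> this merge would be allowed
```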

Among the six segmentations I produced, I would choose scale 10, shape 0.1, and compactness 0.5. Given the image I was provided and the class features I want to classify, I need a small scale, most of the weight on color, and a moderate compactness.



Figure 2.1
Figure 2.2
Figure 2.3
Figure 2.4
Figure 2.5
Figure 2.6
Figure 3.1 (Mean feature)
Figure 3.2 (Mean and standard deviation features)
Figure 4.1
Figure 4.2
Figure 4.3
