Feature Matching with OpenCV

Abusheik A


What is Object Detection?

Object detection is a computer vision (CV) technique in which a software system can detect, locate, and trace an object in a given image or video. The special attribute of object detection is that it identifies the class of each object (e.g. person, table, chair) and its location-specific coordinates in the given source.

Want to know how the location of an object is pointed out?

It is very simple. The location is pointed out by drawing a bounding box around the object. The bounding box may or may not accurately locate the position of the object.

The ability to locate the object inside an image defines the performance of the algorithm used for detection. Face detection is one example of object detection. The object detection algorithms might be pre-trained or can be trained from scratch.
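For instance, once a detector returns box coordinates, OpenCV can draw the bounding box directly on the image. Here is a minimal sketch, assuming a placeholder image path, class label, and coordinates (they are not taken from this post's project):

import cv2

# Hypothetical example: draw a bounding box returned by a detector
frame = cv2.imread("frame.jpg")                 # placeholder image path
x1, y1, x2, y2 = 120, 80, 260, 340              # placeholder box coordinates
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.putText(frame, "person", (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("frame_with_box.jpg", frame)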

What are classes, and why are they important in video surveillance?

I’m going to discuss a few reasons why classes are important in video surveillance. Consider a worker in an industry that handles hazardous chemicals. They should wear all the safety equipment before they start working.

Nowadays, many industries use AI systems that detect whether workers are wearing all the precautionary equipment and, if not, raise a voice alert before they enter the workspace.

Class Swapping:

With classes, there is a major issue called class swapping. When classes get swapped, unwanted or wrong alerts can occur, which affects both the workspace and the workers. So always remember: class validation plays a key role before building any model.

(Before swapping)
(After swapping)

How do we validate automatically with the FLANN-based matcher?

The annotation classes may contain 5,000+ images each, for which manual human validation is not feasible. Therefore, we will use a method called feature matching to find the matches quickly, even in larger datasets.

“Feature matching refers to finding corresponding features from two similar images based on a search distance algorithm. One of the images is considered as the source & the other as target and the feature matching technique is used to either find or derive and transfer attributes from the source to the target image.”

FLANN-based Matcher:

FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest neighbor search in large datasets and for high dimensional features.
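As a quick illustration of what FLANN does here, the sketch below pairs SIFT descriptors with a FLANN KD-tree index and Lowe's ratio test. This is a minimal example with placeholder image paths, not the validation script itself:

import cv2

img1 = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("train.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN with a KD-tree index, suitable for SIFT's float descriptors
index_params = dict(algorithm=1, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)

matches = flann.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
print(len(good), "good matches out of", len(matches))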

Implementation

To implement the FLANN-based matcher in our code, follow the steps below.

STEP 1: Create a folder for the demo project.

STEP 2: Place the models inside the demo project folder.

STEP 3: Place the appropriate videos in the same demo project folder.

STEP 4: Refer to this link for implementing object detection with OpenCV.
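If you do not yet have a detector wired up, the rough sketch below shows one way to do it with OpenCV's dnn module. It is only an assumption for illustration (the model files, input size, and thresholds are placeholders), not the code from the linked article:

import cv2

# Hypothetical YOLO-style model files placed in the demo project folder
net = cv2.dnn.readNet("model.weights", "model.cfg")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

frame = cv2.imread("frame.jpg")   # placeholder frame taken from the video
class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
for class_id, score, (x, y, w, h) in zip(class_ids, scores, boxes):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", frame)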

Output: The sample output looks like the images attached below.

(Without Detection)
(With Detection)

STEP 5: To crop the classes, use the below-mentioned function.

import os
import cv2

save_path = "cropped_images"   # adjust: folder where the per-class crops will be saved
count = 0                      # global frame counter used in the saved file names

def crop(img, result):
    global count
    count += 1
    # result is expected as {class_name: [(x1, y1, x2, y2), ...]}
    for cls, objs in result.items():
        if not os.path.exists(os.path.join(save_path, cls)):
            os.mkdir(os.path.join(save_path, cls))
        for x1, y1, x2, y2 in objs:
            img_crop = img[y1:y2, x1:x2]
            img_save = save_path + "/" + cls + "/" + cls + "_" + str(count) + ".jpg"
            try:
                cv2.imwrite(img_save, img_crop)
            except Exception:
                print("Error Image", img_save)
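For context, crop expects the detection result as a dictionary that maps each class name to a list of box coordinates. A hypothetical call looks like this (the class names and coordinates are made up for illustration):

import cv2

# Hypothetical detection result for one frame: {class_name: [(x1, y1, x2, y2), ...]}
result = {
    "helmet": [(120, 40, 210, 120)],
    "person": [(80, 30, 300, 460)],
}
frame = cv2.imread("frame.jpg")   # placeholder frame
crop(frame, result)               # saves one crop per box under save_path/<class_name>/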

STEP 6: The saved folder will now contain one sub-folder per class.

(Person)
(coat)
(helmet)

STEP 7: Let’s start the Python code with the FLANN-based matcher.

import time
import cv2
import shutil
from tqdm import tqdm
import numpy as np
import os

# Add the path of the cropped folder in which all the class folders are present
cropped_folder_path = r"D:\Abu\My Learnings\Classes Validation\DataBox\Classes Validation\cropped_images"
# Add the path of the error folder in which the error classes will be stored
error_folder_path = r"D:\Abu\My Learnings\Classes Validation\DataBox\Classes Validation\Error"

folder_dic, master_dic_full, master_dic, Images_to_compare, Title, Original_Title = {}, {}, {}, [], [], []
percentage_of_similarity, Percentage_of_Non_similarity = 0, 0

# Using SIFT descriptors and the FLANN matcher, we will find the image similarity
sift = cv2.SIFT_create()
index_params = dict(algorithm=0, trees=1)
search_params = dict()

# Index every class folder inside the cropped folder and create a matching error folder
for count, folder in enumerate(os.listdir(cropped_folder_path)):
    folder_dic[folder] = count
    if not os.path.exists(os.path.join(error_folder_path, folder)):
        os.mkdir(os.path.join(error_folder_path, folder))

# In these loops we match one class against all the other classes and move suspect images to the error folder
for main_folder, index in folder_dic.items():
    time.sleep(1)
    print("Main Folder ->", main_folder)
    for master_images_path in os.listdir(os.path.join(cropped_folder_path, main_folder)):
        master_images = cv2.imread(cropped_folder_path + "/" + main_folder + "/" + master_images_path)
        for check_folder in os.listdir(cropped_folder_path):
            if folder_dic[check_folder] > index:
                time.sleep(2)
                for check_image_path in tqdm(os.listdir(os.path.join(cropped_folder_path, check_folder))):
                    check_image = cv2.imread(cropped_folder_path + "/" + check_folder + "/" + check_image_path)
                    # Quick exact-duplicate check: same shape and zero pixel difference
                    if master_images.shape == check_image.shape:
                        difference = cv2.subtract(master_images, check_image)
                        b, g, r = cv2.split(difference)
                        if cv2.countNonZero(b) == 0 and cv2.countNonZero(g) == 0 and cv2.countNonZero(r) == 0:
                            print("Similarity: 100%")
                    flann = cv2.FlannBasedMatcher(index_params, search_params)
                    kp_1, desc_1 = sift.detectAndCompute(master_images, None)
                    kp_2, desc_2 = sift.detectAndCompute(check_image, None)
                    if desc_1 is None or desc_2 is None:
                        continue  # skip images where SIFT finds no keypoints
                    matches = flann.knnMatch(desc_1, desc_2, k=2)
                    similar_images, Non_similar_images = [], []
                    # Lowe's ratio test: keep a match only if it is clearly better than the runner-up
                    for pair in matches:
                        if len(pair) != 2:
                            continue
                        m, n = pair
                        if m.distance < 0.75 * n.distance:
                            similar_images.append([m])
                        else:
                            Non_similar_images.append([m])
                    number_key_points = min(len(kp_1), len(kp_2))
                    percentage_of_similarity = len(similar_images) / number_key_points * 100
                    Percentage_of_Non_similarity = len(Non_similar_images) / number_key_points * 100
                    if percentage_of_similarity > Percentage_of_Non_similarity:
                        print("Title: " + os.path.join(cropped_folder_path, master_images_path),
                              file=open("output.txt", "a"))
                        print("Original Title: " + os.path.join(cropped_folder_path, check_folder),
                              file=open(error_folder_path + "/" + check_folder + "/output.txt", "a"))
                        print("similarity:" + str(int(percentage_of_similarity)) + "%\n",
                              file=open(error_folder_path + "/" + check_folder + "/output.txt", "a"))
                        print("Non_similarity:" + str(int(Percentage_of_Non_similarity)) + "%\n",
                              file=open(error_folder_path + "/" + check_folder + "/output.txt", "a"))
                        print("Same classes occurred",
                              file=open(error_folder_path + "/" + check_folder + "/output.txt", "a"))
                        error_save_variable = error_folder_path + "/" + check_folder
                        if os.path.exists(os.path.join(error_save_variable, check_image_path)):
                            os.remove(os.path.join(error_save_variable, check_image_path))
                        else:
                            shutil.move((cropped_folder_path + "/" + check_folder + "/" + check_image_path),
                                        error_save_variable)

STEP 8: Run the code with the command below. It will check each class one by one.
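Assuming the script above is saved as class_validation.py (a hypothetical file name), it can be run from the demo project folder like this:

python class_validation.py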

Once it is completed, the error images will be moved to the error folder, under the particular class folder that was swapped, along with the output file.
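Based on the print statements in the script, each entry written to the per-class output.txt looks roughly like this (the class name and percentages below are only illustrative):

Original Title: <cropped_folder_path>/coat
similarity:62%

Non_similarity:38%

Same classes occurred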

Here, the helmet that was detected as a safety vest is moved to the error folder, so that we can validate and remove or correct the annotated image to fix these swapping issues.

In this blog, we have demonstrated how to match features and how to get good accuracy using the FLANN-based method. This can make a big impact in finding the misidentified data in each class, which minimizes manual work time. Similarly, we also have a Brute-Force matcher with an ORB/SIFT detector.

Let me explain what the Brute-Force matcher with an ORB/SIFT detector is all about in my next blog. Until then, keep expecting the best from us as always!

Happy Testing!
