OpenCV ORB not finding matches once rotation/scale invariances are introduced

I am working on a project using the ORB feature detector in OpenCV 2.3.1. I am finding matches between 8 different images, 6 of which are very similar (20 cm difference in camera position along a linear slider, so there is no scale or rotational variance), and then 2 images taken from roughly a 45 degree angle on either side. My code finds plenty of accurate matches between the very similar images, but few to none for the images taken from the more different perspective. I have included what I think are the relevant parts of my code; please let me know if you need more information.

// set parameters
int numKeyPoints = 1500;
float distThreshold = 15.0;

//instantiate detector, extractor, matcher
cv::Ptr<cv::FeatureDetector> detector;
cv::Ptr<cv::DescriptorExtractor> extractor;
cv::Ptr<cv::DescriptorMatcher> matcher;
detector = new cv::OrbFeatureDetector(numKeyPoints);
extractor = new cv::OrbDescriptorExtractor;
matcher = new cv::BruteForceMatcher<cv::Hamming>;

//Load input images, detect keypoints
cv::Mat img1;
std::vector<cv::KeyPoint> img1_keypoints;
cv::Mat img1_descriptors;
cv::Mat img2;
std::vector<cv::KeyPoint> img2_keypoints;
cv::Mat img2_descriptors;
img1 = cv::imread(fList[0].string(), CV_LOAD_IMAGE_GRAYSCALE);
img2 = cv::imread(fList[1].string(), CV_LOAD_IMAGE_GRAYSCALE);
detector->detect(img1, img1_keypoints);
detector->detect(img2, img2_keypoints);
extractor->compute(img1, img1_keypoints, img1_descriptors);
extractor->compute(img2, img2_keypoints, img2_descriptors);

//Match keypoints using knnMatch to find the single best match for each keypoint
//Then cull results that fall below the given distance threshold
std::vector<std::vector<cv::DMatch> > matches;
matcher->knnMatch(img1_descriptors, img2_descriptors, matches, 1);
int matchCount = 0;
for (int n = 0; n < matches.size(); ++n) {
    if (matches[n].size() > 0) {
        if (matches[n][0].distance > distThreshold) {
            matches[n].erase(matches[n].begin());
        } else {
            ++matchCount;
        }
    }
}

I ended up getting enough useful matches by changing my process for filtering matches. My previous method was discarding a lot of good matches based solely on their distance value. The RobustMatcher class that I found in the OpenCV 2 Computer Vision Application Programming Cookbook ended up working great. Now that all of my matches are accurate, I have been able to get reasonably good results by bumping up the number of keypoints the ORB detector looks for. Using the RobustMatcher with SIFT or SURF still gives better results, but I am now getting usable data with ORB.

//RobustMatcher class taken from OpenCV2 Computer Vision Application Programming Cookbook Ch 9
class RobustMatcher {
  private:
    // pointer to the feature point detector object
    cv::Ptr<cv::FeatureDetector> detector;
    // pointer to the feature descriptor extractor object
    cv::Ptr<cv::DescriptorExtractor> extractor;
    // pointer to the matcher object
    cv::Ptr<cv::DescriptorMatcher> matcher;
    float ratio;        // max ratio between 1st and 2nd NN
    bool refineF;       // if true will refine the F matrix
    double distance;    // min distance to epipolar
    double confidence;  // confidence level (probability)

  public:
    RobustMatcher() : ratio(0.65f), refineF(true), confidence(0.99), distance(3.0) {
        // ORB is the default feature
        detector = new cv::OrbFeatureDetector();
        extractor = new cv::OrbDescriptorExtractor();
        matcher = new cv::BruteForceMatcher<cv::Hamming>;
    }

    // Set the feature detector
    void setFeatureDetector(cv::Ptr<cv::FeatureDetector>& detect) {
        detector = detect;
    }

    // Set the descriptor extractor
    void setDescriptorExtractor(cv::Ptr<cv::DescriptorExtractor>& desc) {
        extractor = desc;
    }

    // Set the matcher
    void setDescriptorMatcher(cv::Ptr<cv::DescriptorMatcher>& match) {
        matcher = match;
    }

    // Set confidence level
    void setConfidenceLevel(double conf) {
        confidence = conf;
    }

    // Set MinDistanceToEpipolar
    void setMinDistanceToEpipolar(double dist) {
        distance = dist;
    }

    // Set ratio
    void setRatio(float rat) {
        ratio = rat;
    }

    // Clear matches for which NN ratio is > than threshold
    // return the number of removed points
    // (corresponding entries being cleared,
    // ie size will be 0)
    int ratioTest(std::vector<std::vector<cv::DMatch> >& matches) {
        int removed = 0;
        // for all matches
        for (std::vector<std::vector<cv::DMatch> >::iterator matchIterator = matches.begin();
             matchIterator != matches.end(); ++matchIterator) {
            // if 2 NN has been identified
            if (matchIterator->size() > 1) {
                // check distance ratio
                if ((*matchIterator)[0].distance / (*matchIterator)[1].distance > ratio) {
                    matchIterator->clear(); // remove match
                    removed++;
                }
            } else { // does not have 2 neighbours
                matchIterator->clear(); // remove match
                removed++;
            }
        }
        return removed;
    }

    // Insert symmetrical matches in symMatches vector
    void symmetryTest(const std::vector<std::vector<cv::DMatch> >& matches1,
                      const std::vector<std::vector<cv::DMatch> >& matches2,
                      std::vector<cv::DMatch>& symMatches) {
        // for all matches image 1 -> image 2
        for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator1 = matches1.begin();
             matchIterator1 != matches1.end(); ++matchIterator1) {
            // ignore deleted matches
            if (matchIterator1->size() < 2)
                continue;
            // for all matches image 2 -> image 1
            for (std::vector<std::vector<cv::DMatch> >::const_iterator matchIterator2 = matches2.begin();
                 matchIterator2 != matches2.end(); ++matchIterator2) {
                // ignore deleted matches
                if (matchIterator2->size() < 2)
                    continue;
                // Match symmetry test
                if ((*matchIterator1)[0].queryIdx == (*matchIterator2)[0].trainIdx &&
                    (*matchIterator2)[0].queryIdx == (*matchIterator1)[0].trainIdx) {
                    // add symmetrical match
                    symMatches.push_back(cv::DMatch((*matchIterator1)[0].queryIdx,
                                                    (*matchIterator1)[0].trainIdx,
                                                    (*matchIterator1)[0].distance));
                    break; // next match in image 1 -> image 2
                }
            }
        }
    }

    // Identify good matches using RANSAC
    // Return fundemental matrix
    cv::Mat ransacTest(const std::vector<cv::DMatch>& matches,
                       const std::vector<cv::KeyPoint>& keypoints1,
                       const std::vector<cv::KeyPoint>& keypoints2,
                       std::vector<cv::DMatch>& outMatches) {
        // Convert keypoints into Point2f
        std::vector<cv::Point2f> points1, points2;
        cv::Mat fundemental;
        for (std::vector<cv::DMatch>::const_iterator it = matches.begin();
             it != matches.end(); ++it) {
            // Get the position of left keypoints
            float x = keypoints1[it->queryIdx].pt.x;
            float y = keypoints1[it->queryIdx].pt.y;
            points1.push_back(cv::Point2f(x, y));
            // Get the position of right keypoints
            x = keypoints2[it->trainIdx].pt.x;
            y = keypoints2[it->trainIdx].pt.y;
            points2.push_back(cv::Point2f(x, y));
        }
        // Compute F matrix using RANSAC
        std::vector<uchar> inliers(points1.size(), 0);
        if (points1.size() > 0 && points2.size() > 0) {
            cv::Mat fundemental = cv::findFundamentalMat(
                cv::Mat(points1), cv::Mat(points2), // matching points
                inliers,        // match status (inlier or outlier)
                CV_FM_RANSAC,   // RANSAC method
                distance,       // distance to epipolar line
                confidence);    // confidence probability
            // extract the surviving (inliers) matches
            std::vector<uchar>::const_iterator itIn = inliers.begin();
            std::vector<cv::DMatch>::const_iterator itM = matches.begin();
            // for all matches
            for (; itIn != inliers.end(); ++itIn, ++itM) {
                if (*itIn) { // it is a valid match
                    outMatches.push_back(*itM);
                }
            }
            if (refineF) {
                // The F matrix will be recomputed with
                // all accepted matches
                // Convert keypoints into Point2f
                // for final F computation
                points1.clear();
                points2.clear();
                for (std::vector<cv::DMatch>::const_iterator it = outMatches.begin();
                     it != outMatches.end(); ++it) {
                    // Get the position of left keypoints
                    float x = keypoints1[it->queryIdx].pt.x;
                    float y = keypoints1[it->queryIdx].pt.y;
                    points1.push_back(cv::Point2f(x, y));
                    // Get the position of right keypoints
                    x = keypoints2[it->trainIdx].pt.x;
                    y = keypoints2[it->trainIdx].pt.y;
                    points2.push_back(cv::Point2f(x, y));
                }
                // Compute 8-point F from all accepted matches
                if (points1.size() > 0 && points2.size() > 0) {
                    fundemental = cv::findFundamentalMat(
                        cv::Mat(points1), cv::Mat(points2), // matches
                        CV_FM_8POINT);                      // 8-point method
                }
            }
        }
        return fundemental;
    }

    // Match feature points using symmetry test and RANSAC
    // returns fundemental matrix
    cv::Mat match(cv::Mat& image1, cv::Mat& image2,      // input images
                  // output matches and keypoints
                  std::vector<cv::DMatch>& matches,
                  std::vector<cv::KeyPoint>& keypoints1,
                  std::vector<cv::KeyPoint>& keypoints2) {
        // 1a. Detection of the SURF features
        detector->detect(image1, keypoints1);
        detector->detect(image2, keypoints2);
        // 1b. Extraction of the SURF descriptors
        cv::Mat descriptors1, descriptors2;
        extractor->compute(image1, keypoints1, descriptors1);
        extractor->compute(image2, keypoints2, descriptors2);
        // 2. Match the two image descriptors
        // Construction of the matcher
        //cv::BruteForceMatcher<cv::L2<float> > matcher;
        // from image 1 to image 2
        // based on k nearest neighbours (with k=2)
        std::vector<std::vector<cv::DMatch> > matches1;
        matcher->knnMatch(descriptors1, descriptors2,
                          matches1,   // vector of matches (up to 2 per entry)
                          2);         // return 2 nearest neighbours
        // from image 2 to image 1
        // based on k nearest neighbours (with k=2)
        std::vector<std::vector<cv::DMatch> > matches2;
        matcher->knnMatch(descriptors2, descriptors1,
                          matches2,   // vector of matches (up to 2 per entry)
                          2);         // return 2 nearest neighbours
        // 3. Remove matches for which NN ratio is
        // > than threshold
        // clean image 1 -> image 2 matches
        int removed = ratioTest(matches1);
        // clean image 2 -> image 1 matches
        removed = ratioTest(matches2);
        // 4. Remove non-symmetrical matches
        std::vector<cv::DMatch> symMatches;
        symmetryTest(matches1, matches2, symMatches);
        // 5. Validate matches using RANSAC
        cv::Mat fundemental = ransacTest(symMatches, keypoints1, keypoints2, matches);
        // return the found fundemental matrix
        return fundemental;
    }
};

// set parameters
int numKeyPoints = 1500;

//Instantiate robust matcher
RobustMatcher rmatcher;

//instantiate detector, extractor, matcher
cv::Ptr<cv::FeatureDetector> detector;
cv::Ptr<cv::DescriptorExtractor> extractor;
cv::Ptr<cv::DescriptorMatcher> matcher;
detector = new cv::OrbFeatureDetector(numKeyPoints);
extractor = new cv::OrbDescriptorExtractor;
matcher = new cv::BruteForceMatcher<cv::Hamming>;
rmatcher.setFeatureDetector(detector);
rmatcher.setDescriptorExtractor(extractor);
rmatcher.setDescriptorMatcher(matcher);

//Load input images, detect keypoints
cv::Mat img1;
std::vector<cv::KeyPoint> img1_keypoints;
cv::Mat img1_descriptors;
cv::Mat img2;
std::vector<cv::KeyPoint> img2_keypoints;
cv::Mat img2_descriptors;
std::vector<cv::DMatch> matches;
img1 = cv::imread(fList[0].string(), CV_LOAD_IMAGE_GRAYSCALE);
img2 = cv::imread(fList[1].string(), CV_LOAD_IMAGE_GRAYSCALE);
rmatcher.match(img1, img2, matches, img1_keypoints, img2_keypoints);

I had a similar problem with opencv python and came here via google.

To solve my problem I wrote Python code for match-filtering based on @KLowes' solution. I will share it here in case someone else has the same problem:

 """ Clear matches for which NN ratio is > than threshold """ def filter_distance(matches): dist = [m.distance for m in matches] thres_dist = (sum(dist) / len(dist)) * ratio sel_matches = [m for m in matches if m.distance < thres_dist] #print '#selected matches:%d (out of %d)' % (len(sel_matches), len(matches)) return sel_matches """ keep only symmetric matches """ def filter_asymmetric(matches, matches2, k_scene, k_ftr): sel_matches = [] for match1 in matches: for match2 in matches2: if match1.queryIdx < len(k_ftr) and match2.queryIdx < len(k_scene) and \ match2.trainIdx < len(k_ftr) and match1.trainIdx < len(k_scene) and \ k_ftr[match1.queryIdx] == k_ftr[match2.trainIdx] and \ k_scene[match1.trainIdx] == k_scene[match2.queryIdx]: sel_matches.append(match1) break return sel_matches def filter_ransac(matches, kp_scene, kp_ftr, countIterations=2): if countIterations < 1 or len(kp_scene) < minimalCountForHomography: return matches p_scene = [] p_ftr = [] for m in matches: p_scene.append(kp_scene[m.queryIdx].pt) p_ftr.append(kp_ftr[m.trainIdx].pt) if len(p_scene) < minimalCountForHomography: return None F, mask = cv2.findFundamentalMat(np.float32(p_ftr), np.float32(p_scene), cv2.FM_RANSAC) sel_matches = [] for m, status in zip(matches, mask): if status: sel_matches.append(m) #print '#ransac selected matches:%d (out of %d)' % (len(sel_matches), len(matches)) return filter_ransac(sel_matches, kp_scene, kp_ftr, countIterations-1) def filter_matches(matches, matches2, k_scene, k_ftr): matches = filter_distance(matches) matches2 = filter_distance(matches2) matchesSym = filter_asymmetric(matches, matches2, k_scene, k_ftr) if len(k_scene) >= minimalCountForHomography: return filter_ransac(matchesSym, k_scene, k_ftr) 

To filter the matches, filter_matches(matches, matches2, k_scene, k_ftr) has to be called, where matches and matches2 are the match sets obtained from the ORB matcher and k_scene, k_ftr are the corresponding keypoints.
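In case it saves someone time, here is a minimal sketch of how that call could be wired up. It is my own addition, not part of the answer above: the file names, the values of the ratio and minimalCountForHomography globals the filters rely on, and the query/train ordering of the two match sets are all assumptions, and that ordering has to stay consistent with how filter_asymmetric and filter_ransac index k_scene and k_ftr.

# Hypothetical wiring for filter_matches(); names and values below are assumptions.
import cv2
import numpy as np  # used by filter_ransac

ratio = 0.65                   # global used by filter_distance (fraction of the mean distance)
minimalCountForHomography = 8  # global used by filter_ransac / filter_matches

img_ftr = cv2.imread('feature.png', 0)    # hypothetical file names, loaded as grayscale
img_scene = cv2.imread('scene.png', 0)

orb = cv2.ORB_create(1500)                # cv2.ORB(1500) on the older 2.4.x bindings
k_ftr, des_ftr = orb.detectAndCompute(img_ftr, None)
k_scene, des_scene = orb.detectAndCompute(img_scene, None)

# Match in both directions so the symmetry test has two sets to compare.
bf = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = bf.match(des_scene, des_ftr)    # scene -> feature
matches2 = bf.match(des_ftr, des_scene)   # feature -> scene

good = filter_matches(matches, matches2, k_scene, k_ftr)
print('kept %d matches' % (len(good) if good else 0))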

I do not think there is anything very wrong with your code. From my experience, OpenCV's ORB is sensitive to scale variations.

You can probably confirm this with a small test: create some images with only rotation and some with only scale variations, and see how they match. The rotated ones will probably match fine, but the scaled ones will not (I think decreasing scale is the worst case).
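A rough sketch of such a test could look like the following. This is my own illustration, not code from the thread; it assumes OpenCV's Python bindings, and the helper name count_orb_matches as well as the chosen angles and scales are made up. It generates rotation-only and scale-only copies of a single image and counts how many ORB matches survive a nearest-neighbour ratio test.

# Hypothetical rotation/scale sensitivity test; names and values are assumptions.
import cv2

def count_orb_matches(img1, img2, n_features=1500, nn_ratio=0.8):
    """Count ORB matches that survive a nearest-neighbour ratio test."""
    orb = cv2.ORB_create(n_features)      # cv2.ORB(n_features) on the older 2.4.x bindings
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return 0
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = bf.knnMatch(d1, d2, k=2)
    return sum(1 for p in pairs
               if len(p) == 2 and p[0].distance < nn_ratio * p[1].distance)

img = cv2.imread('test.png', 0)           # hypothetical test image, grayscale
h, w = img.shape

# Rotation-only copies
for angle in (15, 30, 45):
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rotated = cv2.warpAffine(img, M, (w, h))
    print('rot %d deg: %d matches' % (angle, count_orb_matches(img, rotated)))

# Scale-only copies (downscaling tends to be the harder case)
for scale in (0.8, 0.6, 0.4):
    scaled = cv2.resize(img, None, fx=scale, fy=scale)
    print('scale %.1f: %d matches' % (scale, count_orb_matches(img, scaled)))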

I also recommend trying the OpenCV version from trunk (see the OpenCV site for compile instructions): ORB has been updated since 2.3.1 and performs a bit better, but it still has those scale problems.