We have been able to accomplish this by building a compact representation of the building facades and road geometry for all the Street View panoramas, using laser point clouds and differences between consecutive pictures.



As in the WWW paper, I tried a second round of RANSAC with a tighter threshold. The inliers and difference (between reference image and warped image) are shown below.
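The two-round idea can be sketched as: run RANSAC once with a loose inlier threshold, then rerun it with a tighter threshold on only the first round's inliers. Below is a minimal NumPy illustration on 2-D line fitting; the function and variable names are illustrative, not the original MATLAB code, and the real pipeline would fit a homography rather than a line.

```python
import numpy as np

def ransac_line(pts, n_iters, thresh, rng):
    """Fit y = a*x + b with RANSAC; return (a, b) and the inlier mask."""
    best_mask, best_model = None, None
    for _ in range(n_iters):
        # Sample a minimal set (two points) and fit a candidate line.
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Score the candidate by counting points within the threshold.
        resid = np.abs(pts[:, 1] - (a * pts[:, 0] + b))
        mask = resid < thresh
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (a, b)
    return best_model, best_mask

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, size=x.size)
y[::10] += rng.uniform(5, 10, size=y[::10].size)    # inject gross outliers

pts = np.column_stack([x, y])
model1, in1 = ransac_line(pts, 200, 1.0, rng)       # loose first round
model2, in2 = ransac_line(pts[in1], 200, 0.3, rng)  # tighter second round
print(model2)
```

The second round only sees points that already survived the loose threshold, so the tighter threshold refines the model instead of rejecting it outright.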
??? Attempted to access IndexSam(4997,:); index out of bounds because size(IndexSam)=[4996,4].
Error in ==> get_warps at 56
choice=IndexSam(count,:);
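The out-of-bounds access above (row 4997 of a 4996-row table) suggests the loop counter runs one past the number of precomputed sample rows. A minimal sketch of the guard, in Python with hypothetical names mirroring `IndexSam` and `count`:

```python
import numpy as np

# Hypothetical stand-in for the MATLAB IndexSam table: each row holds the
# point indices for one RANSAC sample (4996 rows of 4 indices here).
index_sam = np.arange(4996 * 4).reshape(4996, 4) % 100

n_iters = 5000  # requested iterations may exceed the precomputed samples

# Clamp the loop to the number of rows actually available, so the
# equivalent of IndexSam(count,:) can never run past the end.
for count in range(min(n_iters, index_sam.shape[0])):
    choice = index_sam[count, :]  # safe: count < index_sam.shape[0]
```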
------------------------------------------------------------------------
Segmentation violation detected at Tue Jan 26 22:42:38 2010
------------------------------------------------------------------------
...
Stack Trace:
[0] creategraph.mexglx:0x0443688e(0x0bcb5660 "test1.graph", 0xad7bc010, 0xafcc1010 ", 0xa1cfa010)
...
Despite the error, some figures were produced, though it appears the code only warped the images. See the output images below.
./smooth: /usr/local/matlabr2008b/sys/os/glnx86/libstdc++.so.6: version `GLIBCXX_3.4.9' not found (required by ./smooth)


The correspondences here are good, so SIFT matching clearly works for this type of image. For images containing more of the original scene, I tried varying the threshold parameter of the matching function; a higher threshold seems to work better (see results below).
This did improve the quality of the correspondences. However, there are hardly any correspondences on the foreground objects (the person and the bike), which might make the motion segmentation method fail.
This is what you would expect to see, so it appears the poor results on the Google Street View data are not caused by a bug in my code.
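The threshold experiment above is the usual ratio test for SIFT descriptors: accept a match only when the nearest neighbour is sufficiently closer than the second nearest, so raising the ratio threshold admits more (but less distinctive) matches. A sketch in NumPy, with random vectors standing in for real SIFT descriptors; all names are illustrative:

```python
import numpy as np

def ratio_match(desc1, desc2, ratio):
    """Match rows of desc1 to rows of desc2 using the ratio test."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        nn = np.argsort(dists)[:2]               # two nearest neighbours
        if dists[nn[0]] < ratio * dists[nn[1]]:  # distinctive enough?
            matches.append((i, nn[0]))
    return matches

rng = np.random.default_rng(1)
base = rng.normal(size=(30, 128))                # 128-D like SIFT
# Target set: all 30 descriptors, plus near-duplicates of the first 15,
# so those queries have an ambiguous second-nearest neighbour.
desc2 = np.vstack([base, base[:15] + rng.normal(scale=0.05, size=(15, 128))])
desc1 = base + rng.normal(scale=0.05, size=(30, 128))  # noisy queries

loose = ratio_match(desc1, desc2, 0.9)   # higher threshold: more matches
tight = ratio_match(desc1, desc2, 0.5)   # lower threshold: fewer matches
print(len(loose), len(tight))
```

The tighter ratio drops the queries whose second-nearest neighbour is a near-duplicate, which is exactly the trade-off in play when tuning the matching threshold: fewer but more reliable correspondences, at the risk of losing sparse foreground regions entirely.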