cpfind [options] -o output.pto project.pto
cpfind [options] -k i0 -k i1 [...] project.pto
cpfind [options] --kall project.pto
The first step is the feature description: the images of the project file are loaded and so-called keypoints are searched for. These describe distinctive features in the image. cpfind uses a gradient-based descriptor for the feature description of the keypoints.
In a second step, the feature matching, all keypoints of two images are matched against each other to find features which are visible in both images. If this matching is successful, a pair of keypoints in the two images becomes one control point.
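Both steps run in a single invocation; a typical call, which detects keypoints, matches them and writes the resulting control points into a new project file, is:

cpfind -o output.pto input.pto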
cpfind --celeste -o output.pto input.pto
Using cpfind with integrated celeste should be superior to running cpfind and celeste_standalone sequentially. When running cpfind with celeste, areas of clouds, which often contain keypoints with a high quality measure, are disregarded and areas without clouds are used instead. When running cpfind without celeste, keypoints are also found on clouds; when celeste_standalone is run afterwards, these control points are removed. In the worst case all control points of a certain image pair are removed.
So running cpfind with celeste leads to a better ``control point quality'' for outdoor panoramas (e.g. panoramas with clouds). Running cpfind with celeste takes longer than cpfind alone, so for indoor panoramas this option does not need to be specified (it would only add computation time).
The celeste step can be fine-tuned with the parameters --celesteRadius and --celesteThreshold.
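For example, a stricter cloud mask can be obtained by lowering the threshold. The values below are illustrative, not recommendations (to our knowledge the defaults are a radius of 20 and a threshold of 0.5):

cpfind --celeste --celesteRadius 20 --celesteThreshold 0.4 -o output.pto input.pto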
All pairs

This is the default matching strategy: all image pairs are matched against each other. E.g. if your project contains 5 images, cpfind matches the image pairs 0-1, 0-2, 0-3, 0-4, 1-2, 1-3, 1-4, 2-3, 2-4 and 3-4.
This strategy works for all shooting strategies (single-row, multi-row, unordered). It finds (nearly) all connected image pairs, but it is computationally expensive for projects with many images, because it tests many image pairs which are not actually connected.
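The cost grows quadratically: for n images, cpfind tests n*(n-1)/2 pairs. The 5-image example above gives 5*4/2 = 10 pairs, while a 40-image project already requires 40*39/2 = 780 matching runs.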
Linear match
This matching strategy works best for single-row panoramas:
cpfind --linearmatch -o output.pto input.pto
This will only detect matches between adjacent images, e.g. for the 5-image example it will match the image pairs 0-1, 1-2, 2-3 and 3-4. The matching distance can be increased with the switch --linearmatchlen. E.g. with --linearmatchlen 2, cpfind will match an image with the next image and the image after next; in our example it would match 0-1, 0-2, 1-2, 1-3, 2-3, 2-4 and 3-4.
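A call that matches every image against its two successors would therefore be:

cpfind --linearmatch --linearmatchlen 2 -o output.pto input.pto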
Multirow matching
This is an optimized matching strategy for single- and multi-row panoramas:
cpfind --multirow -o output.pto input.pto
The algorithm is the same as described under multi-row panorama. By integrating this algorithm into cpfind it became faster, because it uses several cores of modern CPUs and does not cache the keypoints to disc (which is time-consuming). If you want to use this multi-row matching inside hugin, set the control point detector type to All images at once.
Caching keypoints to disc
The calculation of keypoints takes some time, so cpfind offers the possibility to save the keypoints to a file and reuse them later. With --kall the keypoints for all images in the project are saved to disc. If you only want the keypoints of particular images, use the parameter -k with the image number:
cpfind --kall input.pto
cpfind -k 0 -k 1 input.pto
The keypoint files are saved by default into the same directory as the images, with the extension .key. In this case no matching of images occurs, and therefore no output project file needs to be specified. If cpfind finds keyfiles for an image in the project, it uses them automatically and does not run the feature descriptor again on this image. If you want to save them to another directory, use the --keypath switch.
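For example, to write the keyfiles for all images into a separate directory (the path here is only an example):

cpfind --kall --keypath /tmp/keys input.pto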
This procedure can also be automated with the switch --cache:
cpfind --cache -o output.pto input.pto
In this case cpfind tries to load existing keypoint files. For images which don't have a keypoint file yet, the keypoints are detected and saved to a file. Then it matches all loaded and newly found keypoints and writes the output project.
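The caching switches can be combined; e.g. to keep the keyfiles outside the image directories (the path is again only an example):

cpfind --cache --keypath /tmp/keys -o output.pto input.pto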
If you don't need the keyfiles any longer, they can be deleted automatically with
cpfind --clean input.pto
The feature description step can be fine-tuned by the following parameters:

--sieve1width <int>  Sieve 1: number of buckets on width (default: 10)
--sieve1height <int>  Sieve 1: number of buckets on height (default: 10)
--sieve1size <int>  Sieve 1: maximum points per bucket (default: 100)
--kdtreesteps <int>  KDTree: search steps (default: 200)
--kdtreeseconddist <double>  KDTree: distance of 2nd match (default: 0.25)
Cpfind stores at most sieve1width * sieve1height * sieve1size keypoints per image. If you have only a small overlap, e.g. for a 360 degree panorama shot with fisheye images, you can get better results if you increase sieve1size. You can also try to increase sieve1width and/or sieve1height.
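For example, a call that keeps more keypoints per bucket for such a low-overlap project (the value is illustrative, not a recommendation):

cpfind --sieve1size 200 -o output.pto input.pto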
The feature matching step can be fine-tuned with the --ransacmode switch, which selects the geometric model used to filter the matches in the RANSAC step (an example call follows the list):

hom: Assume a homography. Only applicable for non-wide-angle views. Uses the original panomatic code. It is also more flexible than required and can generate false matches, particularly if most of the matches are located on a single line.
rpy: Align images using roll, pitch and yaw. This requires a good estimate of the horizontal field of view (and of the distortion, for heavily distorted images). It is the preferred mode if a calibrated lens is used or the HFOV could be read successfully from the EXIF data.
rpyv: Align the pair by optimizing roll, pitch, yaw and field of view. Should work without prior knowledge of the field of view, but might fail more often; due to the error function used in the panotools optimizer, it tends to shrink the field of view to 0.
rpyvb: Align the pair by optimizing roll, pitch, yaw, field of view and the ``b'' distortion parameter. Probably very fragile; implemented for testing only.
auto: Use homography for images with hfov < 65 degrees and rpy otherwise.
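To select a mode explicitly, the mode name is passed to --ransacmode, e.g. to force roll/pitch/yaw alignment (assuming this switch is available in your cpfind version):

cpfind --ransacmode rpy -o output.pto input.pto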
Cpfind generates between minmatches and sieve2width * sieve2height * sieve2size control points per image pair (the default setting yields between 4 and 50 (= 5 * 5 * 2) control points per image pair). If fewer than minmatches control points are found for a given image pair, these control points are disregarded and this image pair is considered as not connected. For narrow overlaps you can try to decrease minmatches, but this increases the risk of getting wrong control points.
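For a project with narrow overlaps one could, for example, lower the minimum and allow more matches per bucket; the values here are illustrative, not recommendations:

cpfind --minmatches 3 --sieve2size 3 -o output.pto input.pto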