Abstract:
This article addresses the computation of optical flow from a sequence of images. Two techniques are presented for this challenging computational problem, which plays a crucial role in many areas of computer vision, including object tracking, scene analysis, micro- and macro-motion detection, and facial expression recognition. The two techniques complement each other: the first, based on fast convolutions, is best suited to computing the flow field over all pixels of an image; the second, based on robust estimates of linear-regression parameters, is better suited to configurations of points. With the first technique, pre-processing to minimize contrast effects is recommended, whereas image quality has little impact on the second because of its robust estimation. By their nature, both methods are related to variational approaches to optical flow, yet they differ significantly from the methods described in the literature in both speed and accuracy. Neither method requires deep learning, so they can be applied where the large training datasets needed by deep-neural-network approaches are unavailable. The results obtained on grayscale images extend readily to color images and, most importantly, to the systems of secondary features that have recently been used in computer vision.
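As background for the abstract's mention of estimating optical flow via linear regression, the following is a minimal illustrative sketch, not the article's own algorithm. It assumes the classical brightness-constancy constraint `Ix*u + Iy*v + It = 0` and recovers a single global translation `(u, v)` by ordinary least squares over all pixels (a Lucas-Kanade-style formulation); the robust-estimation variant described in the article would replace the least-squares solve with a robust regression.

```python
# Illustrative sketch (assumed Lucas-Kanade-style formulation, not the
# article's exact method): estimate a single global translation (u, v)
# between two grayscale frames by regressing -It on the spatial gradients.
import numpy as np

def flow_least_squares(I1, I2):
    """Estimate one (u, v) translation from the optical-flow constraint
    Ix*u + Iy*v = -It, solved by least squares over all pixels."""
    Iy, Ix = np.gradient(I1.astype(float))    # spatial gradients (rows, cols)
    It = I2.astype(float) - I1.astype(float)  # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: a smooth pattern shifted one pixel to the right,
# so the true flow is u = 1, v = 0.
x, y = np.meshgrid(np.arange(64), np.arange(64))
I1 = np.sin(0.3 * x) + np.cos(0.2 * y)
I2 = np.sin(0.3 * (x - 1)) + np.cos(0.2 * y)
u, v = flow_least_squares(I1, I2)
```

On this smooth synthetic input the recovered `(u, v)` is close to the true shift `(1, 0)`; on real imagery a dense or point-wise version would solve the same regression per window, which is where robust estimators pay off.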