The desktop version of GoPro Studio also provides distortion correction. Note that once you capture an image in Linear mode, you cannot recover the original wide-angle version: the edges of the frame are cropped, and objects near the edges are stretched and look unnatural. The Linear FOV setting is available in Time Lapse Photo mode, but not in Time Lapse Video mode.
CAMERA LENS DISTORTION 1080P
It works only at 2.7K, 1080p, and lower resolutions; Linear FOV is not available for every combination of frame rate and video size. Once the lens distortion has been removed, the corrected images are saved directly to the memory card.
CAMERA LENS DISTORTION SOFTWARE
If you switch to this mode while capturing photos, the camera itself applies a software-level correction to the fisheye distortion.
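GoPro does not document the exact correction math it applies in-camera, but the idea can be sketched with a simple radial (barrel) distortion model. The NumPy snippet below is a minimal illustration with made-up coefficients `k1` and `k2` (not GoPro's real parameters): it inverts the forward model by fixed-point iteration, which is essentially what a software-level fisheye correction has to do for each pixel coordinate.

```python
import numpy as np

def undistort_points(pts, k1, k2, iters=10):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4)
    for z-normalized coordinates, by fixed-point iteration."""
    und = pts.copy()
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        und = pts / (1 + k1 * r2 + k2 * r2 ** 2)
    return und

# hypothetical barrel-distortion coefficients, not GoPro's real ones
pts = np.array([[0.3, 0.4]])          # a distorted, normalized point
corrected = undistort_points(pts, k1=-0.2, k2=0.05)
```

Re-applying the forward model to `corrected` reproduces the distorted input, which is the usual sanity check for such an inversion.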
The GoPro Hero5 Black ships with several new shooting modes, and Linear FOV mode is one of them. This article explains in detail how to remove fisheye distortion, whether you are shooting video or working with photos. In some situations the fisheye look is a real problem for images and videos: straight lines become bent, and objects near the center of the frame appear oversized.
CAMERA LENS DISTORTION CODE
One of the most popular features of GoPro devices is their wide-angle look, which produces images with a very large field of view. The question and answer below, from a developer discussion, show how the same kind of lens distortion can be simulated in a renderer.

Question: I'm trying to simulate a lens distortion effect for my SLAM project. A scanned color 3D point cloud is already given and loaded in OpenGL. What I'm trying to do is render the 2D scene at a given pose and do some visual odometry between the real image from a fisheye camera and the rendered image. As the camera has severe lens distortion, it should be taken into account in the rendering stage too.

The problem is that I have no idea where to put the lens distortion. In the computer-vision community, lens distortion is usually applied on the projected plane. I've found some open code that puts the distortion in the geometry shader, but I guess that distortion model is different from the lens-distortion model used in computer vision. Their code implements the distortion in both the fragment shader and the geometry shader; that work is quite similar to mine, but they didn't use a distortion model. Thus, I guess something like the following will work:

```
# vertex shader
p_f = FisheyeProjection(p') // custom fish eye projection
```

A fragment-shader version could also be applied in my situation.

Answer: Inspired by the VR community, I implemented the distortion via vertex displacement. For high resolutions this is computationally more efficient than a fragment-shader approach, but it requires a mesh with a good vertex density, so you might want to apply tessellation before distorting the image. Here is the code that implements the OpenCV rational distortion model (see the OpenCV calibration documentation for the formulas), abbreviated to the relevant parts:

```glsl
#version 330 core
layout (location = 2) in vec2 texture_coordinate_in;
// (the local_pos and normal_in inputs, the matrix and light uniforms,
// and the dist_coeffs uniform array are declared in the same way)

// dist_coeffs follows OpenCV's ordering: k1, k2, p1, p2, k3, k4, k5, k6
vec4 distort(vec4 view_pos)
{
    // the distortion is calculated in z-normalized coordinates
    vec2 p = view_pos.xy / view_pos.z;
    float r2 = dot(p, p);
    float r4 = r2 * r2;
    float r6 = r4 * r2;
    // distort the real-world vertices using the rational model
    float r_dist = (1 + dist_coeffs[0]*r2 + dist_coeffs[1]*r4 + dist_coeffs[4]*r6)
                 / (1 + dist_coeffs[5]*r2 + dist_coeffs[6]*r4 + dist_coeffs[7]*r6);
    // denormalize for projection (which is a linear operation)
    return vec4(p * r_dist * view_pos.z, view_pos.z, view_pos.w);
}

void main()
{
    vec4 world_pos = model_matrix * local_pos;
    vec4 dist_pos = distort(view_matrix * world_pos);
    gl_Position = projection_matrix * dist_pos;

    // lighting on world coordinates, not the distorted ones
    normal = mat3(transpose(inverse(model_matrix))) * normal_in;
    light_direction = normalize(light_position - world_pos.xyz);
    texture_coordinate = texture_coordinate_in;
}
```

It is important to note that the distortion is calculated in z-normalized coordinates but is denormalized into view coordinates in the last line of distort. This allows the use of a projection matrix like the one from this post.

Edit: For anyone interested in seeing the code in context, I have published it in a small library; the distortion shader is used in this example.
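The radial part of the OpenCV rational model discussed above can also be checked on the CPU. The snippet below is a hedged plain-Python reimplementation (not OpenCV's own code) with made-up example coefficients; `dist_coeffs` follows OpenCV's (k1, k2, p1, p2, k3, k4, k5, k6) ordering, and the tangential terms p1, p2 are ignored, as they are in the vertex-shader version.

```python
def rational_radial_factor(x, y, dist_coeffs):
    """Radial scale of OpenCV's rational distortion model for a
    z-normalized point (x, y); tangential terms are omitted."""
    k1, k2, p1, p2, k3, k4, k5, k6 = dist_coeffs
    r2 = x * x + y * y
    r4 = r2 * r2
    r6 = r4 * r2
    return (1 + k1 * r2 + k2 * r4 + k3 * r6) / (1 + k4 * r2 + k5 * r4 + k6 * r6)

def distort(x, y, dist_coeffs):
    """Scale a z-normalized point by the radial factor, mirroring the
    shader's distort() before denormalization."""
    f = rational_radial_factor(x, y, dist_coeffs)
    return x * f, y * f

# hypothetical coefficients for a mild barrel distortion
coeffs = [-0.3, 0.1, 0.0, 0.0, -0.01, 0.0, 0.0, 0.0]
xd, yd = distort(0.5, 0.0, coeffs)
```

With negative leading coefficients the factor is below 1 away from the optical axis, so points are pulled toward the center, which is the barrel (fisheye) behavior the Q&A is after.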