This method requires both the RGB texture and the depth texture of the cube map.
When rendering like the VR180 camera described above, parallax disappears as the view turns to the left or right.
Here we apply a correction so that parallax remains even when the viewer faces left or right:
for each face, the viewing direction is rotated around the midpoint between the two eyes, and the left and right eye cameras are kept separated by the IPD.
"L" is Left, "R" is Right.
The +Z/-Y/+Y faces of the cube map are rendered exactly as with the VR180 camera, and the centers of the left and right cameras are unchanged.
The -X/+X faces keep the same orientation as the VR180 camera, but the centers of the left and right cameras change.
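As a rough illustration of how the eye centers move per face, here is a minimal Python sketch. The `eye_centers` helper, the yaw convention, and the 64 mm default IPD are my own assumptions for illustration, not from the original:

```python
import math

def eye_centers(face_yaw_deg, ipd=0.064):
    """Return (left, right) eye centers in the XZ plane for a cube map face.

    The rig is rotated around the midpoint between the eyes by the face's
    yaw angle, so each eye stays offset by half the IPD perpendicular to
    that face's viewing direction. The yaw convention (0 deg = +Z face,
    90 deg = a side face) is an assumption for this sketch.
    """
    yaw = math.radians(face_yaw_deg)
    # Right vector of the rotated rig (perpendicular to the view direction).
    rx, rz = math.cos(yaw), -math.sin(yaw)
    half = ipd * 0.5
    left = (-rx * half, 0.0, -rz * half)
    right = (rx * half, 0.0, rz * half)
    return left, right

# For the forward (+Z) face the eyes are offset along X as usual.
l0, r0 = eye_centers(0.0)
# For a side face the offset rotates into the Z axis, which is why the
# -X/+X face cameras end up with different centers (cPosMX below).
l90, r90 = eye_centers(90.0)
```

This is only meant to show why the side-face camera centers differ from the forward-face ones.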
When the cube map faces are rendered, the field of view is set slightly larger than 90 degrees so that adjacent faces overlap.
Blending these overlapping regions makes the face boundaries less conspicuous.
Simply averaging them, however, produces the blurring shown below,
which becomes quite noticeable when viewed in a VR HMD.
To alleviate this, a weight is assigned along the boundary and used to displace the sampling position.
I will call this technique "stitch with border weight".
The boundary band is assigned over a fixed range of angles in the equirectangular projection on the screen.
In the image below, pixels are drawn red the closer they are to the -X/+X faces and blue the closer they are to the +Z/-Y/+Y faces.
The weight approaches 1.0 near the -X/+X faces and 0.0 near the +Z/-Y/+Y faces,
and everywhere outside the boundary band the weight is set to 0.0.
Outside the boundary (where the weight is 0.0), the cube map faces are used as they are;
the correction is applied only inside the boundary band.
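As a concrete, hypothetical example of such a weight, the sketch below ramps linearly across a band around the 45 degree face boundary. The band width and the linear ramp shape are assumptions; the article only states that a fixed angular range is used:

```python
def border_weight(lon_deg, band_deg=10.0):
    """Boundary weight from the longitude angle of the equirectangular image.

    Returns 0.0 on the +Z/-Y/+Y side and 1.0 on the -X/+X side, with a
    linear ramp of band_deg degrees on each side of the 45 degree face
    boundary. Both the band width and the ramp are illustrative choices.
    """
    a = abs(lon_deg)   # symmetric around the forward (+Z) direction
    boundary = 45.0    # the +Z face meets the side faces at 45 degrees
    t = (a - (boundary - band_deg)) / (2.0 * band_deg)
    return min(1.0, max(0.0, t))
```

Outside the band the weight saturates at 0.0 or 1.0, so the cube map faces are used unmodified there.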
The image below is a top-down view.
cPos is the center of the left eye camera.
The boundaries of the cube map faces are shown in green.
Let wPos be the world space position currently being scanned, on a hemisphere centered at cPos;
wPos lies sufficiently far from cPos.
Let pWz be the point where the ray from cPos toward wPos intersects the +Z plane.
When the -X face is rendered, the camera position and orientation are different;
call that camera position cPosMX.
Let pWx be the point where the ray from cPosMX toward wPos intersects the -X plane.
We now have two intersection points: pWz on the +Z plane seen from cPos, and pWx on the -X plane seen from cPosMX.
From these, we interpolate according to the boundary weight.
When the weight is close to 0.0, we adopt the pixel value at the intersection pWz on the +Z plane;
when it is close to 1.0, we adopt the pixel value at the intersection pWx on the -X plane.
Since pWz is a projection onto the +Z plane and pWx onto the -X plane, each is converted back to world coordinates using the corresponding depth buffer.
If both rays reach the background, a simple average is used:
float3 sPos1 = Convert wPos to texture position on +Z plane seen from cPos;
float3 sPos2 = Convert wPos to texture position on -X plane seen from cPosMX;
float3 col1 = tex2D(Texture of +Z plane, sPos1.xy).rgb;
float3 col2 = tex2D(Texture of -X plane, sPos2.xy).rgb;
float3 col = (col1 + col2) * 0.5;
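The intersections pWz and pWx described above amount to a ray versus axis-aligned plane test. The sketch below shows one way to compute them; the function name, the unit-distance face planes, and the sample coordinates are illustrative assumptions, not the article's code:

```python
def intersect_axis_plane(origin, target, axis, plane_d):
    """Intersect the ray from `origin` toward `target` with an axis-aligned
    plane, e.g. the +Z face plane at z = plane_d or the -X face plane at
    x = plane_d. `axis` is 0 for X and 2 for Z.
    """
    d = [t - o for o, t in zip(origin, target)]
    if abs(d[axis]) < 1e-12:
        return None  # ray is parallel to the plane
    t = (plane_d - origin[axis]) / d[axis]
    return [o + t * di for o, di in zip(origin, d)]

cPos = (0.0, 0.0, 0.0)        # left eye camera center (illustrative)
cPosMX = (0.0, 0.0, -0.032)   # -X face camera center (illustrative)
wPos = (-400.0, 0.0, 300.0)   # far point on the scanned hemisphere

# pWz: ray from cPos hits the +Z face plane (placed at z = 1 here).
pWz = intersect_axis_plane(cPos, wPos, axis=2, plane_d=1.0)
# pWx: ray from cPosMX hits the -X face plane (placed at x = -1 here).
pWx = intersect_axis_plane(cPosMX, wPos, axis=0, plane_d=-1.0)
```

Because cPos and cPosMX differ, pWz and pWx land at slightly different places even for the same wPos, which is exactly what the boundary correction has to reconcile.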
Background pixels far from the camera are barely affected by positional differences such as the parallax between the left and right cameras or the offset between cPos and cPosMX.
Using the depth buffer of the -X plane,
compute the world space position iwPos where the ray through pWx hits an object.
iwPos is a position in world coordinates.
Then compute where iwPos is projected on the +Z plane.
// Distance to the background far away from the camera.
float _FarDistance = 500.0;
// The difference distance of the center of the cameras.
float3 dd = cPos - cPosMX;
// Convert iwPos to the world coordinate position as seen from the cPos center camera.
float3 iwPos2 = normalize((iwPos - cPosMX) + dd) * _FarDistance + cPos;
In the camera that projects the +Z plane (centered at cPos), iwPos appears at the position iwPos2.
However, iwPos2 is only an estimate and not necessarily the correct position.
Let w be the boundary weight:
1.0 near the -X/+X faces and 0.0 near the +Z/-Y/+Y faces.
Using it, interpolate the world position for the camera at cPosMX (facing the -X plane) and compute the pixel value there:
// Interpolate the world position on the boundary.
float3 wPos2 = iwPos * w + iwPos2 * (1.0 - w);
float3 sPos = Convert wPos2 to texture position on -X plane seen from cPosMX;
float3 col2 = tex2D(Texture of -X plane, sPos.xy).rgb;
The color obtained here is col2,
which corresponds to the weight changing from 1.0 toward 0.0.
Since it becomes inaccurate as the weight approaches 0.0,
we next perform the same processing in the opposite direction on the +Z plane.
Using the depth buffer of the +Z plane,
compute the world space position iwzPos where the ray through pWz hits an object.
iwzPos is a position in world coordinates.
Then compute where iwzPos is projected on the -X plane.
// Distance to the background far away from the camera.
float _FarDistance = 500.0;
// The difference distance of the center of the cameras.
float3 dd = cPos - cPosMX;
// Convert iwzPos to the world coordinate position as seen from the cPosMX center camera.
float3 iwzPos2 = normalize((iwzPos - cPos) - dd) * _FarDistance + cPosMX;
In the camera that projects the -X plane (centered at cPosMX), iwzPos appears at the position iwzPos2.
However, iwzPos2 is only an estimate and not necessarily the correct position.
Using the weight, interpolate the world position for the camera at cPos (facing the +Z plane) and compute the pixel value there:
// Interpolate the world position on the boundary.
float3 wzPos2 = iwzPos * (1.0 - w) + iwzPos2 * w;
float3 sPos = Convert wzPos2 to texture position on +Z plane seen from cPos;
float3 col1 = tex2D(Texture of +Z plane, sPos.xy).rgb;
The color obtained here is col1,
which corresponds to the weight changing from 0.0 toward 1.0.
We now have the following two colors:
- col2, sampled via the -X plane, valid as the weight goes from 1.0 to 0.0
- col1, sampled via the +Z plane, valid as the weight goes from 0.0 to 1.0
These are blended according to the weight:
float3 col = col2 * w + col1 * (1.0 - w);
The resulting col is the interpolated color on the boundary.
Comparing the result before and after the correction, it changed as follows.
With this alone, a gap may still remain at positions close to the camera.
Let baseCol be the pixel value currently being scanned on the +Z/-Y/+Y faces;
when w is smaller than 0.5, blend it in as follows:
float3 sPos1 = Convert wPos to texture position on +Z plane seen from cPos;
float3 baseCol = tex2D(Texture of +Z plane, sPos1.xy).rgb;
if (w < 0.5) {
    float w2 = w / 0.5;
    col = baseCol * (1.0 - w2) + col * w2;
}
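Putting the blending steps together, a Python transcription of the shader pseudocode might look like this. The `stitch_color` helper and the list-based colors are assumptions for illustration, not the original shader:

```python
def stitch_color(col1, col2, base_col, w):
    """Combine the two reprojected colors and the base face color.

    col1:     color reprojected via the +Z plane (valid as w goes 0 -> 1)
    col2:     color reprojected via the -X plane (valid as w goes 1 -> 0)
    base_col: pixel currently being scanned on the +Z/-Y/+Y faces
    w:        boundary weight in [0, 1]
    """
    # Blend the two boundary colors by the weight.
    col = [c2 * w + c1 * (1.0 - w) for c1, c2 in zip(col1, col2)]
    # Near the +Z/-Y/+Y side, fade back toward the untouched base pixel
    # so no gap remains close to the camera.
    if w < 0.5:
        w2 = w / 0.5
        col = [b * (1.0 - w2) + c * w2 for b, c in zip(base_col, col)]
    return col
```

At w = 0.0 the base face pixel wins outright, and at w = 1.0 only the -X plane color is used, matching the behavior described above.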
As described above, by using rendering information (RGB + depth) captured with different parallax (different camera centers),
the stitch can be smoothed to some extent.
If the scene contains background areas where depth cannot be determined, artifacts may still appear.