/**
\page tutorial-megapose-model Tutorial: Exporting a 3D model to MegaPose after reconstruction with NeRF
\tableofcontents
This tutorial follows \ref tutorial-tracking-megapose and details how to export your 3D model for usage with MegaPose.
We will use Blender to inspect our model, correct it and export it to a format readable by MegaPose.
Blender is free and open source software, available for Linux, Windows and macOS. It can be downloaded <a href="https://www.blender.org/">here</a>.
The steps followed and the displayed images are those for Blender 3.4.1.
This tutorial does not focus on creating the 3D model by hand, but rather on the export specifics for the model to best work with MegaPose.
We also provide a step by step guide on how to create a 3D model from images of the object with Neural Radiance Fields (NeRFs).
MegaPose supports multiple file formats: .ply, .obj (with .mtl and textures), .gltf, .glb, .egg, .bam.
\section megapose_model_prep 3D Model preparation
Let us now see how to prepare the 3D model and view it in Blender.
If you do not have a 3D model of your object and wish to scan it, look at the next section.
If you already have your model, you can directly go to \ref megapose_model_import.
\subsection megapose_model_reconstruction_nerf 3D model reconstruction with Neural Radiance Fields
Neural Radiance Fields are a way to implicitly represent a 3D scene.
At their core, NeRFs learn a function (parametrized by a neural network) that maps a 3D position and viewing direction to an RGB color (radiance) and a density (space occupancy).
Framing rendering in this way makes it possible to take into account view-dependent effects such as specular highlights and reflections.
While NeRFs are able to generate high quality renderings, we will focus on their ability to capture a dense representation of the scene geometry, as well as the object's texture.
In this tutorial, we will use <a href="https://docs.nerf.studio/en/latest/index.html">nerfstudio</a> to train a NeRF and recover the geometry.
Note that while NeRF may not be entirely accurate (both in geometry and texture), we found in our tests that the generated 3D mesh often works well to get an estimate of an object's pose with MegaPose.
\subsubsection megapose_reconstruction_requirements Requirements and installation
Since NeRFs are based on deep learning, they require a GPU to be trainable.
To install nerfstudio you have two options:
- In a virtual environment, with conda
- In a container, with docker
If you choose to install with conda, you should create a new environment in order to avoid potential dependency clashes with your other work.
To set up nerfstudio, please follow the installation procedure described here: <https://docs.nerf.studio/en/latest/quickstart/installation.html>
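As an illustration only, a typical conda-based setup looks roughly like the commands below. The PyTorch installation line in particular depends on your GPU, driver and CUDA version, so treat this as a sketch and follow the linked installation page for the exact, up-to-date commands.
\code
$ conda create --name nerfstudio -y python=3.8
$ conda activate nerfstudio
(nerfstudio) $ python -m pip install --upgrade pip
(nerfstudio) $ pip install torch torchvision   # pick the CUDA-enabled build matching your driver
(nerfstudio) $ pip install nerfstudio
\endcode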
\subsubsection megapose_reconstruction_acquisition Data acquisition
To train a NeRF and obtain a 3D mesh, we require images of the object.
Nerfstudio supports multiple data acquisition pipelines. You can for instance acquire images on your phone with an application such as KIRI Engine, and feed its results to nerfstudio.
For more information, see the [available custom data formats](https://docs.nerf.studio/en/latest/quickstart/custom_dataset.html).
In this tutorial, we will focus on using a basic image sequence, a folder containing a set of `.png` images.
To capture those images, you can use one of the programs from the ViSP grabber tutorial matching your camera: \ref tutorial-grabber.
For example, to capture images with a Realsense:
\code
$ IMAGES_DIR=castle_data
$ cd $VISP_WS/visp-build/tutorial/grabber
$ ./tutorial-grabber-realsense --record 1 --seqname ${IMAGES_DIR}/I%05d.png --width 1920 --height 1080
\endcode
With `--record 1`, an image is saved only when you click: this lets you avoid saving blurry or irrelevant images.
It is recommended to have at least 50 images of your object, acquired from various viewpoints, for the NeRF training to succeed.
\warning When capturing images, do not move the object. Rather, move around the object to acquire the different viewpoints. Moreover, try to have uniform lighting and avoid shadows to get the best results.
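With the naming pattern used above, the acquisition folder simply ends up containing a flat sequence of images, for instance (the exact names and number of images depend on your capture session):
\code
$ ls ${IMAGES_DIR}
I00001.png  I00002.png  I00003.png  ...  I00063.png
\endcode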
Once you have acquired your images, you need to estimate the camera pose associated with each image. Nerfstudio provides a script to do this.
This data processing script is backed by either <a href="https://colmap.github.io/">colmap</a> or <a href="https://github.com/cvg/Hierarchical-Localization">hloc</a>.
As such, either one of those needs to be installed. For colmap installation instructions, see <a href="https://docs.nerf.studio/en/latest/quickstart/custom_dataset.html#installing-colmap">The nerfstudio documentation</a>.
Note that colmap can also provide dense reconstructions, but this also requires a GPU.
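If you go the colmap route and use the conda environment created earlier, one common way to install it is from conda-forge; this is only a suggestion, and the nerfstudio documentation linked above remains the reference:
\code
(nerfstudio) $ conda install -c conda-forge colmap
\endcode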
To process the data so that it can be given as input to the NeRF model, run:
\code
(nerfstudio) $ NERF_DATA_DIR=castle_data_processed
(nerfstudio) $ ns-process-data images --data $IMAGES_DIR --output-dir $NERF_DATA_DIR --feature-type superpoint
\endcode
Here, we use superpoint as the keypoint extraction method. This method requires hloc to be installed.
\subsubsection megapose_reconstruction_training Training a NeRF
Once the data has been processed, you can train a NeRF on it. Here, we train the nerfacto model and enable normal prediction, which will be used by the Poisson surface reconstruction during export:
\code
(nerfstudio) $ NERF_DIR=castle_nerf
(nerfstudio) $ ns-train nerfacto --pipeline.model.predict-normals True --data $NERF_DATA_DIR --output-dir $NERF_DIR
\endcode
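During training, nerfstudio prints a link to its web viewer, which lets you follow the reconstruction in a browser. Once training has finished, you can also reload the result to check its quality before exporting. The command below is a sketch: the path to `config.yml` (reused in the export step) depends on the date and time of your training run.
\code
(nerfstudio) $ ns-viewer --load-config ${NERF_DIR}/${NERF_DATA_DIR}/nerfacto/2023-06-13_171130/config.yml
\endcode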
\subsubsection megapose_reconstruction_conversion Converting NeRF representation to .obj
Once training is finished, you can extract a textured mesh from the NeRF with Poisson surface reconstruction:
\code
(nerfstudio) $ NERF_MODEL_DIR=castle_nerf
(nerfstudio) $ ns-export poisson --load-config ${NERF_DIR}/${NERF_DATA_DIR}/nerfacto/2023-06-13_171130/config.yml --output-dir $NERF_MODEL_DIR
\endcode
The full path to `config.yml` will change depending on the date and time at which you trained your model.
Other export methods are available, see <a href="https://docs.nerf.studio/en/latest/quickstart/export_geometry.html"> the documentation on ns-export</a>.
You can specify many parameters that will directly impact the quality of the model:
- Number of generated points
- Target number of faces for your mesh
- 3D bounding box for which to generate the mesh
- Texture resolution.
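As an illustration, these options map to command-line flags of `ns-export poisson`. The flag names below may differ between nerfstudio versions, so check `ns-export poisson --help` before reusing this sketch:
\code
(nerfstudio) $ ns-export poisson --load-config ${NERF_DIR}/${NERF_DATA_DIR}/nerfacto/2023-06-13_171130/config.yml \
               --output-dir $NERF_MODEL_DIR \
               --num-points 1000000 --target-num-faces 100000 --num-pixels-per-side 2048 \
               --bounding-box-min -0.4 -0.4 -0.4 --bounding-box-max 0.4 0.4 0.4
\endcode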
Excluding image acquisition and installation, this process takes around 40 minutes with a Quadro RTX 6000.
\subsection megapose_model_import Importing the model in Blender
Once you have your model, you should import it into Blender to verify/fix several things.
To import your model:
- Open Blender
- Create a new "General" scene, delete the default cube, camera and light
- In the top left of the Blender window, go to "File > Import" and select the format of your mesh (for example, .obj)
- Navigate to your model file and double click: it should now be visible in your viewport
\image html tutorial/tracking/megapose/blender_import.png
If you have a textured object, make sure that the texture was imported and is visible in Blender.
To do so, switch the viewport rendering to shaded mode.
\image html tutorial/tracking/megapose/viewport_mode.png
\section megapose_model_ready Exporting to MegaPose
\subsection megapose_model_cleanup Step 1: Cleaning up the model
If you generated your model with a NeRF, it may contain extraneous geometry, corresponding to the environment in which you performed your scan.
You should remove this geometry, leaving only the object of interest in the mesh.
Your imported mesh may look like this:
\image html tutorial/tracking/megapose/model_after_import.png
To start, you should realign your mesh with the world frame. This makes it easier to move around and manipulate the mesh.
To manipulate your mesh cleanly, you should be in **object mode**; you can check that this is the case in the top left of the viewport.
To rotate your mesh, press **R**. To restrict the rotation to a single axis, you can press **X**, **Y** or **Z** after pressing **R**.
To move your mesh, press **G**. The axis restriction principle is the same as for rotation.
For a more accurate placement and orientation, use the front, top and side orthographic views.
You can switch to these views with the NumPad numbers, or with the gimbal in the top right of the viewport.
To clean your mesh, select it (left click) and go into edit mode by pressing **Tab**. You should see the vertices that compose the mesh.
Next, press **W** to enable the "paint selection" tool. You can now select the different vertices by dragging your mouse across the mesh.
Select all the unwanted geometry, then press **X** and select the appropriate option (by default, use delete vertices).
To make the process easier and less error prone, delete the geometry in multiple steps. If, by mistake, you clear your selection, you can recover by using **CTRL+Z**.
If you go into wireframe view, the selection tool will pick up all vertices, whether they are occluded by other faces or not. This makes it faster to delete large portions of geometry.
\image html tutorial/tracking/megapose/deleting_wireframe.png
Once you have cleaned up your model, you should have something similar to this:
\image html tutorial/tracking/megapose/model_cleaned.png
\subsection megapose_model_scale Step 2: Ensuring correct scale
If your model was built from a NeRF or created with other software, you should ensure that the dimensions of the object are correct.
The MegaPose server expects a mesh with units in meters. Some formats and software use millimeters by default.
If you used a NeRF to generate your model and you only used RGB images as inputs, then the scale information will probably be wrong.
To check the scale of your object, you can view the item scale in the "Item panel" in the top right corner of the viewport. To open this panel, press **N**.
\image html tutorial/tracking/megapose/wrong_dimensions.png
At the bottom of this panel, you can see the dimensions of your model. In the example above, our model is not 77cm wide, so we need to correct it.
The easiest way to find the scale of your object is to take measurements. In Blender, you can do so by selecting the ruler tool:
\image html tutorial/tracking/megapose/ruler_tool.png
It is also advisable to use vertex snapping to make measuring easier. Snapping can be enabled at the top of the viewport:
\image html tutorial/tracking/megapose/snapping.png
Note that you can select on what to snap: vertices, edges or faces.
To measure, click on the starting point and drag until you reach the endpoint of your measurement.
Rotate around the model to ensure that the ruler is well placed.
On our model, this is what it looks like:
\image html tutorial/tracking/megapose/megapose-measuring.png
Next, compare the measurements with the real ones to compute a rescaling factor (divide the true measurement by the one obtained in Blender). For example, if the real object is 10 cm wide but measures 77 cm in Blender, the scale factor is 10 / 77 ≈ 0.13.
\warning Before starting the rescaling steps, you should apply the transformations (so that scaling is performed on the correct axes and does not distort the model). To do so, press **CTRL+A** and click on "All transforms".
To scale the model, first ensure that it is selected by left clicking on it.
Then, you can either:
- Press **S**, then type the scale factor (for example, "0.13") and left click to confirm.
- Adjust the dimensions or scale in the "Item" panel. The input fields support mathematical expressions, so you can multiply each dimension by the scale factor (e.g. append "*0.13" to the current value).
\subsection megapose_model_origin Step 3: Selecting the origin of your model
When exporting to a format such as `.obj` or `.ply`, the object origin defined in Blender (the orange dot visible when you select your model) is lost.
What matters is therefore the placement of your object relative to Blender's world frame: it should be centered on the world origin, at (0, 0, 0).
To set the origin of the object, move it around with **G**, or use the "Location" field in the "Item" panel.
\warning Using the same model but with different origins will impact the results of MegaPose. We observed better results when placing the geometric center at the world's origin.
Here is an example of a model that is not well placed: it is too far from the world origin. The results may be impacted and the returned poses will not be useful.
\image html tutorial/tracking/megapose/megapose-bad-origin-model.png
With the model placed correctly, here is what we obtain:
\image html tutorial/tracking/megapose/megapose-good-origin-model.png
\subsection megapose_model_texture Optional step: baking textures
\warning This step should only be performed if your object has no image texture, i.e. it only has a uniform material color (e.g. it is red or green).
If you have such a model and wish to bake its color onto the mesh itself, you can do so by following these steps.
First, switch the rendering engine to Cycles in the render properties (by default, on the right side of the Blender window):
\image html tutorial/tracking/megapose/megapose-cycles.png
Then, **select your object** and open the "Vertex attributes" panel to add a color attribute. After pressing the plus button, leave the default options untouched and click OK.
\image html tutorial/tracking/megapose/megapose-color-attr.png
Finally, to bake the material into this color attribute, follow these steps:
- Go to the rendering panel and into the "Bake" menu
- Set the bake type to diffuse
- Uncheck "Direct" and "Indirect"
- Set the output to "Active Color attribute"
\image html tutorial/tracking/megapose/megapose-bake-color-attr.png
\warning When exporting a baked model, remember to check the "Colors" export option. This is not needed yet, but it will be in the final export step of this tutorial.
\subsection megapose_final_model_export Final step: exporting to a MegaPose-readable format
We are now ready to export the model. Here, we will focus on exporting to the .obj format which is widely supported and works well with MegaPose.
Before exporting, save your Blender file and select your model.
Then, go to **File > Export > Wavefront (.obj)**.
A new dialog window should open. Select the folder where you wish to store the 3D model, and give it a name.
Before confirming the export, there are a few important options on the right side of the window:
- Limit to selected only: Check this if you have other objects in your scene
- Forward and up axes: modify the orientation of your object.
- Path mode: how to reference the texture files. We recommend "Copy" in order to keep your export independent from your .blend file: the texture will be copied to the same folder as your exported object.
  If you later move your exported object, you should also move the .mtl file and the texture along with it.
- Geometry > Export > Colors: Check this if you have baked the texture onto the object.
\image html tutorial/tracking/megapose/megapose-obj-export.png
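With the "Copy" path mode and a textured model, the export folder should end up containing the mesh, its material file and the copied texture image. The file names below are only an example:
\code
$ ls my_export_folder
castle.obj  castle.mtl  castle_texture.png
\endcode
If you later move the .obj, keep the .mtl and the texture image next to it: the .obj references the .mtl by name, and the .mtl references the texture path.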
\section megapose_model_common_errors Common issues
When using your model with MegaPose, you may encounter several issues:
- The server crashes with an assertion error about `meshes.sample_points` not having enough points:
  - To fix this error, the total number of vertices of all the objects watched by the MegaPose server should be more than 2000 (a quick way to check the vertex count of an exported .obj is shown after this list).
  - To artificially add vertices without modifying the overall shape, you can subdivide your mesh: go into edit mode (**Tab**), select all the geometry (**A**), then right click and select "Subdivide".
- The exported model has no texture:
  - If you have no image texture, follow the steps in \ref megapose_model_texture and ensure that you check "Colors" when exporting to .obj.
  - Note that exporting colors to .ply does not seem to work.
  - If you have an image texture:
    - Watch the output of the MegaPose server: a warning may be raised if the texture cannot be loaded.
    - If this is the case, open the `.mtl` file linked to your model and make sure that the path to the image texture is correct.
    - If the issue is still present after fixing the .mtl file, open the .obj file and check that the material library reference points to the correct .mtl file. It is a line that looks like
\code
mtllib path_to.mtl
\endcode
- The model looks different/darker when exported to `.gltf` or `.glb`:
  - MegaPose relies on Panda3D to perform 3D rendering. Using gltf (or its binary version, glb) files requires a different rendering pipeline, which is not the default one for MegaPose.
  - Prefer exporting to .obj.
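For the vertex count issue mentioned above, and assuming a plain text .obj export, a quick sanity check is to count the vertex lines of the exported file from a terminal:
\code
$ # Each line starting with "v " declares a vertex: this prints the vertex count of the mesh
$ grep -c "^v " castle.obj
\endcode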
\section megapose_model_testing Next steps
- To test whether your model is correct and can be used for pose estimation or tracking, you should follow \ref tutorial-tracking-megapose and run the sample program.
- Since MegaPose requires a way to detect the object, you can train a neural network to do so entirely from synthetic data by using the 3D model that you just obtained. The process is detailed in \ref tutorial-synthetic-blenderproc.
*/