obs/CObservation3DRangeScan.h
/* +---------------------------------------------------------------------------+
   |                 Mobile Robot Programming Toolkit (MRPT)                   |
   |                          http://www.mrpt.org/                             |
   |                                                                           |
   | Copyright (c) 2005-2015, Individual contributors, see AUTHORS file        |
   | See: http://www.mrpt.org/Authors - All rights reserved.                   |
   | Released under BSD License. See details in http://www.mrpt.org/License    |
   +---------------------------------------------------------------------------+ */
#ifndef CObservation3DRangeScan_H
#define CObservation3DRangeScan_H

#include <mrpt/utils/CSerializable.h>
#include <mrpt/utils/CImage.h>
#include <mrpt/obs/CObservation.h>
#include <mrpt/obs/CObservation2DRangeScan.h>
#include <mrpt/poses/CPose3D.h>
#include <mrpt/poses/CPose2D.h>
#include <mrpt/math/CPolygon.h>
#include <mrpt/math/CMatrix.h>
#include <mrpt/utils/TEnumType.h>
#include <mrpt/utils/adapters.h>

namespace mrpt
{
namespace obs
{
	DEFINE_SERIALIZABLE_PRE_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )

	namespace detail {
		// Implemented in CObservation3DRangeScan_project3D_impl.h
		template <class POINTMAP>
		void project3DPointsFromDepthImageInto(CObservation3DRangeScan & src_obs, POINTMAP & dest_pointcloud, const bool takeIntoAccountSensorPoseOnRobot, const mrpt::poses::CPose3D * robotPoseInTheWorld, const bool PROJ3D_USE_LUT);
	}
	/** Declares a class derived from "CObservation" that
	  * encapsulates a 3D range scan measurement (e.g. from a time-of-flight range camera).
	  * This kind of observation can carry one or more of these data fields:
	  *   - 3D point cloud (as float's).
	  *   - 2D range image (as a matrix): each entry in the matrix "rangeImage(ROW,COLUMN)" contains a distance or a depth (in meters), depending on \a range_is_depth.
	  *   - 2D intensity (grayscale or RGB) image (as a mrpt::utils::CImage): for SwissRanger cameras, a logarithmic A-law compression is used to convert the original 16-bit intensity to a more standard 8-bit graylevel.
	  *   - 2D confidence image (as a mrpt::utils::CImage): for each pixel, 0x00 and 0xFF mean the lowest and highest confidence levels, respectively.
	  *
	  * The coordinates of the 3D point cloud are in meters with respect to the origin of coordinates of the depth camera
	  * (in SwissRanger cameras, the front face of the camera: a small offset of ~1cm in front of the physical focal point),
	  * with the +X axis pointing forward, +Y pointing left and +Z pointing up. By convention, a 3D point with all its coordinates set to (0,0,0) is considered invalid.
	  * The field CObservation3DRangeScan::relativePoseIntensityWRTDepth describes the change of coordinates from
	  * the depth camera to the intensity (RGB or grayscale) camera. In a SwissRanger camera both cameras coincide,
	  * so this pose is just a rotation (0,0,0,-90deg,0,-90deg). But in
	  * a Microsoft Kinect there is also an offset, as shown in this figure:
	  *
	  * <div align=center>
	  * <img src="CObservation3DRangeScan_figRefSystem.png">
	  * </div>
	  *
	  * In any case, check the field \a relativePoseIntensityWRTDepth, or the method \a doDepthAndIntensityCamerasCoincide(),
	  * to determine whether both frames of reference coincide, since even for Kinect cameras both can coincide if the images
	  * have been rectified.
	  *
	  * The 2D images and matrices are stored as common images, with the usual up->down row order and left->right column order.
	  * Optionally, the intensity and confidence channels can be set to delayed-load images for off-rawlog storage, which saves
	  * memory by keeping loaded just the images currently needed. See the methods load() and unload().
	  * Due to the intensive storage requirements of this kind of observation, this is the only observation class in MRPT
	  * for which it is recommended to always call "load()" and "unload()" before and after using it, *ONLY* when
	  * the observation was read from a rawlog dataset, in order to make sure that all the externally stored data fields are
	  * loaded and ready in memory.
	  *
	  * Classes that grab observations of this type are:
	  *   - mrpt::hwdrivers::CSwissRanger3DCamera
	  *   - mrpt::hwdrivers::CKinect
	  *
	  * There are two sets of calibration parameters (see mrpt::vision::checkerBoardStereoCalibration() or the ready-to-use GUI program <a href="http://www.mrpt.org/Application:kinect-calibrate" >kinect-calibrate</a>):
	  *   - cameraParams: Projection parameters of the depth camera.
	  *   - cameraParamsIntensity: Projection parameters of the intensity (gray-level or RGB) camera.
	  *
	  * In some cameras, like the SwissRanger, both are the same. In a Kinect it is possible to rectify the range images such that both cameras
	  * seem to coincide, in which case both sets of camera parameters will be identical.
	  *
	  * Range data can be interpreted in two different ways depending on the 3D camera (this field is already set to the
	  * correct value when grabbing observations from an mrpt::hwdrivers sensor):
	  *   - range_is_depth=true  -> Kinect-like ranges: entries of \a rangeImage are distances along the +X axis.
	  *   - range_is_depth=false -> Ranges in \a rangeImage are actual distances in 3D.
	  *
	  * The "intensity" channel may come from different channels in sensors such as the Kinect. Look at the field \a intensityImageChannel to
	  * find out whether the image was grabbed from the visible (RGB) or IR channel.
	  *
	  * 3D point clouds can be generated at any moment after grabbing with CObservation3DRangeScan::project3DPointsFromDepthImage() and CObservation3DRangeScan::project3DPointsFromDepthImageInto(), provided the correct
	  * calibration parameters.
	  *
	  * \note Starting at serialization version 2 (MRPT 0.9.1+), the confidence channel is stored as an image instead of a matrix, to optimize memory and disk space.
	  * \note Starting at serialization version 3 (MRPT 0.9.1+), the 3D point cloud and the rangeImage can both be stored externally to save rawlog space.
	  * \note Starting at serialization version 5 (MRPT 0.9.5+), the new field \a range_is_depth.
	  * \note Starting at serialization version 6 (MRPT 0.9.5+), the new field \a intensityImageChannel.
	  *
	  * \sa mrpt::hwdrivers::CSwissRanger3DCamera, mrpt::hwdrivers::CKinect, CObservation
	  * \ingroup mrpt_obs_grp
	  */
	class OBS_IMPEXP CObservation3DRangeScan : public CObservation
	{
		// This must be added to any CSerializable derived class:
		DEFINE_SERIALIZABLE( CObservation3DRangeScan )

	protected:
		bool        m_points3D_external_stored;   //!< If set to true, m_points3D_external_file is valid.
		std::string m_points3D_external_file;     //!< 3D points are in CImage::IMAGES_PATH_BASE+<this_file_name>

		bool        m_rangeImage_external_stored; //!< If set to true, m_rangeImage_external_file is valid.
		std::string m_rangeImage_external_file;   //!< rangeImage is in CImage::IMAGES_PATH_BASE+<this_file_name>

	public:
		CObservation3DRangeScan( );           //!< Default constructor
		virtual ~CObservation3DRangeScan( );  //!< Destructor

		/** @name Delayed-load manual control methods.
		    @{ */
		/** Makes sure all images and other fields which may be externally stored are loaded in memory.
		  * Note that for all CImages, calling load() is not required since the images will be automatically loaded upon first access, so load() does not normally need to be called explicitly by the user.
		  * If all the data are already loaded, or this object has no externally stored data fields, calling this method has no effect.
		  * \sa unload
		  */
		virtual void load() const;
		/** Unload all images, in case they are delayed-load images stored in external files (otherwise, it has no effect).
		  * \sa load
		  */
		virtual void unload();
		/** @} */

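		/* Typical delayed-load pattern when iterating over a rawlog (illustrative
		 * sketch, not part of the original header; variable names are hypothetical):
		 *
		 * \code
		 *   CObservation3DRangeScanPtr o = ...; // e.g. obtained from a mrpt::obs::CRawlog
		 *   o->load();     // bring the externally-stored point cloud / rangeImage into memory
		 *   // ... use *o ...
		 *   o->unload();   // release the memory again
		 * \endcode
		 */
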
		/** Project the RGB+D images into a 3D point cloud (with color if the target map supports it), optionally at a given 3D pose.
		  * The 3D point coordinates are computed from the depth image (\a rangeImage) and the depth camera parameters (\a cameraParams).
		  * There exist two sets of formulas for projecting the i'th point, depending on the value of "range_is_depth".
		  * In all the formulas below, "rangeImage" is the matrix of ranges and the pixel coordinates are (r,c).
		  *
		  * 1) [range_is_depth=true] With "range equals depth" or "Kinect-like depth mode": the range values
		  *    are in fact distances along the "+X" axis, not real 3D ranges (this is the way the Kinect reports ranges):
		  *
		  * \code
		  *   x(i) = rangeImage(r,c)
		  *   y(i) = (r_cx - c) * x(i) / r_fx
		  *   z(i) = (r_cy - r) * x(i) / r_fy
		  * \endcode
		  *
		  * 2) [range_is_depth=false] With "normal ranges": range means distance in 3D. This must be set when
		  *    processing data from the SwissRanger 3D camera, among others.
		  *
		  * \code
		  *   Ky = (r_cx - c)/r_fx
		  *   Kz = (r_cy - r)/r_fy
		  *
		  *   x(i) = rangeImage(r,c) / sqrt( 1 + Ky^2 + Kz^2 )
		  *   y(i) = Ky * x(i)
		  *   z(i) = Kz * x(i)
		  * \endcode
		  *
		  * The color of each point is determined by projecting the 3D local point into the RGB image using \a cameraParamsIntensity.
		  *
		  * By default the local coordinates of the points are stored directly into the target map, but if so indicated in \a takeIntoAccountSensorPoseOnRobot
		  * the points are first transformed with \a sensorPose. Furthermore, if provided, those coordinates are then transformed with \a robotPoseInTheWorld.
		  *
		  * \param[in] PROJ3D_USE_LUT (Only when range_is_depth=true) Whether to use a look-up table (LUT) to speed up the conversion. It's thread-safe in all situations <b>except</b> when you call this method from different threads <b>and</b> with different camera parameter matrices. In all other cases, it's a good idea to leave it enabled.
		  * \tparam POINTMAP Supported maps are all those covered by mrpt::utils::PointCloudAdapter (mrpt::maps::CPointsMap and derived, mrpt::opengl::CPointCloudColoured, PCL point clouds, ...)
		  *
		  * \note In MRPT < 0.9.5, this method always assumed that ranges were in Kinect-like format.
		  */
		template <class POINTMAP>
		inline void project3DPointsFromDepthImageInto(
			POINTMAP                   & dest_pointcloud,
			const bool                   takeIntoAccountSensorPoseOnRobot,
			const mrpt::poses::CPose3D * robotPoseInTheWorld = NULL,
			const bool                   PROJ3D_USE_LUT = true)
		{
			detail::project3DPointsFromDepthImageInto<POINTMAP>(*this, dest_pointcloud, takeIntoAccountSensorPoseOnRobot, robotPoseInTheWorld, PROJ3D_USE_LUT);
		}
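
		/* Usage sketch (illustrative, not part of the original header): projecting the
		 * RGB+D data into a colored point map. Assumes mrpt/maps/CColouredPointsMap.h
		 * is included and `obs` is a CObservation3DRangeScan read from a rawlog; the
		 * color-scheme fields shown are typical MRPT 1.3 API, check your version:
		 *
		 * \code
		 *   mrpt::maps::CColouredPointsMap pntsMap;
		 *   pntsMap.colorScheme.scheme = mrpt::maps::CColouredPointsMap::cmFromIntensityImage;
		 *   obs.load();   // make sure externally-stored fields are in memory
		 *   obs.project3DPointsFromDepthImageInto(pntsMap, false); // 2nd arg: takeIntoAccountSensorPoseOnRobot
		 *   obs.unload();
		 * \endcode
		 */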

		/** This method is equivalent to \c project3DPointsFromDepthImageInto(), storing the projected 3D points (without color, in local coordinates) in this same class.
		  * For new code it is recommended to use \c project3DPointsFromDepthImageInto() instead, which is much more versatile.
		  */
		inline void project3DPointsFromDepthImage(const bool PROJ3D_USE_LUT=true) {
			this->project3DPointsFromDepthImageInto(*this, false, NULL, PROJ3D_USE_LUT);
		}

		/** Convert this 3D observation into an "equivalent 2D fake laser scan", with a configurable vertical FOV.
		  *
		  * The result is a 2D laser scan with more "rays" (N) than the 3D observation has columns (W); exactly: N = W * oversampling_ratio.
		  * This oversampling is required since laser scans sample the space at evenly-separated angles, while
		  * a range camera follows a tangent-like distribution. By oversampling we make sure we don't leave "gaps" unseen by the virtual "2D laser".
		  *
		  * All obstacles within a frustum are considered, and the minimum distance is kept in each direction.
		  * The horizontal FOV of the frustum is automatically computed from the intrinsic parameters, but the
		  * vertical FOV must be provided by the user; it can be set to be asymmetric, which may be useful
		  * depending on the zone of interest where to look for obstacles.
		  *
		  * All spatial transformations are rigorously taken into account in this class, using the depth camera
		  * intrinsic calibration parameters.
		  *
		  * The timestamp of the new object is copied from the 3D object.
		  * Obviously, a prerequisite for calling this method is that the 3D observation holds range data,
		  * i.e. hasRangeImage must be true. Neither RGB data nor the raw 3D point cloud are needed
		  * for this method to work.
		  *
		  * \param[out] out_scan2d The resulting 2D equivalent scan.
		  * \param[in] sensorLabel The sensor label that will have the newly created observation.
		  * \param[in] angle_sup (Default=5deg) The upper half-FOV angle (in radians)
		  * \param[in] angle_inf (Default=5deg) The lower half-FOV angle (in radians)
		  * \param[in] oversampling_ratio (Default=1.2=120%) How many more laser scan rays to create (read above).
		  *
		  * \sa The example in http://www.mrpt.org/Example_Kinect_To_2D_laser_scan
		  */
		void convertTo2DScan(
			mrpt::obs::CObservation2DRangeScan & out_scan2d,
			const std::string                  & sensorLabel,
			const double angle_sup = mrpt::utils::DEG2RAD(5),
			const double angle_inf = mrpt::utils::DEG2RAD(5),
			const double oversampling_ratio = 1.2 );
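
		/* Usage sketch (illustrative, not part of the original header). Assumes `obs3D`
		 * holds valid range data (hasRangeImage==true); the label and angles are arbitrary:
		 *
		 * \code
		 *   mrpt::obs::CObservation2DRangeScan scan2D;
		 *   obs3D.convertTo2DScan(
		 *       scan2D, "KINECT_TO_2D",
		 *       mrpt::utils::DEG2RAD(10),   // angle_sup: half-FOV above the horizontal plane
		 *       mrpt::utils::DEG2RAD(10));  // angle_inf: half-FOV below it
		 *   // scan2D can now be used like any 2D laser scan (e.g. inserted in a grid map).
		 * \endcode
		 */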


		bool hasPoints3D;               //!< true means the field points3D contains valid data.
		std::vector<float> points3D_x;  //!< If hasPoints3D=true, the X coordinates of the 3D point cloud detected by the camera. \sa resizePoints3DVectors
		std::vector<float> points3D_y;  //!< If hasPoints3D=true, the Y coordinates of the 3D point cloud detected by the camera. \sa resizePoints3DVectors
		std::vector<float> points3D_z;  //!< If hasPoints3D=true, the Z coordinates of the 3D point cloud detected by the camera. \sa resizePoints3DVectors

		/** Use this method instead of resizing all three \a points3D_x, \a points3D_y & \a points3D_z to allow the usage of the internal memory pool. */
		void resizePoints3DVectors(const size_t nPoints);

		// 3D points external storage functions ---------
		inline bool points3D_isExternallyStored() const { return m_points3D_external_stored; }
		inline std::string points3D_getExternalStorageFile() const { return m_points3D_external_file; }
		void points3D_getExternalStorageFileAbsolutePath(std::string &out_path) const;
		inline std::string points3D_getExternalStorageFileAbsolutePath() const {
			std::string tmp;
			points3D_getExternalStorageFileAbsolutePath(tmp);
			return tmp;
		}
		void points3D_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this; it's only used from internal MRPT programs.
		// ---------

		bool hasRangeImage;             //!< true means the field rangeImage contains valid data
		mrpt::math::CMatrix rangeImage; //!< If hasRangeImage=true, a matrix of floats with the range data as captured by the camera (in meters) \sa range_is_depth
		bool range_is_depth;            //!< true: Kinect-like ranges: entries of \a rangeImage are distances along the +X axis; false: ranges in \a rangeImage are actual distances in 3D.

		void rangeImage_setSize(const int HEIGHT, const int WIDTH); //!< Similar to calling "rangeImage.setSize(H,W)", but this method provides memory pooling to speed up the memory allocation.

		// Range matrix external storage functions ---------
		inline bool rangeImage_isExternallyStored() const { return m_rangeImage_external_stored; }
		inline std::string rangeImage_getExternalStorageFile() const { return m_rangeImage_external_file; }
		void rangeImage_getExternalStorageFileAbsolutePath(std::string &out_path) const;
		inline std::string rangeImage_getExternalStorageFileAbsolutePath() const {
			std::string tmp;
			rangeImage_getExternalStorageFileAbsolutePath(tmp);
			return tmp;
		}
		void rangeImage_convertToExternalStorage( const std::string &fileName, const std::string &use_this_base_dir ); //!< Users won't normally want to call this; it's only used from internal MRPT programs.
		/** Forces marking this observation as non-externally stored - it does nothing else apart from resetting the corresponding flag (users won't normally want to call this; it's only used from internal MRPT programs) */
		void rangeImage_forceResetExternalStorage() { m_rangeImage_external_stored=false; }
		// ---------

		/** Enum type for intensityImageChannel */
		enum TIntensityChannelID
		{
			CH_VISIBLE = 0, //!< Grayscale or RGB visible channel of the camera sensor.
			CH_IR      = 1  //!< Infrared (IR) channel
		};

		bool hasIntensityImage;                    //!< true means the field intensityImage contains valid data
		mrpt::utils::CImage intensityImage;        //!< If hasIntensityImage=true, a color or gray-level intensity image of the same size as "rangeImage"
		TIntensityChannelID intensityImageChannel; //!< The source of the intensityImage; typically the visible channel \sa TIntensityChannelID

		bool hasConfidenceImage;             //!< true means the field confidenceImage contains valid data
		mrpt::utils::CImage confidenceImage; //!< If hasConfidenceImage=true, an image with the "confidence" value [range 0-255] as estimated by the capture drivers.

		mrpt::utils::TCamera cameraParams;          //!< Projection parameters of the depth camera.
		mrpt::utils::TCamera cameraParamsIntensity; //!< Projection parameters of the intensity (graylevel or RGB) camera.

		/** Relative pose of the intensity camera wrt the depth camera (which is the coordinates origin for this observation).
		  * In a SwissRanger camera, this will be (0,0,0,-90deg,0,-90deg) since both cameras coincide.
		  * In a Kinect, this will include a small lateral displacement and a rotation, according to the drawing on the top of this page.
		  * \sa doDepthAndIntensityCamerasCoincide
		  */
		mrpt::poses::CPose3D relativePoseIntensityWRTDepth;

		/** Return true if \a relativePoseIntensityWRTDepth equals the pure rotation (0,0,0,-90deg,0,-90deg) (with a small comparison epsilon)
		  * \sa relativePoseIntensityWRTDepth
		  */
		bool doDepthAndIntensityCamerasCoincide() const;

		float maxRange;                  //!< The maximum range allowed by the device, in meters (e.g. 8.0m, 5.0m, ...)
		mrpt::poses::CPose3D sensorPose; //!< The 6D pose of the sensor on the robot.
		float stdError;                  //!< The "sigma" error of the device, in meters, used while inserting the scan in an occupancy grid.

		// See base class docs
		void getSensorPose( mrpt::poses::CPose3D &out_sensorPose ) const { out_sensorPose = sensorPose; }
		// See base class docs
		void setSensorPose( const mrpt::poses::CPose3D &newSensorPose ) { sensorPose = newSensorPose; }
		// See base class docs
		virtual void getDescriptionAsText(std::ostream &o) const;

		void swap(CObservation3DRangeScan &o); //!< Very efficient method to swap the contents of two observations.

		void getZoneAsObs( CObservation3DRangeScan &obs, const unsigned int &r1, const unsigned int &r2, const unsigned int &c1, const unsigned int &c2 );

		/** A Levenberg-Marquardt-based optimizer to recover the calibration parameters of a 3D camera given a range (depth) image and the corresponding 3D point cloud.
		  * \param camera_offset The offset (in meters) in the +X direction of the point cloud. It's 1cm for the SwissRanger SR4000.
		  * \return The final average reprojection error per pixel (typically <0.05 px)
		  */
		static double recoverCameraCalibrationParameters(
			const CObservation3DRangeScan & in_obs,
			mrpt::utils::TCamera          & out_camParams,
			const double camera_offset = 0.01 );

		/** Look-up table struct for project3DPointsFromDepthImageInto() */
		struct OBS_IMPEXP TCached3DProjTables
		{
			mrpt::math::CVectorFloat Kys, Kzs;
			mrpt::math::CMatrix      rangeMatrix;
			mrpt::utils::TCamera     prev_camParams;
		};
		static TCached3DProjTables m_3dproj_lut; //!< 3D point cloud projection look-up table \sa project3DPointsFromDepthImage

	}; // End of class def.
	DEFINE_SERIALIZABLE_POST_CUSTOM_BASE_LINKAGE( CObservation3DRangeScan, CObservation, OBS_IMPEXP )


	} // End of namespace

	namespace utils
	{
		// Specialization must occur in the same namespace
		MRPT_DECLARE_TTYPENAME_PTR_NAMESPACE(CObservation3DRangeScan, mrpt::obs)

		// Enum <-> string converter:
		template <>
		struct TEnumTypeFiller<obs::CObservation3DRangeScan::TIntensityChannelID>
		{
			typedef obs::CObservation3DRangeScan::TIntensityChannelID enum_t;
			static void fill(bimap<enum_t,std::string> &m_map)
			{
				m_map.insert(obs::CObservation3DRangeScan::CH_VISIBLE, "CH_VISIBLE");
				m_map.insert(obs::CObservation3DRangeScan::CH_IR,      "CH_IR");
			}
		};
	}

	namespace utils
	{
		/** Specialization mrpt::utils::PointCloudAdapter<CObservation3DRangeScan> \ingroup mrpt_adapters_grp */
		template <>
		class PointCloudAdapter< mrpt::obs::CObservation3DRangeScan > : public detail::PointCloudAdapterHelperNoRGB<mrpt::obs::CObservation3DRangeScan,float>
		{
		private:
			mrpt::obs::CObservation3DRangeScan & m_obj;
		public:
			typedef float coords_t;          //!< The type of each point's XYZ coordinates
			static const int HAS_RGB   = 0;  //!< Has any color RGB info?
			static const int HAS_RGBf  = 0;  //!< Has native RGB info (as floats)?
			static const int HAS_RGBu8 = 0;  //!< Has native RGB info (as uint8_t)?

			/** Constructor (accepts a const ref for convenience) */
			inline PointCloudAdapter(const mrpt::obs::CObservation3DRangeScan &obj) : m_obj(*const_cast<mrpt::obs::CObservation3DRangeScan*>(&obj)) { }
			/** Get number of points */
			inline size_t size() const { return m_obj.points3D_x.size(); }
			/** Set number of points (to uninitialized values) */
			inline void resize(const size_t N) {
				if (N) m_obj.hasPoints3D = true;
				m_obj.resizePoints3DVectors(N);
			}

			/** Get XYZ coordinates of the i'th point */
			template <typename T>
			inline void getPointXYZ(const size_t idx, T &x, T &y, T &z) const {
				x = m_obj.points3D_x[idx];
				y = m_obj.points3D_y[idx];
				z = m_obj.points3D_z[idx];
			}
			/** Set XYZ coordinates of the i'th point */
			inline void setPointXYZ(const size_t idx, const coords_t x, const coords_t y, const coords_t z) {
				m_obj.points3D_x[idx] = x;
				m_obj.points3D_y[idx] = y;
				m_obj.points3D_z[idx] = z;
			}
		}; // end of PointCloudAdapter<CObservation3DRangeScan>
	}
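
	/* Example (sketch, not part of the original header): thanks to this adapter,
	 * generic point-cloud code can access the observation's points3D_* vectors
	 * through the common PointCloudAdapter interface:
	 *
	 * \code
	 *   mrpt::utils::PointCloudAdapter<mrpt::obs::CObservation3DRangeScan> pca(obs);
	 *   for (size_t i = 0; i < pca.size(); i++) {
	 *       float x, y, z;
	 *       pca.getPointXYZ(i, x, y, z);
	 *       // ... process (x,y,z) ...
	 *   }
	 * \endcode
	 */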
} // End of namespace

#include "CObservation3DRangeScan_project3D_impl.h"

#endif
Page generated by Doxygen 1.8.9.1 for MRPT 1.3.0 SVN: at Sun Sep 13 03:55:12 UTC 2015