mirror of https://github.com/CGAL/cgal

Merge pull request #3535 from sgiraudot/Classification-Neural_network_and_parallelized_random_forest-GF

[Small Feature] Classification: Neural Network and parallelized Random Forest

Commit 1a45e1c753
@@ -87,7 +87,10 @@ The following code snippet shows how to instantiate such data structures from an

- [Distance_to_plane](@ref CGAL::Classification::Feature::Distance_to_plane) measures how far away a point is from a locally estimated plane;
- [Eigenvalue](@ref CGAL::Classification::Feature::Eigenvalue) measures one of the three local eigenvalues;
- [Elevation](@ref CGAL::Classification::Feature::Elevation) computes the local distance to an estimation of the ground;
+- [Height_above](@ref CGAL::Classification::Feature::Height_above) computes the distance between the local highest point and the point;
+- [Height_below](@ref CGAL::Classification::Feature::Height_below) computes the distance between the point and the local lowest point;
- [Vertical_dispersion](@ref CGAL::Classification::Feature::Vertical_dispersion) computes how noisy the point set is on a local Z-cylinder;
+- [Vertical_range](@ref CGAL::Classification::Feature::Vertical_range) computes the distance between the local highest and lowest points;
- [Verticality](@ref CGAL::Classification::Feature::Verticality) compares the local normal vector to the vertical vector.

These features are designed for point sets but can easily be used with surface meshes as well (see \ref Classification_meshes). For more details about how these different features can help to identify one label or the other, please refer to their associated reference manual pages.

@@ -108,6 +111,8 @@ Multiple scales that are sequentially larger can be used to increase the quality

Note that using this class in order to generate features is not mandatory, as features and data structures can all be handled by hand. It is mainly provided to make the specific case of point sets simpler to handle. Users can still add their own features within their feature set, as sketched below.
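
As an illustration of that last point, the following sketch shows the general shape of a hand-written feature; it mirrors the `Height_above`/`Height_below` features added by this pull request (derive from `Feature_base`, set a name, override `value()`). The intensity map used here is a hypothetical example, not part of the package (for a value readily stored in a property map, [Simple_feature](@ref CGAL::Classification::Feature::Simple_feature) already covers this case).

\code{.cpp}
// Hypothetical user feature returning, for each point, a scalar stored in a
// property map (an "intensity" map in this sketch).
template <typename PointRange, typename FloatMap>
class My_intensity_feature : public CGAL::Classification::Feature_base
{
  const PointRange& input;
  FloatMap intensity_map;

public:

  My_intensity_feature (const PointRange& input, FloatMap intensity_map)
    : input (input), intensity_map (intensity_map)
  {
    this->set_name ("my_intensity");
  }

  // Value of the feature for the item at position pt_index in the input range
  virtual float value (std::size_t pt_index)
  {
    return float (get (intensity_map, *(input.begin() + pt_index)));
  }
};

// ...

Feature_set features;
features.add<My_intensity_feature<Point_set, Float_map> > (pts, intensity_map);
\endcode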

Some data structures instantiated by the generator are used by the features: for this reason, the generator should be instantiated _within the same scope_ as the feature set and should not be deleted before the feature set.

The following snippet shows how to use the point set feature generator:

\snippet Classification/example_generation_and_training.cpp Generator
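
For convenience, here is a condensed version of what that snippet does, adapted from the new `example_tensorflow_neural_network.cpp` added further down in this pull request (5 scales, with the new parallel feature computation when \cgal is linked with TBB):

\code{.cpp}
typedef CGAL::Simple_cartesian<double> Kernel;
typedef CGAL::Point_set_3<Kernel::Point_3> Point_set;
typedef CGAL::Classification::Feature_set Feature_set;
typedef CGAL::Classification::Point_set_feature_generator<Kernel, Point_set, Point_set::Point_map>
  Feature_generator;

Point_set pts;        // filled from an input file
Feature_set features; // the generator below must stay alive as long as these features are used

Feature_generator generator (pts, pts.point_map(), 5); // using 5 scales

#ifdef CGAL_LINKED_WITH_TBB
features.begin_parallel_additions();
#endif

generator.generate_point_based_features (features);

#ifdef CGAL_LINKED_WITH_TBB
features.end_parallel_additions();
#endif
\endcode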

@@ -196,13 +201,17 @@ Example of cluster classification mesh (left: input, middle: clusters computed f

%Classification relies on a classifier: this classifier is an object that, from the set of values taken by the features at an input item, computes the probability that an input item belongs to one label or another. A model of the concept `CGAL::Classification::Classifier` must take the index of an input item and store the probability associated with each label in a vector. If a classifier returns the value 1 for a pair of label and input item, it means that this item belongs to this label with certainty; values close to 0 mean that this item is not likely to belong to this label.
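
For illustration, a minimal hand-written model of this concept could look as follows; the `operator()` signature (item index in, one value per label written to a vector sized by the caller) is inferred from the description above and from the classifiers in this pull request, so treat it as a sketch rather than as the normative interface.

\code{.cpp}
// Toy classifier: returns probability 1 for a precomputed guessed label and 0
// for all the others. Real classifiers fill `out` with one probability per
// label, in the same order as the label set.
struct My_naive_classifier
{
  const std::vector<std::size_t>& guessed_label; // one guessed label index per input item

  My_naive_classifier (const std::vector<std::size_t>& guessed_label)
    : guessed_label (guessed_label) { }

  void operator() (std::size_t item_index, std::vector<float>& out) const
  {
    for (std::size_t i = 0; i < out.size(); ++ i)
      out[i] = (i == guessed_label[item_index] ? 1.f : 0.f);
  }
};
\endcode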

-\cgal provides three models for this concept, [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier), [OpenCV_random_forest_classifier](@ref CGAL::Classification::OpenCV_random_forest_classifier) and [Sum_of_weighted_features_classifier](@ref CGAL::Classification::Sum_of_weighted_features_classifier).
+\cgal provides four models for this concept, [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier), [OpenCV::Random_forest_classifier](@ref CGAL::Classification::OpenCV::Random_forest_classifier), [TensorFlow::Neural_network_classifier](@ref CGAL::Classification::TensorFlow::Neural_network_classifier) and [Sum_of_weighted_features_classifier](@ref CGAL::Classification::Sum_of_weighted_features_classifier).

-To perform classification based on these classifiers, please refer to \ref Classification_classification_functions.
+\note Currently, [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier)
+is the best classifier available in \cgal and we strongly advise users
+to use it.
+
+To perform classification based on these four classifiers, please refer to \ref Classification_classification_functions.

\subsection Classification_ETHZ_random_forest ETHZ Random Forest

-\cgal provides [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier),
+\cgal provides [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier),
a classifier based on the Random Forest Template Library developed by
Stefan Walk at ETH Zurich \cgalCite{cgal:w-erftl-14} (the library is
distributed under the MIT license and is included with the \cgal release,

@@ -210,9 +219,6 @@ the user does not have to install anything more). This classifier uses
a ground truth training set to construct several decision trees that
are then used to assign a label to each input item.
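
A minimal usage sketch (construction, training, then serialization of the trained configuration), based on the examples of this package; the label set, feature set and ground truth vector are assumed to be already filled, and the output file name is a placeholder:

\code{.cpp}
namespace Classification = CGAL::Classification;

Classification::ETHZ::Random_forest_classifier classifier (labels, features);

// Ground truth = one label index per input item (-1 for unlabeled items).
// Default training parameters: 25 trees of maximum depth 20; with this pull
// request the training runs in parallel when \cgal is linked with TBB.
classifier.train (ground_truth);

// Save the trained configuration (gzip-compressed) for later reuse with
// load_configuration():
std::ofstream out ("trained_forest.gz", std::ios_base::out | std::ios_base::binary);
classifier.save_configuration (out);
\endcode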

-__This classifier is currently the best available in \cgal and we
-strongly advise users to use it.__

This classifier cannot be set up by hand and requires a ground truth
training set. The training algorithm is fast but usually requires a
high number of inliers. The training algorithm uses more memory at

@@ -227,14 +233,13 @@ to README provided in the [ETH Zurich's code archive](https://www.ethz.ch/conten

\subsection Classification_OpenCV_random_forest OpenCV Random Forest

-The second classifier is [OpenCV_random_forest_classifier](@ref CGAL::Classification::OpenCV_random_forest_classifier).
+The second classifier is [OpenCV::Random_forest_classifier](@ref CGAL::Classification::OpenCV::Random_forest_classifier).
It uses the \ref thirdpartyOpenCV library, more specifically the
[Random Trees](http://docs.opencv.org/2.4/modules/ml/doc/random_trees.html)
package.

Note that this classifier usually produces results with a lower
-quality than [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier).
+quality than [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier).
It is provided for the sake of completeness and for testing purposes,
but if you are not sure what to use, we advise using the ETHZ Random
Forest instead.

@@ -244,6 +249,31 @@ use this classifier. For more details about the algorithm, please refer
to [the official documentation](http://docs.opencv.org/2.4/modules/ml/doc/random_trees.html)
of OpenCV.
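
Its interface follows the same pattern as the other classifiers; as a sketch (the training call is assumed to take a ground-truth range of label indices like the other models — see the ETHZ sketch above for the full workflow):

\code{.cpp}
Classification::OpenCV::Random_forest_classifier classifier (labels, features);
classifier.train (ground_truth);
\endcode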

+\subsection Classification_TensorFlow_neural_network TensorFlow Neural Network
+
+\cgal provides [TensorFlow::Neural_network_classifier](@ref CGAL::Classification::TensorFlow::Neural_network_classifier).
+
+It uses the C++ API of the \ref thirdpartyTensorFlow library.
+
+\warning This feature is still experimental: it may not be stable
+and is likely to undergo substantial changes in future releases of
+\cgal. The API changes will be announced in the release notes.
+
+The provided interface is a feature-based neural network: a set of
+features is used as an input layer followed by a user-specified number
+of hidden layers with a user-specified activation function. The output
+layer is a softmax layer providing, for each label, the probability
+that an input item belongs to it.
+
+This classifier cannot be set up by hand and requires a ground truth
+training set. The training algorithm usually requires a higher number
+of inliers than random forest. The quality of the results, so far, is
+comparable to random forest.
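
A minimal sketch, condensed from the new `example_tensorflow_neural_network.cpp` added by this pull request (label set, feature set, ground truth and feature generator are assumed to be already set up as in the other examples):

\code{.cpp}
namespace Classification = CGAL::Classification;

// Template parameter (activation function of the hidden layers) left to its default
Classification::TensorFlow::Neural_network_classifier<> classifier (labels, features);

classifier.train (ground_truth,
                  true, // restart from scratch
                  100); // 100 iterations

// The trained classifier is then used like any other model of `Classifier`:
std::vector<int> label_indices (pts.size(), -1);
Classification::classify_with_graphcut<CGAL::Sequential_tag>
  (pts, pts.point_map(), labels, classifier,
   generator.neighborhood().k_neighbor_query(12),
   0.2f, 1, label_indices);
\endcode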

+An [example](\ref Classification_example_tensorflow_neural_network) shows how to
+use this classifier. For more details about the algorithm, please refer
+to [the TensorFlow tutorials](https://www.tensorflow.org/tutorials/).
+
\subsection Classification_sowf Sum of Weighted Features

This last classifier defines the following attributes:

@@ -334,7 +364,7 @@ Top-Left: input point set. Top-Right: raw output classification represented by a

Mathematical details are provided hereafter.

-\subsection Classification_classify Raw classification
+\subsection Classification_classify Raw Classification

- `CGAL::Classification::classify()`: this is the fastest method
that provides acceptable but usually noisy results (see Figure

@@ -478,23 +508,31 @@ The following example:

\subsection Classification_example_ethz_random_forest ETHZ Random Forest

-The following example shows how to use the classifier [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier) using an input training set.
+The following example shows how to use the classifier [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier) using an input training set.

\cgalExample{Classification/example_ethz_random_forest.cpp}

\subsection Classification_example_opencv_random_forest OpenCV Random Forest

-The following example shows how to use the classifier [OpenCV_random_forest_classifier](@ref CGAL::Classification::OpenCV_random_forest_classifier) using an input training set.
+The following example shows how to use the classifier [OpenCV::Random_forest_classifier](@ref CGAL::Classification::OpenCV::Random_forest_classifier) using an input training set.

\cgalExample{Classification/example_opencv_random_forest.cpp}

+\subsection Classification_example_tensorflow_neural_network TensorFlow Neural Network
+
+The following example shows how to use the classifier
+[TensorFlow::Neural_network_classifier](@ref CGAL::Classification::TensorFlow::Neural_network_classifier)
+using an input training set.
+
+\cgalExample{Classification/example_tensorflow_neural_network.cpp}
+
\subsection Classification_example_mesh Mesh Classification

The following example:

- reads a mesh in OFF format;
- automatically generates features on 5 scales;
-- loads a configuration file for classifier [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier);
+- loads a configuration file for classifier [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier);
- runs the algorithm using the graphcut regularization.

\cgalExample{Classification/example_mesh_classification.cpp}

@@ -509,7 +547,7 @@ The following example:

- detects planes using the algorithm `CGAL::Shape_detection_3::Region_growing`;
- creates [Cluster](@ref CGAL::Classification::Cluster) objects from these detected planes;
- computes cluster features from the pointwise features;
-- loads a configuration file for classifier [ETHZ_random_forest_classifier](@ref CGAL::Classification::ETHZ_random_forest_classifier);
+- loads a configuration file for classifier [ETHZ::Random_forest_classifier](@ref CGAL::Classification::ETHZ::Random_forest_classifier);
- runs the raw classification algorithm.

\cgalExample{Classification/example_cluster_classification.cpp}

@@ -517,7 +555,7 @@ The following example:

\section Classification_history History

-This package is based on a research code by [Florent Lafarge](https://www-sop.inria.fr/members/Florent.Lafarge/) that was generalized, extended and packaged by [Simon Giraudot](http://geometryfactory.com/who-we-are/) in \cgal 4.12. %Classification of surface meshes and of clusters were introduced in \cgal 4.13.
+This package is based on a research code by [Florent Lafarge](https://www-sop.inria.fr/members/Florent.Lafarge/) that was generalized, extended and packaged by [Simon Giraudot](http://geometryfactory.com/who-we-are/) in \cgal 4.12. %Classification of surface meshes and of clusters were introduced in \cgal 4.13. The Neural Network classifier was introduced in \cgal 4.14.

@@ -13,8 +13,9 @@ Concept describing a classifier used by classification functions (see

`CGAL::Classification::classify_with_graphcut()`).

\cgalHasModel `CGAL::Classification::Sum_of_weighted_features_classifier`
-\cgalHasModel `CGAL::Classification::ETHZ_random_forest_classifier`
-\cgalHasModel `CGAL::Classification::OpenCV_random_forest_classifier`
+\cgalHasModel `CGAL::Classification::ETHZ::Random_forest_classifier`
+\cgalHasModel `CGAL::Classification::OpenCV::Random_forest_classifier`
+\cgalHasModel `CGAL::Classification::TensorFlow::Neural_network_classifier`

*/
class Classifier

@@ -14,6 +14,21 @@ Functions that perform classification based on a set of labels and a classifier,

Classifiers are functors that, given a label set and an input item, associate this input item with an energy for each label. This energy measures the likelihood that the item belongs to this label.

+\defgroup PkgClassificationClassifiersETHZ ETHZ
+\ingroup PkgClassificationClassifiers
+
+Classifiers that use the ETHZ library.
+
+\defgroup PkgClassificationClassifiersOpenCV OpenCV
+\ingroup PkgClassificationClassifiers
+
+Classifiers that use the \ref thirdpartyOpenCV library.
+
+\defgroup PkgClassificationClassifiersTensorFlow TensorFlow
+\ingroup PkgClassificationClassifiers
+
+Classifiers that use the \ref thirdpartyTensorFlow library.
+
\defgroup PkgClassificationDataStructures Common Data Structures
\ingroup PkgClassificationRef

@@ -86,9 +101,10 @@ Data structures specialized to classify clusters.

## Classifiers ##

+- `CGAL::Classification::ETHZ::Random_forest_classifier`
+- `CGAL::Classification::OpenCV::Random_forest_classifier`
+- `CGAL::Classification::TensorFlow::Neural_network_classifier<ActivationFunction>`
- `CGAL::Classification::Sum_of_weighted_features_classifier`
-- `CGAL::Classification::ETHZ_random_forest_classifier`
-- `CGAL::Classification::OpenCV_random_forest_classifier`

## Common Data Structures ##

@@ -115,8 +131,11 @@ Data structures specialized to classify clusters.

- `CGAL::Classification::Feature::Echo_scatter<GeomTraits, PointRange, PointMap, EchoMap>`
- `CGAL::Classification::Feature::Eigenvalue`
- `CGAL::Classification::Feature::Elevation<GeomTraits, PointRange, PointMap>`
+- `CGAL::Classification::Feature::Height_above<GeomTraits, PointRange, PointMap>`
+- `CGAL::Classification::Feature::Height_below<GeomTraits, PointRange, PointMap>`
- `CGAL::Classification::Feature::Simple_feature<InputRange, PropertyMap>`
- `CGAL::Classification::Feature::Vertical_dispersion<GeomTraits, PointRange, PointMap>`
+- `CGAL::Classification::Feature::Vertical_range<GeomTraits, PointRange, PointMap>`
- `CGAL::Classification::Feature::Verticality<GeomTraits>`

## Point Set Classification ##

@@ -4,6 +4,7 @@

\example Classification/example_generation_and_training.cpp
\example Classification/example_ethz_random_forest.cpp
\example Classification/example_opencv_random_forest.cpp
+\example Classification/example_tensorflow_neural_network.cpp
\example Classification/example_mesh_classification.cpp
\example Classification/example_cluster_classification.cpp
*/

@@ -107,6 +107,21 @@ else()
  message(STATUS "NOTICE: OpenCV was not found. OpenCV random forest predicate for classification won't be available.")
endif()

+find_package(TensorFlow QUIET)
+if (TensorFlow_FOUND)
+  message(STATUS "Found TensorFlow")
+  include_directories( ${TensorFlow_INCLUDE_DIR} )
+  set(classification_linked_libraries ${classification_linked_libraries}
+    ${TensorFlow_LIBRARY})
+  set(classification_compile_definitions ${classification_compile_definitions}
+    "-DCGAL_LINKED_WITH_TENSORFLOW")
+
+  set(targets ${targets} example_tensorflow_neural_network)
+else()
+  message(STATUS "NOTICE: TensorFlow not found, Neural Network predicate for classification won't be available.")
+endif()

# Creating targets with correct libraries and flags
foreach(target ${targets})
  create_single_source_cgal_program( "${target}.cpp" CXX_FEATURES ${needed_cxx_features} )

@@ -146,7 +146,7 @@ int main (int argc, char** argv)

///////////////////////////////////////////////////////////////////
//! [Classify]
-std::vector<std::size_t> label_indices;
+std::vector<int> label_indices (pts.size(), -1);

CGAL::Real_timer t;
t.start();

@@ -200,7 +200,7 @@ int main (int argc, char** argv)
{
f << pts[i] << " ";

-Label_handle label = labels[label_indices[i]];
+Label_handle label = labels[std::size_t(label_indices[i])];
if (label == ground)
f << "245 180 0" << std::endl;
else if (label == vegetation)

@@ -191,7 +191,7 @@ int main (int argc, char** argv)
std::vector<int> label_indices(clusters.size(), -1);

std::cerr << "Using ETHZ Random Forest Classifier" << std::endl;
-Classification::ETHZ_random_forest_classifier classifier (labels, features);
+Classification::ETHZ::Random_forest_classifier classifier (labels, features);

std::cerr << "Loading configuration" << std::endl;
std::ifstream in_config (filename_config, std::ios_base::in | std::ios_base::binary);

@@ -91,7 +91,7 @@ int main (int argc, char** argv)
std::vector<int> label_indices(pts.size(), -1);

std::cerr << "Using ETHZ Random Forest Classifier" << std::endl;
-Classification::ETHZ_random_forest_classifier classifier (labels, features);
+Classification::ETHZ::Random_forest_classifier classifier (labels, features);

std::cerr << "Training" << std::endl;
t.reset();

@@ -114,7 +114,7 @@ int main (int argc, char** argv)
classifier.set_effect (b, my_feature, Classifier::PENALIZING);

std::cerr << "Classifying" << std::endl;
-std::vector<std::size_t> label_indices(pts.size(), -1);
+std::vector<int> label_indices(pts.size(), -1);
Classification::classify_with_graphcut<CGAL::Sequential_tag>
(pts, Pmap(), labels, classifier,
neighborhood.k_neighbor_query(12),

@@ -85,7 +85,7 @@ int main (int argc, char** argv)
std::vector<int> label_indices(mesh.number_of_faces(), -1);

std::cerr << "Using ETHZ Random Forest Classifier" << std::endl;
-Classification::ETHZ_random_forest_classifier classifier (labels, features);
+Classification::ETHZ::Random_forest_classifier classifier (labels, features);

std::cerr << "Loading configuration" << std::endl;
std::ifstream in_config (filename_config, std::ios_base::in | std::ios_base::binary);

@@ -90,7 +90,7 @@ int main (int argc, char** argv)
std::vector<int> label_indices(pts.size(), -1);

std::cerr << "Using OpenCV Random Forest Classifier" << std::endl;
-Classification::OpenCV_random_forest_classifier classifier (labels, features);
+Classification::OpenCV::Random_forest_classifier classifier (labels, features);

std::cerr << "Training" << std::endl;
t.reset();

@@ -0,0 +1,164 @@
#if defined (_MSC_VER) && !defined (_WIN64)
#pragma warning(disable:4244) // boost::number_distance::distance()
                              // converts 64 to 32 bits integers
#endif

#include <cstdlib>
#include <fstream>
#include <iostream>
#include <string>

#include <CGAL/Simple_cartesian.h>
#include <CGAL/Classification.h>
#include <CGAL/Point_set_3.h>
#include <CGAL/Point_set_3/IO.h>

#include <CGAL/Real_timer.h>

typedef CGAL::Simple_cartesian<double> Kernel;
typedef Kernel::Point_3 Point;
typedef CGAL::Point_set_3<Point> Point_set;
typedef Kernel::Iso_cuboid_3 Iso_cuboid_3;

typedef Point_set::Point_map Pmap;
typedef Point_set::Property_map<int> Imap;
typedef Point_set::Property_map<unsigned char> UCmap;

namespace Classification = CGAL::Classification;

typedef Classification::Label_handle   Label_handle;
typedef Classification::Feature_handle Feature_handle;
typedef Classification::Label_set      Label_set;
typedef Classification::Feature_set    Feature_set;

typedef Classification::Point_set_feature_generator<Kernel, Point_set, Pmap> Feature_generator;


int main (int argc, char** argv)
{
  std::string filename = "data/b9_training.ply";

  if (argc > 1)
    filename = argv[1];

  std::ifstream in (filename.c_str(), std::ios::binary);
  Point_set pts;

  std::cerr << "Reading input" << std::endl;
  in >> pts;

  Imap label_map;
  bool lm_found = false;
  boost::tie (label_map, lm_found) = pts.property_map<int> ("label");
  if (!lm_found)
  {
    std::cerr << "Error: \"label\" property not found in input file." << std::endl;
    return EXIT_FAILURE;
  }

  std::vector<int> ground_truth;
  ground_truth.reserve (pts.size());
  std::copy (pts.range(label_map).begin(), pts.range(label_map).end(),
             std::back_inserter (ground_truth));

  Feature_set features;

  std::cerr << "Generating features" << std::endl;
  CGAL::Real_timer t;
  t.start();
  Feature_generator generator (pts, pts.point_map(),
                               5); // using 5 scales

#ifdef CGAL_LINKED_WITH_TBB
  features.begin_parallel_additions();
#endif

  generator.generate_point_based_features (features);

#ifdef CGAL_LINKED_WITH_TBB
  features.end_parallel_additions();
#endif

  t.stop();
  std::cerr << "Done in " << t.time() << " second(s)" << std::endl;

  // Add types
  Label_set labels;
  Label_handle ground = labels.add ("ground");
  Label_handle vegetation = labels.add ("vegetation");
  Label_handle roof = labels.add ("roof");

  std::vector<int> label_indices(pts.size(), -1);

  std::cerr << "Using TensorFlow neural network Classifier" << std::endl;
  Classification::TensorFlow::Neural_network_classifier<> classifier (labels, features);

  std::cerr << "Training" << std::endl;
  t.reset();
  t.start();
  classifier.train (ground_truth,
                    true, // restart from scratch
                    100); // 100 iterations
  t.stop();
  std::cerr << "Done in " << t.time() << " second(s)" << std::endl;

  t.reset();
  t.start();
  Classification::classify_with_graphcut<CGAL::Sequential_tag>
    (pts, pts.point_map(), labels, classifier,
     generator.neighborhood().k_neighbor_query(12),
     0.2f, 1, label_indices);
  t.stop();

  std::cerr << "Classification with graphcut done in " << t.time() << " second(s)" << std::endl;

  std::cerr << "Precision, recall, F1 scores and IoU:" << std::endl;
  Classification::Evaluation evaluation (labels, ground_truth, label_indices);

  for (std::size_t i = 0; i < labels.size(); ++ i)
  {
    std::cerr << " * " << labels[i]->name() << ": "
              << evaluation.precision(labels[i]) << " ; "
              << evaluation.recall(labels[i]) << " ; "
              << evaluation.f1_score(labels[i]) << " ; "
              << evaluation.intersection_over_union(labels[i]) << std::endl;
  }

  std::cerr << "Accuracy = " << evaluation.accuracy() << std::endl
            << "Mean F1 score = " << evaluation.mean_f1_score() << std::endl
            << "Mean IoU = " << evaluation.mean_intersection_over_union() << std::endl;

  // Color point set according to class
  UCmap red = pts.add_property_map<unsigned char>("red", 0).first;
  UCmap green = pts.add_property_map<unsigned char>("green", 0).first;
  UCmap blue = pts.add_property_map<unsigned char>("blue", 0).first;

  for (std::size_t i = 0; i < label_indices.size(); ++ i)
  {
    label_map[i] = label_indices[i]; // update label map with computed classification

    Label_handle label = labels[label_indices[i]];

    if (label == ground)
    {
      red[i] = 245; green[i] = 180; blue[i] = 0;
    }
    else if (label == vegetation)
    {
      red[i] = 0; green[i] = 255; blue[i] = 27;
    }
    else if (label == roof)
    {
      red[i] = 255; green[i] = 0; blue[i] = 170;
    }
  }

  // Write result
  std::ofstream f ("classification.ply");
  f.precision(18);
  f << pts;

  std::cerr << "All done" << std::endl;

  return EXIT_SUCCESS;
}

@@ -25,10 +25,14 @@

#include <CGAL/Classification/classify.h>
#include <CGAL/Classification/Sum_of_weighted_features_classifier.h>
-#include <CGAL/Classification/ETHZ_random_forest_classifier.h>
+#include <CGAL/Classification/ETHZ/Random_forest_classifier.h>

#ifdef CGAL_LINKED_WITH_OPENCV
-#include <CGAL/Classification/OpenCV_random_forest_classifier.h>
+#include <CGAL/Classification/OpenCV/Random_forest_classifier.h>
#endif

+#ifdef CGAL_LINKED_WITH_TENSORFLOW
+#include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>
+#endif

#include <CGAL/Classification/Cluster.h>
@ -25,6 +25,7 @@
|
|||
|
||||
#include <CGAL/Classification/Feature_set.h>
|
||||
#include <CGAL/Classification/Label_set.h>
|
||||
#include <CGAL/Classification/internal/verbosity.h>
|
||||
|
||||
#ifdef CGAL_CLASSIFICATION_VERBOSE
|
||||
#define VERBOSE_TREE_PROGRESS 1
|
||||
|
|
@ -41,8 +42,10 @@
|
|||
# pragma warning(disable:4996)
|
||||
#endif
|
||||
|
||||
#include <CGAL/Classification/internal/auxiliary/random-forest/node-gini.hpp>
|
||||
#include <CGAL/Classification/internal/auxiliary/random-forest/forest.hpp>
|
||||
#include <CGAL/Classification/ETHZ/internal/random-forest/node-gini.hpp>
|
||||
#include <CGAL/Classification/ETHZ/internal/random-forest/forest.hpp>
|
||||
|
||||
#include <CGAL/tags.h>
|
||||
|
||||
#include <boost/archive/text_iarchive.hpp>
|
||||
#include <boost/archive/text_oarchive.hpp>
|
||||
|
|
@ -58,16 +61,18 @@ namespace CGAL {
|
|||
|
||||
namespace Classification {
|
||||
|
||||
/*!
|
||||
\ingroup PkgClassificationClassifiers
|
||||
namespace ETHZ {
|
||||
|
||||
\brief %Classifier based on the ETH Zurich version of random forest algorithm \cgalCite{cgal:w-erftl-14}.
|
||||
/*!
|
||||
\ingroup PkgClassificationClassifiersETHZ
|
||||
|
||||
\brief %Classifier based on the ETH Zurich version of the random forest algorithm \cgalCite{cgal:w-erftl-14}.
|
||||
|
||||
\note This classifier is distributed under the MIT license.
|
||||
|
||||
\cgalModels `CGAL::Classification::Classifier`
|
||||
*/
|
||||
class ETHZ_random_forest_classifier
|
||||
class Random_forest_classifier
|
||||
{
|
||||
typedef CGAL::internal::liblearning::RandomForest::RandomForest
|
||||
< CGAL::internal::liblearning::RandomForest::NodeGini
|
||||
|
|
@ -83,16 +88,36 @@ public:
|
|||
/// @{
|
||||
|
||||
/*!
|
||||
\brief Instantiate the classifier using the sets of `labels` and `features`.
|
||||
\brief Instantiates the classifier using the sets of `labels` and `features`.
|
||||
|
||||
*/
|
||||
ETHZ_random_forest_classifier (const Label_set& labels,
|
||||
const Feature_set& features)
|
||||
Random_forest_classifier (const Label_set& labels,
|
||||
const Feature_set& features)
|
||||
: m_labels (labels), m_features (features), m_rfc (NULL)
|
||||
{ }
|
||||
|
||||
/*!
|
||||
\brief Copies the `other` classifier's configuration using another
|
||||
set of `features`.
|
||||
|
||||
This constructor can be used to apply a trained random forest to
|
||||
another data set.
|
||||
|
||||
\warning The feature set should be composed of the same features
as the ones used by `other`, and in the same order.
|
||||
|
||||
*/
|
||||
Random_forest_classifier (const Random_forest_classifier& other,
|
||||
const Feature_set& features)
|
||||
: m_labels (other.m_labels), m_features (features), m_rfc (NULL)
|
||||
{
|
||||
std::stringstream stream;
|
||||
other.save_configuration(stream);
|
||||
this->load_configuration(stream);
|
||||
}
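
  // Usage sketch (illustrative): train on one data set, then reuse the trained
  // forest on another data set that provides the same features in the same
  // order, e.g.
  //
  //   ETHZ::Random_forest_classifier classifier_1 (labels, features_1);
  //   classifier_1.train (ground_truth_1);
  //   ETHZ::Random_forest_classifier classifier_2 (classifier_1, features_2);
  //   // classifier_2 can classify the second data set without retraining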
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
~ETHZ_random_forest_classifier ()
|
||||
~Random_forest_classifier ()
|
||||
{
|
||||
if (m_rfc != NULL)
|
||||
delete m_rfc;
|
||||
|
|
@ -102,8 +127,23 @@ public:
|
|||
/// @}
|
||||
|
||||
/// \name Training
|
||||
|
||||
/// @{
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
template <typename LabelIndexRange>
|
||||
void train (const LabelIndexRange& ground_truth,
|
||||
bool reset_trees = true,
|
||||
std::size_t num_trees = 25,
|
||||
std::size_t max_depth = 20)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_TBB
|
||||
train<CGAL::Parallel_tag>(ground_truth, reset_trees, num_trees, max_depth);
|
||||
#else
|
||||
train<CGAL::Sequential_tag>(ground_truth, reset_trees, num_trees, max_depth);
|
||||
#endif
|
||||
}
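
  // Usage sketch (illustrative): the concurrency tag can also be chosen
  // explicitly instead of relying on the dispatch above, e.g.
  //
  //   classifier.train<CGAL::Parallel_tag> (ground_truth,
  //                                         true, // reset_trees
  //                                         50,   // number of trees
  //                                         30);  // maximum depth of the trees
  //
  // The last two values are arbitrary examples; the defaults are 25 trees of
  // maximum depth 20.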
|
||||
/// \endcond
|
||||
|
||||
/*!
|
||||
\brief Runs the training algorithm.
|
||||
|
||||
|
|
@ -114,6 +154,11 @@ public:
|
|||
\pre At least one ground truth item should be assigned to each
|
||||
label.
|
||||
|
||||
\tparam ConcurrencyTag enables sequential versus parallel
algorithm. Possible values are `Parallel_tag` (default value if
%CGAL is linked with TBB) or `Sequential_tag` (default value
otherwise).
|
||||
|
||||
\param ground_truth vector of label indices. It should contain for
|
||||
each input item, in the same order as the input set, the index of
|
||||
the corresponding label in the `Label_set` provided in the
|
||||
|
|
@ -135,7 +180,7 @@ public:
|
|||
will underfit the test data and conversely an overly high value
|
||||
will likely overfit.
|
||||
*/
|
||||
template <typename LabelIndexRange>
|
||||
template <typename ConcurrencyTag, typename LabelIndexRange>
|
||||
void train (const LabelIndexRange& ground_truth,
|
||||
bool reset_trees = true,
|
||||
std::size_t num_trees = 25,
|
||||
|
|
@ -159,7 +204,7 @@ public:
|
|||
}
|
||||
}
|
||||
|
||||
std::cerr << "Using " << gt.size() << " inliers" << std::endl;
|
||||
CGAL_CLASSIFICATION_CERR << "Using " << gt.size() << " inliers" << std::endl;
|
||||
|
||||
CGAL::internal::liblearning::DataView2D<int> label_vector (&(gt[0]), gt.size(), 1);
|
||||
CGAL::internal::liblearning::DataView2D<float> feature_vector(&(ft[0]), gt.size(), ft.size() / gt.size());
|
||||
|
|
@ -175,7 +220,8 @@ public:
|
|||
|
||||
CGAL::internal::liblearning::RandomForest::AxisAlignedRandomSplitGenerator generator;
|
||||
|
||||
m_rfc->train(feature_vector, label_vector, CGAL::internal::liblearning::DataView2D<int>(), generator, 0, false, reset_trees);
|
||||
m_rfc->train<ConcurrencyTag>
|
||||
(feature_vector, label_vector, CGAL::internal::liblearning::DataView2D<int>(), generator, 0, reset_trees, m_labels.size());
|
||||
}
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
|
|
@ -195,10 +241,46 @@ public:
|
|||
for (std::size_t i = 0; i < out.size(); ++ i)
|
||||
out[i] = (std::min) (1.f, (std::max) (0.f, prob[i]));
|
||||
}
|
||||
|
||||
/// \endcond
|
||||
|
||||
/// @}
|
||||
|
||||
/// \name Miscellaneous
|
||||
/// @{
|
||||
|
||||
/*!
|
||||
\brief Computes, for each feature, how many nodes in the forest
use it as a split criterion.
|
||||
|
||||
Each tree of the random forest recursively splits the training
|
||||
data set using at each node one of the input features. This method
|
||||
counts, for each feature, how many times it was selected by the
|
||||
training algorithm as a split criterion.
|
||||
|
||||
This method can be used to evaluate how useful a feature is with
respect to a training set: if a feature is used a lot, that means
|
||||
that it has a strong discriminative power with respect to how the
|
||||
labels are represented by the feature set; on the contrary, if a
|
||||
feature is not used very often, its discriminative power is
|
||||
probably low; if a feature is _never_ used, it likely has no
|
||||
interest at all and is completely uncorrelated to the label
|
||||
segmentation of the training set.
|
||||
|
||||
\param count vector where the result is stored. After running the
|
||||
method, it contains, for each feature, the number of nodes in the
|
||||
forest that use it as a split criterion, in the same order as the
|
||||
feature set order.
|
||||
*/
|
||||
void get_feature_usage (std::vector<std::size_t>& count) const
|
||||
{
|
||||
count.clear();
|
||||
count.resize(m_features.size(), 0);
|
||||
return m_rfc->get_feature_usage(count);
|
||||
}
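
  // Usage sketch (illustrative): rank the features by how often the forest
  // splits on them (feature names as exposed by the feature set):
  //
  //   std::vector<std::size_t> count;
  //   classifier.get_feature_usage (count);
  //   for (std::size_t i = 0; i < features.size(); ++ i)
  //     std::cerr << features[i]->name() << " used in "
  //               << count[i] << " node(s)" << std::endl;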
|
||||
|
||||
/// @}
|
||||
|
||||
/// \name Input/Output
|
||||
/// @{
|
||||
|
||||
|
|
@ -211,7 +293,7 @@ public:
|
|||
The output file is written in a GZIP container that is readable
by the `load_configuration()` method.
|
||||
*/
|
||||
void save_configuration (std::ostream& output)
|
||||
void save_configuration (std::ostream& output) const
|
||||
{
|
||||
boost::iostreams::filtering_ostream outs;
|
||||
outs.push(boost::iostreams::gzip_compressor());
|
||||
|
|
@ -247,6 +329,13 @@ public:
|
|||
|
||||
}
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
// Backward compatibility
|
||||
typedef ETHZ::Random_forest_classifier ETHZ_random_forest_classifier;
|
||||
/// \endcond
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
#endif // CGAL_CLASSIFICATION_ETHZ_RANDOM_FOREST_CLASSIFIER_H
|
||||
|
|
@ -27,6 +27,16 @@
|
|||
// Modifications from original library:
|
||||
// * changed inclusion protection tag
|
||||
// * moved to namespace CGAL::internal::
|
||||
// * init_feature_class_data() does not resize anymore (it's done
|
||||
// later directly in the splitter). WARNING: all splitters other
|
||||
//   than the default will not work correctly (but experimentally
//   they perform worse and we do not use them - we keep them just in
|
||||
// case)
|
||||
// * sample reduction is now 36.8% (to account for the correction of
|
||||
// the randomization of the input which used to implicitly ignore
|
||||
// this proportion of items)
|
||||
// * map_points() in axis aligned splitter now only uses a subset of
//   the points for evaluation (for timing optimization)
|
||||
|
||||
#ifndef CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_COMMON_LIBRARIES_H
|
||||
#define CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_COMMON_LIBRARIES_H
|
||||
|
|
@ -62,9 +72,9 @@ namespace liblearning {
|
|||
namespace RandomForest {
|
||||
|
||||
typedef std::vector< std::pair<float, int> > FeatureClassDataFloat;
|
||||
inline void init_feature_class_data(FeatureClassDataFloat& data, int /*n_classes*/, int n_samples)
|
||||
inline void init_feature_class_data(FeatureClassDataFloat& /*data*/, int /*n_classes*/, int /* n_samples */)
|
||||
{
|
||||
data.resize(n_samples);
|
||||
// data.resize(n_samples);
|
||||
}
|
||||
typedef boost::unordered_set<int> FeatureSet;
|
||||
|
||||
|
|
@ -97,7 +107,7 @@ struct ForestParams {
|
|||
max_depth(42),
|
||||
n_trees(100),
|
||||
min_samples_per_node(5),
|
||||
sample_reduction(0)
|
||||
sample_reduction(0.368f)
|
||||
{}
|
||||
template <typename Archive>
|
||||
void serialize(Archive& ar, unsigned /*version*/)
|
||||
|
|
@ -222,15 +232,21 @@ struct AxisAlignedSplitter {
|
|||
int n_samples,
|
||||
FeatureClassData& data_points) const
|
||||
{
|
||||
for (int i_sample = 0; i_sample < n_samples; ++i_sample) {
|
||||
// determine index of this sample ...
|
||||
int sample_idx = sample_idxes[i_sample];
|
||||
// determine class ...
|
||||
int sample_class = labels(sample_idx, 0);
|
||||
// determine value of the selected feature for this sample
|
||||
FeatureType sample_fval = samples(sample_idx, feature);
|
||||
data_points[i_sample] = std::make_pair(sample_fval, sample_class);
|
||||
}
|
||||
std::size_t size = (std::min)(std::size_t(5000), std::size_t(n_samples));
|
||||
data_points.clear();
|
||||
data_points.reserve(size);
|
||||
|
||||
std::size_t step = n_samples / size;
|
||||
|
||||
for (int i_sample = 0; i_sample < n_samples; i_sample += step) {
|
||||
// determine index of this sample ...
|
||||
int sample_idx = sample_idxes[i_sample];
|
||||
// determine class ...
|
||||
int sample_class = labels(sample_idx, 0);
|
||||
// determine value of the selected feature for this sample
|
||||
FeatureType sample_fval = samples(sample_idx, feature);
|
||||
data_points.push_back(std::make_pair(sample_fval, sample_class));
|
||||
}
|
||||
}
|
||||
template <typename Archive>
|
||||
void serialize(Archive& ar, unsigned /*version*/)
|
||||
|
|
@ -29,6 +29,12 @@
|
|||
// * moved to namespace CGAL::internal::
|
||||
// * add parameter "reset_trees" to train() to be able to construct
|
||||
// forest with several iterations
|
||||
// * training algorithm has been parallelized with Intel TBB
|
||||
// * remove the unused feature "register_obb"
|
||||
// * add option to not count labels (if it is known beforehand)
|
||||
// * fix the randomization of input (which was implicitly losing
|
||||
// samples)
|
||||
// * add method to get feature usage
|
||||
|
||||
#ifndef CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_FOREST_H
|
||||
#define CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_FOREST_H
|
||||
|
|
@ -39,11 +45,80 @@
|
|||
#include <cstdio>
|
||||
#endif
|
||||
|
||||
#include <CGAL/tags.h>
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_TBB
|
||||
#include <tbb/parallel_for.h>
|
||||
#include <tbb/blocked_range.h>
|
||||
#include <tbb/scalable_allocator.h>
|
||||
#include <tbb/mutex.h>
|
||||
#endif // CGAL_LINKED_WITH_TBB
|
||||
|
||||
|
||||
namespace CGAL { namespace internal {
|
||||
|
||||
|
||||
|
||||
namespace liblearning {
|
||||
namespace RandomForest {
|
||||
|
||||
template <typename NodeT, typename SplitGenerator>
|
||||
class Tree_training_functor
|
||||
{
|
||||
typedef typename NodeT::ParamType ParamType;
|
||||
typedef typename NodeT::FeatureType FeatureType;
|
||||
typedef Tree<NodeT> TreeType;
|
||||
|
||||
std::size_t seed_start;
|
||||
const std::vector<int>& sample_idxes;
|
||||
boost::ptr_vector<Tree<NodeT> >& trees;
|
||||
DataView2D<FeatureType> samples;
|
||||
DataView2D<int> labels;
|
||||
std::size_t n_in_bag_samples;
|
||||
const SplitGenerator& split_generator;
|
||||
|
||||
public:
|
||||
|
||||
Tree_training_functor(std::size_t seed_start,
|
||||
const std::vector<int>& sample_idxes,
|
||||
boost::ptr_vector<Tree<NodeT> >& trees,
|
||||
DataView2D<FeatureType> samples,
|
||||
DataView2D<int> labels,
|
||||
std::size_t n_in_bag_samples,
|
||||
const SplitGenerator& split_generator)
|
||||
: seed_start (seed_start)
|
||||
, sample_idxes (sample_idxes)
|
||||
, trees (trees)
|
||||
, samples (samples)
|
||||
, labels (labels)
|
||||
, n_in_bag_samples(n_in_bag_samples)
|
||||
, split_generator(split_generator)
|
||||
{ }
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_TBB
|
||||
void operator()(const tbb::blocked_range<std::size_t>& r) const
|
||||
{
|
||||
for (std::size_t s = r.begin(); s != r.end(); ++ s)
|
||||
apply(s);
|
||||
}
|
||||
#endif // CGAL_LINKED_WITH_TBB
|
||||
|
||||
inline void apply (std::size_t i_tree) const
|
||||
{
|
||||
// initialize random generator with sequential seeds (one for each
|
||||
// tree)
|
||||
RandomGen gen(seed_start + i_tree);
|
||||
std::vector<int> in_bag_samples = sample_idxes;
|
||||
|
||||
// Bagging: draw random sample indexes used for this tree
|
||||
std::random_shuffle (in_bag_samples.begin(),in_bag_samples.end());
|
||||
|
||||
// Train the tree
|
||||
trees[i_tree].train(samples, labels, &in_bag_samples[0], n_in_bag_samples, split_generator, gen);
|
||||
}
|
||||
|
||||
};
|
||||
|
||||
template <typename NodeT>
|
||||
class RandomForest {
|
||||
public:
|
||||
|
|
@ -52,28 +127,29 @@ public:
|
|||
typedef Tree<NodeT> TreeType;
|
||||
ParamType params;
|
||||
|
||||
std::vector<uint8_t> was_oob_data;
|
||||
DataView2D<uint8_t> was_oob;
|
||||
|
||||
boost::ptr_vector< Tree<NodeT> > trees;
|
||||
|
||||
RandomForest() {}
|
||||
RandomForest(ParamType const& params) : params(params) {}
|
||||
|
||||
template<typename SplitGenerator>
|
||||
template<typename ConcurrencyTag, typename SplitGenerator>
|
||||
void train(DataView2D<FeatureType> samples,
|
||||
DataView2D<int> labels,
|
||||
DataView2D<int> train_sample_idxes,
|
||||
SplitGenerator const& split_generator,
|
||||
size_t seed_start = 1,
|
||||
bool register_oob = true,
|
||||
bool reset_trees = true
|
||||
bool reset_trees = true,
|
||||
std::size_t n_classes = std::size_t(-1)
|
||||
)
|
||||
{
|
||||
if (reset_trees)
|
||||
trees.clear();
|
||||
|
||||
params.n_classes = *std::max_element(&labels(0,0), &labels(0,0)+labels.num_elements()) + 1;
|
||||
if (n_classes == std::size_t(-1))
|
||||
params.n_classes = *std::max_element(&labels(0,0), &labels(0,0)+labels.num_elements()) + 1;
|
||||
else
|
||||
params.n_classes = n_classes;
|
||||
|
||||
params.n_features = samples.cols;
|
||||
params.n_samples = samples.rows;
|
||||
|
||||
|
|
@ -93,42 +169,31 @@ public:
|
|||
size_t n_idxes = sample_idxes.size();
|
||||
params.n_in_bag_samples = n_idxes * (1 - params.sample_reduction);
|
||||
|
||||
// Random distribution over indexes
|
||||
UniformIntDist dist(0, n_idxes - 1);
|
||||
|
||||
// Store for each sample and each tree if sample was used for tree
|
||||
if (register_oob) {
|
||||
was_oob_data.assign(n_idxes*params.n_trees, 1);
|
||||
was_oob = DataView2D<uint8_t>(&was_oob_data[0], n_idxes, params.n_trees);
|
||||
}
|
||||
|
||||
std::size_t nb_trees = trees.size();
|
||||
for (size_t i_tree = nb_trees; i_tree < nb_trees + params.n_trees; ++i_tree) {
|
||||
for (std::size_t i_tree = nb_trees; i_tree < nb_trees + params.n_trees; ++ i_tree)
|
||||
trees.push_back (new TreeType(¶ms));
|
||||
|
||||
Tree_training_functor<NodeT, SplitGenerator>
|
||||
f (seed_start, sample_idxes, trees, samples, labels, params.n_in_bag_samples, split_generator);
|
||||
|
||||
#ifndef CGAL_LINKED_WITH_TBB
|
||||
CGAL_static_assertion_msg (!(boost::is_convertible<ConcurrencyTag, Parallel_tag>::value),
|
||||
"Parallel_tag is enabled but TBB is unavailable.");
|
||||
#else
|
||||
if (boost::is_convertible<ConcurrencyTag,Parallel_tag>::value)
|
||||
{
|
||||
tbb::parallel_for(tbb::blocked_range<size_t>(nb_trees, nb_trees + params.n_trees), f);
|
||||
}
|
||||
else
|
||||
#endif
|
||||
{
|
||||
for (size_t i_tree = nb_trees; i_tree < nb_trees + params.n_trees; ++i_tree)
|
||||
{
|
||||
#if VERBOSE_TREE_PROGRESS
|
||||
std::printf("Training tree %zu/%zu, max depth %zu\n", i_tree+1, nb_trees + params.n_trees, params.max_depth);
|
||||
#endif
|
||||
// new tree
|
||||
trees.push_back(new TreeType(¶ms));
|
||||
// initialize random generator with sequential seeds (one for each
|
||||
// tree)
|
||||
RandomGen gen(seed_start + i_tree);
|
||||
// Bagging: draw random sample indexes used for this tree
|
||||
std::vector<int> in_bag_samples(params.n_in_bag_samples);
|
||||
for (size_t i_sample = 0; i_sample < in_bag_samples.size(); ++i_sample) {
|
||||
int random_idx = dist(gen);
|
||||
in_bag_samples[i_sample] = sample_idxes[random_idx];
|
||||
if (register_oob && was_oob(random_idx, i_tree)) {
|
||||
was_oob(random_idx, i_tree) = 0;
|
||||
}
|
||||
}
|
||||
#ifdef TREE_GRAPHVIZ_STREAM
|
||||
TREE_GRAPHVIZ_STREAM << "digraph Tree {" << std::endl;
|
||||
#endif
|
||||
// Train the tree
|
||||
trees.back().train(samples, labels, &in_bag_samples[0], in_bag_samples.size(), split_generator, gen);
|
||||
#ifdef TREE_GRAPHVIZ_STREAM
|
||||
TREE_GRAPHVIZ_STREAM << "}" << std::endl << std::endl;
|
||||
#endif
|
||||
f.apply(i_tree);
|
||||
}
|
||||
}
|
||||
}
|
||||
int evaluate(FeatureType const* sample, float* results) {
|
||||
|
|
@ -177,6 +242,12 @@ public:
|
|||
ar & BOOST_SERIALIZATION_NVP(params);
|
||||
ar & BOOST_SERIALIZATION_NVP(trees);
|
||||
}
|
||||
|
||||
void get_feature_usage (std::vector<std::size_t>& count) const
|
||||
{
|
||||
for (std::size_t i_tree = 0; i_tree < trees.size(); ++i_tree)
|
||||
trees[i_tree].get_feature_usage(count);
|
||||
}
|
||||
};
|
||||
|
||||
}
|
||||
|
|
@ -28,6 +28,11 @@
|
|||
// * changed inclusion protection tag
|
||||
// * moved to namespace CGAL::internal::
|
||||
|
||||
// * improve sorting algorithm by only comparing the first element of the pair
//   (the second is not needed)
|
||||
|
||||
|
||||
|
||||
#ifndef CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_NODE_GINI_H
|
||||
#define CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_NODE_GINI_H
|
||||
#include "node.hpp"
|
||||
|
|
@ -79,7 +84,13 @@ public:
|
|||
n_r += 1;
|
||||
}
|
||||
// sort data so thresholding is easy based on position in array
|
||||
std::sort(data_points.begin(), data_points.end());
|
||||
std::sort(data_points.begin(), data_points.end(),
|
||||
[&](const std::pair<float, int>& a,
|
||||
const std::pair<float, int>& b) -> bool
|
||||
{
|
||||
return a.first < b.first;
|
||||
});
|
||||
|
||||
// loop over data, update class distributions left&right
|
||||
for (size_t i_point = 1; i_point < data_points.size(); ++i_point) {
|
||||
int cls = data_points[i_point-1].second;
|
||||
|
|
@ -30,6 +30,7 @@
|
|||
// * fix computation of node_dist[label] so that results are always <= 1.0
|
||||
// * change serialization functions to avoid a bug with boost and some
|
||||
// compilers (that leads to dereferencing a null pointer)
|
||||
// * add a method to get feature usage
|
||||
|
||||
#ifndef CGAL_INTERNAL_LIBLEARNING_RANDOMFORESTS_NODE_H
|
||||
#define CGAL_INTERNAL_LIBLEARNING_RANDOMFORESTS_NODE_H
|
||||
|
|
@ -257,6 +258,16 @@ public:
|
|||
ar & BOOST_SERIALIZATION_NVP(right);
|
||||
}
|
||||
}
|
||||
|
||||
void get_feature_usage (std::vector<std::size_t>& count) const
|
||||
{
|
||||
if (!is_leaf)
|
||||
{
|
||||
count[std::size_t(splitter.feature)] ++;
|
||||
left->get_feature_usage(count);
|
||||
right->get_feature_usage(count);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
}
|
||||
|
|
@ -27,6 +27,7 @@
|
|||
// Modifications from original library:
|
||||
// * changed inclusion protection tag
|
||||
// * moved to namespace CGAL::internal::
|
||||
// * add a method to get feature usage
|
||||
|
||||
#ifndef CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_TREE_H
|
||||
#define CGAL_INTERNAL_LIBLEARNING_RANDOMFOREST_TREE_H
|
||||
|
|
@ -135,6 +136,10 @@ public:
|
|||
ar & BOOST_SERIALIZATION_NVP(params);
|
||||
ar & BOOST_SERIALIZATION_NVP(root_node);
|
||||
}
|
||||
void get_feature_usage (std::vector<std::size_t>& count) const
|
||||
{
|
||||
root_node->get_feature_usage(count);
|
||||
}
|
||||
};
|
||||
|
||||
}
|
||||
|
|
@ -85,6 +85,8 @@ public:
|
|||
: grid (grid)
|
||||
{
|
||||
this->set_name ("echo_scatter");
|
||||
if (radius_neighbors < 0.)
|
||||
radius_neighbors = 3.f * grid.resolution();
|
||||
|
||||
if (grid.width() * grid.height() > input.size())
|
||||
echo_scatter.resize(input.size(), compressed_float(0));
|
||||
|
|
|
|||
|
|
@ -91,7 +91,7 @@ public:
|
|||
{
|
||||
this->set_name ("elevation");
|
||||
if (radius_dtm < 0.)
|
||||
radius_dtm = 100.f * grid.resolution();
|
||||
radius_dtm = 10.f * grid.resolution();
|
||||
|
||||
//DEM
|
||||
Image_float dem(grid.width(),grid.height());
|
||||
|
|
|
|||
|
|
@ -0,0 +1,142 @@
|
|||
// Copyright (c) 2012 INRIA Sophia-Antipolis (France).
|
||||
// Copyright (c) 2017 GeometryFactory Sarl (France).
|
||||
// All rights reserved.
|
||||
//
|
||||
// This file is part of CGAL (www.cgal.org).
|
||||
// You can redistribute it and/or modify it under the terms of the GNU
|
||||
// General Public License as published by the Free Software Foundation,
|
||||
// either version 3 of the License, or (at your option) any later version.
|
||||
//
|
||||
// Licensees holding a valid commercial license may use this file in
|
||||
// accordance with the commercial license agreement provided with the software.
|
||||
//
|
||||
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
|
||||
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
|
||||
//
|
||||
// $URL$
|
||||
// $Id$
|
||||
// SPDX-License-Identifier: GPL-3.0+
|
||||
//
|
||||
// Author(s) : Florent Lafarge, Simon Giraudot
|
||||
|
||||
#ifndef CGAL_CLASSIFICATION_FEATURE_HEIGHT_ABOVE_H
|
||||
#define CGAL_CLASSIFICATION_FEATURE_HEIGHT_ABOVE_H
|
||||
|
||||
#include <CGAL/license/Classification.h>
|
||||
|
||||
#include <vector>
|
||||
|
||||
#include <CGAL/Classification/Feature_base.h>
|
||||
#include <CGAL/Classification/compressed_float.h>
|
||||
#include <CGAL/Classification/Image.h>
|
||||
#include <CGAL/Classification/Planimetric_grid.h>
|
||||
|
||||
namespace CGAL {
|
||||
|
||||
namespace Classification {
|
||||
|
||||
namespace Feature {
|
||||
|
||||
/*!
|
||||
\ingroup PkgClassificationFeatures
|
||||
|
||||
%Feature based on local height distribution. This feature computes
|
||||
the distance between the maximum height on the local cell of the
|
||||
planimetric grid and a point's height.
|
||||
|
||||
Its default name is "height_above".
|
||||
|
||||
\tparam GeomTraits model of \cgal Kernel.
|
||||
\tparam PointRange model of `ConstRange`. Its iterator type
|
||||
is `RandomAccessIterator` and its value type is the key type of
|
||||
`PointMap`.
|
||||
\tparam PointMap model of `ReadablePropertyMap` whose key
|
||||
type is the value type of the iterator of `PointRange` and value type
|
||||
is `GeomTraits::Point_3`.
|
||||
|
||||
*/
|
||||
template <typename GeomTraits, typename PointRange, typename PointMap>
|
||||
class Height_above : public Feature_base
|
||||
{
|
||||
typedef typename GeomTraits::Iso_cuboid_3 Iso_cuboid_3;
|
||||
|
||||
typedef Image<float> Image_float;
|
||||
typedef Planimetric_grid<GeomTraits, PointRange, PointMap> Grid;
|
||||
|
||||
const PointRange& input;
|
||||
PointMap point_map;
|
||||
const Grid& grid;
|
||||
Image_float dtm;
|
||||
std::vector<float> values;
|
||||
|
||||
public:
|
||||
/*!
|
||||
\brief Constructs the feature.
|
||||
|
||||
\param input point range.
|
||||
\param point_map property map to access the input points.
|
||||
\param grid precomputed `Planimetric_grid`.
|
||||
*/
|
||||
Height_above (const PointRange& input,
|
||||
PointMap point_map,
|
||||
const Grid& grid)
|
||||
: input(input), point_map(point_map), grid(grid)
|
||||
{
|
||||
this->set_name ("height_above");
|
||||
|
||||
dtm = Image_float(grid.width(),grid.height());
|
||||
|
||||
for (std::size_t j = 0; j < grid.height(); ++ j)
|
||||
for (std::size_t i = 0; i < grid.width(); ++ i)
|
||||
if (grid.has_points(i,j))
|
||||
{
|
||||
float z_max = -std::numeric_limits<float>::max();
|
||||
|
||||
typename Grid::iterator end = grid.indices_end(i,j);
|
||||
for (typename Grid::iterator it = grid.indices_begin(i,j); it != end; ++ it)
|
||||
{
|
||||
float z = float(get(point_map, *(input.begin()+(*it))).z());
|
||||
z_max = (std::max(z_max, z));
|
||||
}
|
||||
|
||||
dtm(i,j) = z_max;
|
||||
}
|
||||
|
||||
if (grid.width() * grid.height() > input.size())
|
||||
{
|
||||
values.resize (input.size(), 0.f);
|
||||
for (std::size_t i = 0; i < input.size(); ++ i)
|
||||
{
|
||||
std::size_t I = grid.x(i);
|
||||
std::size_t J = grid.y(i);
|
||||
values[i] = float(dtm(I,J) - get (point_map, *(input.begin() + i)).z());
|
||||
}
|
||||
dtm.free();
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
virtual float value (std::size_t pt_index)
|
||||
{
|
||||
if (values.empty())
|
||||
{
|
||||
std::size_t I = grid.x(pt_index);
|
||||
std::size_t J = grid.y(pt_index);
|
||||
return dtm(I,J) - float(get (point_map, *(input.begin() + pt_index)).z());
|
||||
}
|
||||
|
||||
return values[pt_index];
|
||||
}
|
||||
|
||||
/// \endcond
|
||||
};
|
||||
|
||||
} // namespace Feature
|
||||
|
||||
} // namespace Classification
|
||||
|
||||
|
||||
} // namespace CGAL
|
||||
|
||||
#endif // CGAL_CLASSIFICATION_FEATURE_HEIGHT_ABOVE_H
|
||||
|
|
@ -0,0 +1,142 @@
|
|||
// Copyright (c) 2012 INRIA Sophia-Antipolis (France).
|
||||
// Copyright (c) 2017 GeometryFactory Sarl (France).
|
||||
// All rights reserved.
|
||||
//
|
||||
// This file is part of CGAL (www.cgal.org).
|
||||
// You can redistribute it and/or modify it under the terms of the GNU
|
||||
// General Public License as published by the Free Software Foundation,
|
||||
// either version 3 of the License, or (at your option) any later version.
|
||||
//
|
||||
// Licensees holding a valid commercial license may use this file in
|
||||
// accordance with the commercial license agreement provided with the software.
|
||||
//
|
||||
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
|
||||
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
|
||||
//
|
||||
// $URL$
|
||||
// $Id$
|
||||
// SPDX-License-Identifier: GPL-3.0+
|
||||
//
|
||||
// Author(s) : Florent Lafarge, Simon Giraudot
|
||||
|
||||
#ifndef CGAL_CLASSIFICATION_FEATURE_HEIGHT_BELOW_H
|
||||
#define CGAL_CLASSIFICATION_FEATURE_HEIGHT_BELOW_H
|
||||
|
||||
#include <CGAL/license/Classification.h>
|
||||
|
||||
#include <vector>
|
||||
|
||||
#include <CGAL/Classification/Feature_base.h>
|
||||
#include <CGAL/Classification/compressed_float.h>
|
||||
#include <CGAL/Classification/Image.h>
|
||||
#include <CGAL/Classification/Planimetric_grid.h>
|
||||
|
||||
namespace CGAL {
|
||||
|
||||
namespace Classification {
|
||||
|
||||
namespace Feature {
|
||||
|
||||
/*!
|
||||
\ingroup PkgClassificationFeatures
|
||||
|
||||
%Feature based on local height distribution. This feature computes
|
||||
the distance between a point's height and the minimum height on
|
||||
the local cell of the planimetric grid.
|
||||
|
||||
Its default name is "height_below".
|
||||
|
||||
\tparam GeomTraits model of \cgal Kernel.
|
||||
\tparam PointRange model of `ConstRange`. Its iterator type
|
||||
is `RandomAccessIterator` and its value type is the key type of
|
||||
`PointMap`.
|
||||
\tparam PointMap model of `ReadablePropertyMap` whose key
|
||||
type is the value type of the iterator of `PointRange` and value type
|
||||
is `GeomTraits::Point_3`.
|
||||
|
||||
*/
|
||||
template <typename GeomTraits, typename PointRange, typename PointMap>
|
||||
class Height_below : public Feature_base
|
||||
{
|
||||
typedef typename GeomTraits::Iso_cuboid_3 Iso_cuboid_3;
|
||||
|
||||
typedef Image<float> Image_float;
|
||||
typedef Planimetric_grid<GeomTraits, PointRange, PointMap> Grid;
|
||||
|
||||
const PointRange& input;
|
||||
PointMap point_map;
|
||||
const Grid& grid;
|
||||
Image_float dtm;
|
||||
std::vector<float> values;
|
||||
|
||||
public:
|
||||
/*!
|
||||
\brief Constructs the feature.
|
||||
|
||||
\param input point range.
|
||||
\param point_map property map to access the input points.
|
||||
\param grid precomputed `Planimetric_grid`.
|
||||
*/
|
||||
Height_below (const PointRange& input,
|
||||
PointMap point_map,
|
||||
const Grid& grid)
|
||||
: input(input), point_map(point_map), grid(grid)
|
||||
{
|
||||
this->set_name ("height_below");
|
||||
|
||||
dtm = Image_float(grid.width(),grid.height());
|
||||
|
||||
for (std::size_t j = 0; j < grid.height(); ++ j)
|
||||
for (std::size_t i = 0; i < grid.width(); ++ i)
|
||||
if (grid.has_points(i,j))
|
||||
{
|
||||
float z_min = std::numeric_limits<float>::max();
|
||||
|
||||
typename Grid::iterator end = grid.indices_end(i,j);
|
||||
for (typename Grid::iterator it = grid.indices_begin(i,j); it != end; ++ it)
|
||||
{
|
||||
float z = float(get(point_map, *(input.begin()+(*it))).z());
|
||||
z_min = (std::min(z_min, z));
|
||||
}
|
||||
|
||||
dtm(i,j) = z_min;
|
||||
}
|
||||
|
||||
if (grid.width() * grid.height() > input.size())
|
||||
{
|
||||
values.resize (input.size(), 0.f);
|
||||
for (std::size_t i = 0; i < input.size(); ++ i)
|
||||
{
|
||||
std::size_t I = grid.x(i);
|
||||
std::size_t J = grid.y(i);
|
||||
values[i] = float(get (point_map, *(input.begin() + i)).z() - dtm(I,J));
|
||||
}
|
||||
dtm.free();
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
/// \cond SKIP_IN_MANUAL
|
||||
virtual float value (std::size_t pt_index)
|
||||
{
|
||||
if (values.empty())
|
||||
{
|
||||
std::size_t I = grid.x(pt_index);
|
||||
std::size_t J = grid.y(pt_index);
|
||||
return float(get (point_map, *(input.begin() + pt_index)).z() - dtm(I,J));
|
||||
}
|
||||
|
||||
return values[pt_index];
|
||||
}
|
||||
|
||||
/// \endcond
|
||||
};
|
||||
|
||||
} // namespace Feature
|
||||
|
||||
} // namespace Classification
|
||||
|
||||
|
||||
} // namespace CGAL
|
||||
|
||||
#endif // CGAL_CLASSIFICATION_FEATURE_HEIGHT_BELOW_H
|
||||
|
|
@ -0,0 +1,144 @@
// Copyright (c) 2012 INRIA Sophia-Antipolis (France).
// Copyright (c) 2017 GeometryFactory Sarl (France).
// All rights reserved.
//
// This file is part of CGAL (www.cgal.org).
// You can redistribute it and/or modify it under the terms of the GNU
// General Public License as published by the Free Software Foundation,
// either version 3 of the License, or (at your option) any later version.
//
// Licensees holding a valid commercial license may use this file in
// accordance with the commercial license agreement provided with the software.
//
// This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
// WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
//
// $URL$
// $Id$
// SPDX-License-Identifier: GPL-3.0+
//
// Author(s)     : Florent Lafarge, Simon Giraudot

#ifndef CGAL_CLASSIFICATION_FEATURE_VERTICAL_RANGE_H
#define CGAL_CLASSIFICATION_FEATURE_VERTICAL_RANGE_H

#include <CGAL/license/Classification.h>

#include <vector>

#include <CGAL/Classification/Feature_base.h>
#include <CGAL/Classification/compressed_float.h>
#include <CGAL/Classification/Image.h>
#include <CGAL/Classification/Planimetric_grid.h>

namespace CGAL {

namespace Classification {

namespace Feature {

/*!
  \ingroup PkgClassificationFeatures

  %Feature based on local height distribution. This feature computes
  the distance between the maximum and the minimum height on the
  local cell of the planimetric grid.

  Its default name is "vertical_range".

  \tparam GeomTraits model of \cgal Kernel.
  \tparam PointRange model of `ConstRange`. Its iterator type
  is `RandomAccessIterator` and its value type is the key type of
  `PointMap`.
  \tparam PointMap model of `ReadablePropertyMap` whose key
  type is the value type of the iterator of `PointRange` and value type
  is `GeomTraits::Point_3`.
*/
template <typename GeomTraits, typename PointRange, typename PointMap>
class Vertical_range : public Feature_base
{
  typedef typename GeomTraits::Iso_cuboid_3 Iso_cuboid_3;

  typedef Image<float> Image_float;
  typedef Planimetric_grid<GeomTraits, PointRange, PointMap> Grid;

  const PointRange& input;
  PointMap point_map;
  const Grid& grid;
  Image_float dtm;
  std::vector<float> values;

public:
  /*!
    \brief Constructs the feature.

    \param input point range.
    \param point_map property map to access the input points.
    \param grid precomputed `Planimetric_grid`.
  */
  Vertical_range (const PointRange& input,
                  PointMap point_map,
                  const Grid& grid)
    : input(input), point_map(point_map), grid(grid)
  {
    this->set_name ("vertical_range");

    dtm = Image_float(grid.width(),grid.height());

    for (std::size_t j = 0; j < grid.height(); ++ j)
      for (std::size_t i = 0; i < grid.width(); ++ i)
        if (grid.has_points(i,j))
        {
          float z_max = -std::numeric_limits<float>::max();
          float z_min = std::numeric_limits<float>::max();

          typename Grid::iterator end = grid.indices_end(i,j);
          for (typename Grid::iterator it = grid.indices_begin(i,j); it != end; ++ it)
          {
            float z = float(get(point_map, *(input.begin()+(*it))).z());
            z_max = (std::max(z_max, z));
            z_min = (std::min(z_min, z));
          }

          dtm(i,j) = z_max - z_min;
        }

    if (grid.width() * grid.height() > input.size())
    {
      values.resize (input.size(), 0.f);
      for (std::size_t i = 0; i < input.size(); ++ i)
      {
        std::size_t I = grid.x(i);
        std::size_t J = grid.y(i);
        values[i] = dtm(I,J);
      }
      dtm.free();
    }

  }

  /// \cond SKIP_IN_MANUAL
  virtual float value (std::size_t pt_index)
  {
    if (values.empty())
    {
      std::size_t I = grid.x(pt_index);
      std::size_t J = grid.y(pt_index);
      return dtm(I,J);
    }

    return values[pt_index];
  }
  /// \endcond
};

} // namespace Feature

} // namespace Classification

} // namespace CGAL

#endif // CGAL_CLASSIFICATION_FEATURE_VERTICAL_RANGE_H
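Both new features share the constructor signature of the other grid-based features (point range, point map, planimetric grid). As a rough usage sketch outside of the generator (the names `Kernel`, `Point_range`, `Point_map`, `pts`, `pmap` and `grid` are placeholders, not part of this patch):

    typedef CGAL::Classification::Feature::Height_below<Kernel, Point_range, Point_map> Height_below;
    typedef CGAL::Classification::Feature::Vertical_range<Kernel, Point_range, Point_map> Vertical_range;

    CGAL::Classification::Feature_set features;
    features.add<Height_below> (pts, pmap, grid);   // distance to the lowest point of the grid cell
    features.add<Vertical_range> (pts, pmap, grid); // spread between lowest and highest point of the cell

The `Feature_set::add()` call follows the existing \cgal Classification API; the `Planimetric_grid` must stay alive as long as the features are used.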
@ -41,6 +41,8 @@ class Image
  std::size_t m_width;
  std::size_t m_height;
  std::size_t m_depth;

  boost::shared_ptr<Vector> m_raw;
  boost::shared_ptr<Map> m_sparse;
  Type m_default;

@ -52,18 +54,19 @@ class Image
public:

  Image () : m_width(0), m_height(0), m_raw (NULL)
  Image () : m_width(0), m_height(0), m_depth(0), m_raw (NULL)
  {
  }

  Image (std::size_t width, std::size_t height)
    : m_width (width),
      m_height (height)
  Image (std::size_t width, std::size_t height, std::size_t depth = 1)
    : m_width (width)
    , m_height (height)
    , m_depth (depth)
  {
    if (m_width * m_height > 0)
    if (m_width * m_height * m_depth > 0)
    {
      if (m_width * m_height < CGAL_CLASSIFICATION_IMAGE_SIZE_LIMIT)
        m_raw = boost::shared_ptr<Vector> (new Vector(m_width * m_height));
      if (m_width * m_height * m_depth < CGAL_CLASSIFICATION_IMAGE_SIZE_LIMIT)
        m_raw = boost::shared_ptr<Vector> (new Vector(m_width * m_height * m_depth));
      else
        m_sparse = boost::shared_ptr<Map> (new Map());
    }

@ -85,33 +88,41 @@ public:
    m_sparse = other.m_sparse;
    m_width = other.width();
    m_height = other.height();
    m_depth = other.depth();
    return *this;
  }

  std::size_t width() const { return m_width; }
  std::size_t height() const { return m_height; }
  std::size_t depth() const { return m_depth; }

  Type& operator() (const std::size_t& x, const std::size_t& y)
  inline std::size_t coord (const std::size_t& x, const std::size_t& y, const std::size_t& z) const
  {
    return z + (m_depth * y) + (m_depth * m_height * x);
  }

  Type& operator() (const std::size_t& x, const std::size_t& y, const std::size_t& z = 0)
  {
    if (m_raw == boost::shared_ptr<Vector>()) // sparse case
    {
      typename Map::iterator inserted = m_sparse->insert (std::make_pair (x * m_height + y, Type())).first;
      typename Map::iterator inserted = m_sparse->insert
        (std::make_pair (coord(x,y,z), Type())).first;
      return inserted->second;
    }

    return (*m_raw)[x * m_height + y];
    return (*m_raw)[coord(x,y,z)];
  }
  const Type& operator() (const std::size_t& x, const std::size_t& y) const
  const Type& operator() (const std::size_t& x, const std::size_t& y, const std::size_t& z = 0) const
  {
    if (m_raw == boost::shared_ptr<Vector>()) // sparse case
    {
      typename Map::iterator found = m_sparse->find (x * m_height + y);
      typename Map::iterator found = m_sparse->find (coord(x,y,z));
      if (found != m_sparse->end())
        return found->second;
      return m_default;
    }

    return (*m_raw)[x * m_height + y];
    return (*m_raw)[coord(x,y,z)];
  }
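The new `coord()` helper linearizes a 3D index with `z` varying fastest, following the formula `z + depth * y + depth * height * x`. A small worked check of the added depth dimension (not part of the patch; dense storage assumed):

    // Image of width 4, height 3, depth 2:
    // coord(0,0,0) = 0
    // coord(0,0,1) = 1                        (next z slot)
    // coord(0,1,0) = 0 + 2*1          = 2     (next y row)
    // coord(1,2,1) = 1 + 2*2 + 2*3*1  = 11
    CGAL::Classification::Image<float> img (4, 3, 2);
    img (1, 2, 1) = 42.f;  // lands at raw index 11 in the dense vector
    float v = img (1, 2);  // z defaults to 0, so this reads raw index 10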
@ -50,6 +50,10 @@ public:
  Label (std::string name) : m_name (name) { }

  const std::string& name() const { return m_name; }

  /// \cond SKIP_IN_MANUAL
  void set_name (const std::string& name) { m_name = name; }
  /// \endcond
};

#ifdef DOXYGEN_RUNNING
@ -36,6 +36,9 @@
#include <CGAL/Classification/Feature/Verticality.h>
#include <CGAL/Classification/Feature/Eigenvalue.h>
#include <CGAL/Classification/Feature/Color_channel.h>
#include <CGAL/Classification/Feature/Height_below.h>
#include <CGAL/Classification/Feature/Height_above.h>
#include <CGAL/Classification/Feature/Vertical_range.h>
#include <CGAL/Classification/internal/verbosity.h>

#include <CGAL/bounding_box.h>

@ -65,15 +68,20 @@ namespace Classification {
  \brief Generates a set of generic features for surface mesh
  classification.

  This class takes care of computing all necessary data structures and
  of generating a set of generic features at multiple scales to
  increase the reliability of the classification.
  This class takes care of computing and storing all necessary data
  structures and of generating a set of generic features at multiple
  scales to increase the reliability of the classification.

  A `PointMap` is required: this map should associate each face of the
  mesh to a representative point (for example, the center of mass of
  the face). It is used to generate point set features by considering
  the mesh as a point set.

  \warning The generated features use data structures that are stored
  inside the generator. For this reason, the generator should be
  instantiated _within the same scope_ as the feature set and should
  not be deleted before the feature set.

  \tparam GeomTraits model of \cgal Kernel.
  \tparam FaceListGraph model of `FaceListGraph`.
  \tparam PointMap model of `ReadablePropertyMap` whose key type is

@ -134,6 +142,12 @@ public:
  <Face_range, PointMap> Distance_to_plane;
  typedef Classification::Feature::Elevation
  <GeomTraits, Face_range, PointMap> Elevation;
  typedef Classification::Feature::Height_below
  <GeomTraits, Face_range, PointMap> Height_below;
  typedef Classification::Feature::Height_above
  <GeomTraits, Face_range, PointMap> Height_above;
  typedef Classification::Feature::Vertical_range
  <GeomTraits, Face_range, PointMap> Vertical_range;
  typedef Classification::Feature::Vertical_dispersion
  <GeomTraits, Face_range, PointMap> Dispersion;
  typedef Classification::Feature::Verticality

@ -212,8 +226,8 @@ private:
  }

  float grid_resolution() const { return voxel_size; }
  float radius_neighbors() const { return voxel_size * 5; }
  float radius_dtm() const { return voxel_size * 100; }
  float radius_neighbors() const { return voxel_size * 3; }
  float radius_dtm() const { return voxel_size * 10; }

};

@ -327,7 +341,10 @@ public:

  - `CGAL::Classification::Feature::Distance_to_plane`
  - `CGAL::Classification::Feature::Elevation`
  - `CGAL::Classification::Feature::Height_above`
  - `CGAL::Classification::Feature::Height_below`
  - `CGAL::Classification::Feature::Vertical_dispersion`
  - `CGAL::Classification::Feature::Vertical_range`

  \param features the feature set where the features are instantiated.
*/

@ -339,6 +356,12 @@ public:
  features.add_with_scale_id<Dispersion> (i, m_range, m_point_map, grid(i), radius_neighbors(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Elevation> (i, m_range, m_point_map, grid(i), radius_dtm(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Height_below> (i, m_range, m_point_map, grid(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Height_above> (i, m_range, m_point_map, grid(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Vertical_range> (i, m_range, m_point_map, grid(i));
}

/// @}
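The scope warning above is worth illustrating: the generated features only keep references to the grids and neighborhoods owned by the generator, so the generator must outlive the feature set. A minimal sketch (the `Mesh_feature_generator` typedef and its constructor arguments are assumed from this header, not verified here):

    CGAL::Classification::Feature_set features;
    Mesh_feature_generator generator (mesh, face_point_map, 5);  // assumed arguments: graph, point map, nb_scales
    generator.generate_point_based_features (features);
    // ... train and classify while 'generator' is still alive ...

    // Problematic variant: destroying the generator in an inner scope leaves
    // 'features' with dangling references to the generator's data structures.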
@ -37,16 +37,18 @@ namespace CGAL {

namespace Classification {

/*!
  \ingroup PkgClassificationClassifiers
namespace OpenCV {

  \brief %Classifier based on the OpenCV version of random forest algorithm.
/*!
  \ingroup PkgClassificationClassifiersOpenCV

  \brief %Classifier based on the OpenCV version of the random forest algorithm.

  \note This class requires the \ref thirdpartyOpenCV library.

  \cgalModels `CGAL::Classification::Classifier`
*/
class OpenCV_random_forest_classifier
class Random_forest_classifier
{
  const Label_set& m_labels;
  const Feature_set& m_features;

@ -68,7 +70,7 @@ public:
  /// @{

  /*!
    \brief Instantiate the classifier using the sets of `labels` and `features`.
    \brief Instantiates the classifier using the sets of `labels` and `features`.

    Parameters documentation is copy-pasted from [the official documentation of OpenCV](http://docs.opencv.org/2.4/modules/ml/doc/random_trees.html). For more details on this method, please refer to it.

@ -80,13 +82,13 @@ public:
    \param max_number_of_trees_in_the_forest The maximum number of trees in the forest (surprise, surprise). Typically the more trees you have the better the accuracy. However, the improvement in accuracy generally diminishes and asymptotes pass a certain number of trees. Also to keep in mind, the number of tree increases the prediction time linearly.
    \param forest_accuracy Sufficient accuracy (OOB error).
  */
  OpenCV_random_forest_classifier (const Label_set& labels,
                                   const Feature_set& features,
                                   int max_depth = 20,
                                   int min_sample_count = 5,
                                   int max_categories = 15,
                                   int max_number_of_trees_in_the_forest = 100,
                                   float forest_accuracy = 0.01f)
  Random_forest_classifier (const Label_set& labels,
                            const Feature_set& features,
                            int max_depth = 20,
                            int min_sample_count = 5,
                            int max_categories = 15,
                            int max_number_of_trees_in_the_forest = 100,
                            float forest_accuracy = 0.01f)
    : m_labels (labels), m_features (features),
      m_max_depth (max_depth), m_min_sample_count (min_sample_count),
      m_max_categories (max_categories),

@ -98,7 +100,7 @@ public:
  { }

  /// \cond SKIP_IN_MANUAL
  ~OpenCV_random_forest_classifier ()
  ~Random_forest_classifier ()
  {
#if (CV_MAJOR_VERSION < 3)
    if (rtree != NULL)

@ -298,6 +300,13 @@ public:

}

/// \cond SKIP_IN_MANUAL
// Backward compatibility
typedef OpenCV::Random_forest_classifier OpenCV_random_forest_classifier;
/// \endcond

}

}

#endif // CGAL_CLASSIFICATION_OPENCV_RANDOM_FOREST_CLASSIFIER_H
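Apart from the new `OpenCV` namespace, the classifier is used as before, and existing code keeps compiling through the backward-compatibility typedef above. A hedged construction sketch (label set, feature set, ground truth and point range assumed to be set up as in the package's examples; the `train()` call is assumed unchanged by the rename):

    CGAL::Classification::OpenCV::Random_forest_classifier classifier
      (labels, features,
       20,      // max_depth
       5,       // min_sample_count
       15,      // max_categories
       100,     // max_number_of_trees_in_the_forest
       0.01f);  // forest_accuracy
    classifier.train (ground_truth);

    std::vector<std::size_t> label_indices (points.size());
    CGAL::Classification::classify<CGAL::Sequential_tag>
      (points, labels, classifier, label_indices);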
@ -35,6 +35,9 @@
#include <CGAL/Classification/Feature/Verticality.h>
#include <CGAL/Classification/Feature/Eigenvalue.h>
#include <CGAL/Classification/Feature/Color_channel.h>
#include <CGAL/Classification/Feature/Height_below.h>
#include <CGAL/Classification/Feature/Height_above.h>
#include <CGAL/Classification/Feature/Vertical_range.h>

// Experimental feature, not used officially
#ifdef CGAL_CLASSIFICATION_USE_GRADIENT_OF_FEATURE

@ -67,9 +70,14 @@ namespace Classification {
  \brief Generates a set of generic features for point set
  classification.

  This class takes care of computing all necessary data structures and
  of generating a set of generic features at multiple scales to
  increase the reliability of the classification.
  This class takes care of computing and storing all necessary data
  structures and of generating a set of generic features at multiple
  scales to increase the reliability of the classification.

  \warning The generated features use data structures that are stored
  inside the generator. For this reason, the generator should be
  instantiated _within the same scope_ as the feature set and should
  not be deleted before the feature set.

  \tparam GeomTraits model of \cgal Kernel.
  \tparam PointRange model of `ConstRange`. Its iterator type is

@ -128,6 +136,12 @@ public:
  <PointRange, PointMap> Distance_to_plane;
  typedef Classification::Feature::Elevation
  <GeomTraits, PointRange, PointMap> Elevation;
  typedef Classification::Feature::Height_below
  <GeomTraits, PointRange, PointMap> Height_below;
  typedef Classification::Feature::Height_above
  <GeomTraits, PointRange, PointMap> Height_above;
  typedef Classification::Feature::Vertical_range
  <GeomTraits, PointRange, PointMap> Vertical_range;
  typedef Classification::Feature::Vertical_dispersion
  <GeomTraits, PointRange, PointMap> Dispersion;
  typedef Classification::Feature::Verticality

@ -166,7 +180,7 @@ private:
    neighborhood = new Neighborhood (input, point_map, voxel_size);
    t.stop();

    if (voxel_size < 0.)
    if (lower_grid == NULL)
      CGAL_CLASSIFICATION_CERR << "Neighborhood computed in " << t.time() << " second(s)" << std::endl;
    else
      CGAL_CLASSIFICATION_CERR << "Neighborhood with voxel size " << voxel_size

@ -216,8 +230,8 @@ private:
  }

  float grid_resolution() const { return voxel_size; }
  float radius_neighbors() const { return voxel_size * 5; }
  float radius_dtm() const { return voxel_size * 100; }
  float radius_neighbors() const { return voxel_size * 3; }
  float radius_dtm() const { return voxel_size * 10; }

};

@ -365,7 +379,10 @@ public:
  - `CGAL::Classification::Feature::Eigenvalue` with indices 0, 1 and 2
  - `CGAL::Classification::Feature::Distance_to_plane`
  - `CGAL::Classification::Feature::Elevation`
  - `CGAL::Classification::Feature::Height_above`
  - `CGAL::Classification::Feature::Height_below`
  - `CGAL::Classification::Feature::Vertical_dispersion`
  - `CGAL::Classification::Feature::Vertical_range`
  - The version of `CGAL::Classification::Feature::Verticality` based on eigenvalues

  \param features the feature set where the features are instantiated.

@ -381,6 +398,12 @@ public:
  features.add_with_scale_id<Dispersion> (i, m_input, m_point_map, grid(i), radius_neighbors(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Elevation> (i, m_input, m_point_map, grid(i), radius_dtm(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Height_below> (i, m_input, m_point_map, grid(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Height_above> (i, m_input, m_point_map, grid(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Vertical_range> (i, m_input, m_point_map, grid(i));
  for (std::size_t i = 0; i < m_scales.size(); ++ i)
    features.add_with_scale_id<Verticality> (i, m_input, eigen(i));
}
@ -173,7 +173,7 @@ public:

  /*!

    \brief Instantiate the classifier using the sets of `labels` and `features`.
    \brief Instantiates the classifier using the sets of `labels` and `features`.

    \note If the label set or the feature set are modified after
    instantiating this object (addition or removal of a label and/or of

File diff suppressed because it is too large
@ -87,6 +87,53 @@ namespace internal {

  };

  template <typename Classifier, typename LabelIndexRange, typename ProbabilitiesRanges>
  class Classify_detailed_output_functor
  {
    const Label_set& m_labels;
    const Classifier& m_classifier;
    LabelIndexRange& m_out;
    ProbabilitiesRanges& m_prob;

  public:

    Classify_detailed_output_functor (const Label_set& labels,
                                      const Classifier& classifier,
                                      LabelIndexRange& out,
                                      ProbabilitiesRanges& prob)
      : m_labels (labels), m_classifier (classifier), m_out (out), m_prob (prob)
    { }

#ifdef CGAL_LINKED_WITH_TBB
    void operator()(const tbb::blocked_range<std::size_t>& r) const
    {
      for (std::size_t s = r.begin(); s != r.end(); ++ s)
        apply(s);
    }
#endif // CGAL_LINKED_WITH_TBB

    inline void apply (std::size_t s) const
    {
      std::size_t nb_class_best=0;
      std::vector<float> values;
      m_classifier (s, values);

      float val_class_best = 0.f;
      for(std::size_t k = 0; k < m_labels.size(); ++ k)
      {
        m_prob[k][s] = values[k];
        if(val_class_best < values[k])
        {
          val_class_best = values[k];
          nb_class_best = k;
        }
      }

      m_out[s] = static_cast<typename LabelIndexRange::iterator::value_type>(nb_class_best);
    }

  };

  template <typename Classifier>
  class Classify_functor_local_smoothing_preprocessing
  {

@ -323,8 +370,6 @@ namespace internal {
                 const Classifier& classifier,
                 LabelIndexRange& output)
  {
    output.resize(input.size());

    internal::Classify_functor<Classifier, LabelIndexRange>
      f (labels, classifier, output);

@ -344,6 +389,39 @@ namespace internal {
    }
  }

  /// \cond SKIP_IN_MANUAL
  // variant to get a detailed output (not documented yet)
  template <typename ConcurrencyTag,
            typename ItemRange,
            typename Classifier,
            typename LabelIndexRange,
            typename ProbabilitiesRanges>
  void classify (const ItemRange& input,
                 const Label_set& labels,
                 const Classifier& classifier,
                 LabelIndexRange& output,
                 ProbabilitiesRanges& probabilities)
  {
    internal::Classify_detailed_output_functor<Classifier, LabelIndexRange, ProbabilitiesRanges>
      f (labels, classifier, output, probabilities);

#ifndef CGAL_LINKED_WITH_TBB
    CGAL_static_assertion_msg (!(boost::is_convertible<ConcurrencyTag, Parallel_tag>::value),
                               "Parallel_tag is enabled but TBB is unavailable.");
#else
    if (boost::is_convertible<ConcurrencyTag,Parallel_tag>::value)
    {
      tbb::parallel_for(tbb::blocked_range<size_t>(0, input.size ()), f);
    }
    else
#endif
    {
      for (std::size_t i = 0; i < input.size(); ++ i)
        f.apply(i);
    }
  }
  /// \endcond

  /*!
    \ingroup PkgClassificationMain

@ -388,8 +466,6 @@ namespace internal {
                 const NeighborQuery& neighbor_query,
                 LabelIndexRange& output)
  {
    output.resize(input.size());

    std::vector<std::vector<float> > values
      (labels.size(), std::vector<float> (input.size(), -1.));
    internal::Classify_functor_local_smoothing_preprocessing<Classifier>
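The new overload above additionally exposes the raw per-label values computed by the classifier. A call sketch matching its signature (the overload is marked as undocumented in this patch, so it may change; the output ranges are pre-sized by the caller, consistent with the removal of `output.resize()` and with the updated tests):

    std::vector<std::size_t> result (points.size());
    std::vector<std::vector<float> > probabilities
      (labels.size(), std::vector<float> (points.size(), 0.f));

    CGAL::Classification::classify<CGAL::Parallel_tag>   // use Sequential_tag if TBB is not linked
      (points, labels, classifier, result, probabilities);

    // probabilities[k][i] is the classifier value of label k at item i;
    // result[i] is the index of the best label for item i.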
@ -77,7 +77,7 @@ int main (int, char**)
                  color_map, echo_map);

  assert (generator.number_of_scales() == 5);
  assert (features.size() == 44);
  assert (features.size() == 59);

  Label_set labels;

@ -28,7 +28,7 @@ typedef Classification::Feature_handle
typedef Classification::Label_set Label_set;
typedef Classification::Feature_set Feature_set;

typedef Classification::ETHZ_random_forest_classifier Classifier;
typedef Classification::ETHZ::Random_forest_classifier Classifier;

typedef Classification::Planimetric_grid<Kernel, Point_set, Point_map> Planimetric_grid;
typedef Classification::Point_set_neighborhood<Kernel, Point_set, Point_map> Neighborhood;

@ -87,13 +87,18 @@ int main (int, char**)
  std::ifstream inf ("output_config.gz", std::ios::binary);
  classifier2.load_configuration(inf);

  std::vector<std::size_t> label_indices;
  std::vector<std::size_t> label_indices_2;
  Classifier classifier3 (classifier, features);

  std::vector<std::size_t> label_indices (points.size());
  std::vector<std::size_t> label_indices_2 (points.size());
  std::vector<std::size_t> label_indices_3 (points.size());

  Classification::classify<CGAL::Sequential_tag> (points, labels, classifier, label_indices);
  Classification::classify<CGAL::Sequential_tag> (points, labels, classifier2, label_indices_2);
  Classification::classify<CGAL::Sequential_tag> (points, labels, classifier3, label_indices_3);

  assert (label_indices == label_indices_2);

  assert (label_indices == label_indices_3);

  return EXIT_SUCCESS;
}

@ -91,7 +91,7 @@ int main (int, char**)
#endif

  assert (generator.number_of_scales() == 5);
  assert (features.size() == 44);
  assert (features.size() == 59);

  Label_set labels;
@ -593,6 +593,22 @@ In \cgal, \sc{OpenCV} is used by the \ref PkgClassificationRef package.

The \sc{OpenCV} web site is <A HREF="http://opencv.org/">`http://opencv.org/`</A>.

\subsection thirdpartyTensorFlow TensorFlow

\sc{TensorFlow} is a library designed for machine learning and deep learning.

In \cgal, the C++ API of \sc{TensorFlow} is used by the \ref
PkgClassificationRef package for neural networks. The C++ API can be
compiled using CMake: it is distributed as part of the official
package and is located in `tensorflow/contrib/cmake`. Be sure to
enable and compile the following targets:

- `tensorflow_BUILD_ALL_KERNELS`
- `tensorflow_BUILD_PYTHON_BINDINGS`
- `tensorflow_BUILD_SHARED_LIB`.

The \sc{TensorFlow} web site is <A HREF="https://www.tensorflow.org/">`https://www.tensorflow.org/`</A>.

\subsection thirdpartyMETIS METIS

\sc{METIS} is a library developed by the <A HREF="http://glaros.dtc.umn.edu/gkhome/">Karypis Lab</A>
@ -51,6 +51,24 @@ Release date: March 2019
  original behavior (using one unique and automatically selected seed) is
  kept if this parameter is not used.

### Classification

-   Added a new experimental classifier
    `TensorFlow::Neural_network_classifier`.

-   For uniformity, `ETHZ_random_forest_classifier` is renamed
    `ETHZ::Random_forest_classifier` and `OpenCV_random_forest_classifier`
    is renamed `OpenCV::Random_forest_classifier`.

-   The training algorithm of `ETHZ::Random_forest_classifier` was
    parallelized.

-   Added a constructor to copy an `ETHZ::Random_forest_classifier` using a
    different data set as input.

-   Added 3 new geometric features, `Height_above`, `Height_below` and
    `Vertical_range`.

### 3D Fast Intersection and Distance Computation

-   The primitives `AABB_face_graph_triangle_primitive` and
@ -0,0 +1,25 @@
include(FindPackageHandleStandardArgs)

unset(TENSORFLOW_FOUND)

find_path(TensorFlow_INCLUDE_DIR
        NAMES
        tensorflow/core
        tensorflow/cc
        third_party
        HINTS
        /usr/include/
        /usr/local/include/)

find_library(TensorFlow_LIBRARY NAMES tensorflow_all
        HINTS
        /usr/lib
        /usr/local/lib)

find_package_handle_standard_args(TensorFlow DEFAULT_MSG TensorFlow_INCLUDE_DIR TensorFlow_LIBRARY)

if(TENSORFLOW_FOUND)
    set(TensorFlow_LIBRARIES ${TensorFlow_LIBRARY})
    set(TensorFlow_INCLUDE_DIRS ${TensorFlow_INCLUDE_DIR})
endif()
@ -1,4 +1,5 @@
set(list_of_whitelisted_headers_txt [=[
  CGAL/Classification/TensorFlow/Neural_network_classifier.h
  CGAL/Linear_cell_complex_constructors.h
  CGAL/CGAL_Ipelet_base.h
  CGAL/IO/read_las_points.h
@ -359,6 +359,11 @@ add_executable ( CGAL_Mesh_3 Mesh_3.cpp )
add_dependencies(CGAL_Mesh_3 Mesh_3)
target_link_libraries( CGAL_Mesh_3 PRIVATE polyhedron_demo )
add_to_cached_list( CGAL_EXECUTABLE_TARGETS CGAL_Mesh_3 )

add_executable ( CGAL_Classification Classification.cpp )
add_dependencies(CGAL_Classification Classification)
target_link_libraries( CGAL_Classification PRIVATE polyhedron_demo )
add_to_cached_list( CGAL_EXECUTABLE_TARGETS CGAL_Classification )
#
# Exporting
#
@ -0,0 +1,29 @@
#include "Polyhedron_demo.h"
#include <clocale>
#include <CGAL/Qt/resources.h>
#include <QSurfaceFormat>

/*!
 * \brief Defines the entry point of the demo.
 * Creates the application and sets a main window.
 */
int main(int argc, char **argv)
{
  QSurfaceFormat fmt;

  fmt.setVersion(4, 3);
  fmt.setRenderableType(QSurfaceFormat::OpenGL);
  fmt.setProfile(QSurfaceFormat::CoreProfile);
  fmt.setOption(QSurfaceFormat::DebugContext);
  QSurfaceFormat::setDefaultFormat(fmt);
  QStringList keywords;
  keywords << "Classification";
  Polyhedron_demo app(argc, argv,
                      "Classification demo",
                      "CGAL Classification Demo",
                      keywords);
  // We set the locale to avoid any trouble with VTK
  std::setlocale(LC_ALL, "C");
  return app.try_exec();
}
@ -20,7 +20,7 @@ if(EIGEN3_FOUND)

  if (Boost_SERIALIZATION_FOUND AND Boost_IOSTREAMS_FOUND AND (NOT WIN32 OR Boost_ZLIB_FOUND))
    qt5_wrap_ui( classificationUI_FILES Classification_widget.ui Classification_advanced_widget.ui )
    polyhedron_demo_plugin(classification_plugin Classification_plugin Point_set_item_classification Cluster_classification Surface_mesh_item_classification ${classificationUI_FILES})
    polyhedron_demo_plugin(classification_plugin Classification_plugin Point_set_item_classification Cluster_classification Surface_mesh_item_classification ${classificationUI_FILES} KEYWORDS Classification)

    set(classification_linked_libraries ${classification_linked_libraries}
      ${Boost_SERIALIZATION_LIBRARY}

@ -37,9 +37,22 @@ if(EIGEN3_FOUND)
    else()
      message(STATUS "NOTICE: OpenCV was not found. OpenCV random forest predicate for classification won't be available.")
    endif()

    find_package(TensorFlow QUIET)
    if (TensorFlow_FOUND)
      message(STATUS "Found TensorFlow")
      set(classification_linked_libraries ${classification_linked_libraries}
        ${TensorFlow_LIBRARY})
      set(classification_compile_definitions ${classification_compile_definitions}
        "-DCGAL_LINKED_WITH_TENSORFLOW")
      include_directories( ${TensorFlow_INCLUDE_DIR} )
    else()
      message(STATUS "NOTICE: TensorFlow not found, Neural Network predicate for classification won't be available.")
    endif()

    target_link_libraries(classification_plugin PUBLIC scene_points_with_normal_item
      scene_polylines_item scene_polygon_soup_item scene_surface_mesh_item scene_selection_item scene_color_ramp ${classification_linked_libraries})
    add_dependencies(classification_plugin point_set_selection_plugin selection_plugin)
    target_compile_definitions(classification_plugin PUBLIC ${classification_compile_definitions})
  else()
    message(STATUS "NOTICE: Boost Serialization or IO Streams or ZLIB not found. Classification plugin won't be available.")
@ -33,6 +33,7 @@
|
|||
#include <QMainWindow>
|
||||
#include <QApplication>
|
||||
#include <QCheckBox>
|
||||
#include <QRadioButton>
|
||||
#include <QInputDialog>
|
||||
#include <QMessageBox>
|
||||
#include <QSpinBox>
|
||||
|
|
@ -44,6 +45,11 @@
|
|||
#include <boost/graph/adjacency_list.hpp>
|
||||
#include <CGAL/boost/graph/split_graph_into_polylines.h>
|
||||
|
||||
#define CGAL_CLASSIFICATION_ETHZ_ID "Random Forest (ETHZ)"
|
||||
#define CGAL_CLASSIFICATION_TENSORFLOW_ID "Neural Network (TensorFlow)"
|
||||
#define CGAL_CLASSIFICATION_OPENCV_ID "Random Forest (OpenCV)"
|
||||
#define CGAL_CLASSIFICATION_SOWF_ID "Sum of Weighted Features"
|
||||
|
||||
using namespace CGAL::Three;
|
||||
|
||||
class Polyhedron_demo_classification_plugin :
|
||||
|
|
@ -121,6 +127,7 @@ class Polyhedron_demo_classification_plugin :
|
|||
color_button->setStyleSheet(s);
|
||||
color_button->update();
|
||||
}
|
||||
|
||||
};
|
||||
|
||||
|
||||
|
|
@ -186,82 +193,87 @@ public:
|
|||
addDockWidget(dock_widget);
|
||||
addDockWidget(dock_widget_adv);
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
ui_widget.classifier->addItem (tr("Random Forest (OpenCV %1.%2)")
|
||||
.arg(CV_MAJOR_VERSION)
|
||||
.arg(CV_MINOR_VERSION));
|
||||
#endif
|
||||
|
||||
color_att = QColor (75, 75, 77);
|
||||
|
||||
ui_widget.menu->setMenu (new QMenu("Classification Menu", ui_widget.menu));
|
||||
|
||||
connect(ui_widget.classifier, SIGNAL(currentIndexChanged(int)), this,
|
||||
SLOT(on_classifier_changed(int)));
|
||||
|
||||
QAction* compute_features = ui_widget.menu->menu()->addAction ("Compute features");
|
||||
QAction* compute_features = ui_widget.features_menu->addAction ("Compute features...");
|
||||
connect(compute_features, SIGNAL(triggered()), this,
|
||||
SLOT(on_compute_features_button_clicked()));
|
||||
|
||||
ui_widget.menu->menu()->addSection ("Training");
|
||||
action_statistics = ui_widget.features_menu->addAction ("Show feature statistics");
|
||||
connect(action_statistics, SIGNAL(triggered()), this,
|
||||
SLOT(on_statistics_clicked()));
|
||||
|
||||
action_train = ui_widget.menu->menu()->addAction ("Train classifier");
|
||||
action_train->setShortcut(Qt::SHIFT | Qt::Key_T);
|
||||
connect(action_train, SIGNAL(triggered()), this,
|
||||
SLOT(on_train_clicked()));
|
||||
|
||||
action_reset_local = ui_widget.menu->menu()->addAction ("Reset training set of selection");
|
||||
action_reset_local = ui_widget.training_menu->addAction ("Reset training set of selection");
|
||||
connect(action_reset_local, SIGNAL(triggered()), this,
|
||||
SLOT(on_reset_training_set_of_selection_clicked()));
|
||||
|
||||
action_reset = ui_widget.menu->menu()->addAction ("Reset all training sets");
|
||||
action_reset = ui_widget.training_menu->addAction ("Reset all training sets");
|
||||
connect(action_reset, SIGNAL(triggered()), this,
|
||||
SLOT(on_reset_training_sets_clicked()));
|
||||
|
||||
action_random_region = ui_widget.menu->menu()->addAction ("Select random region");
|
||||
action_random_region = ui_widget.training_menu->addAction ("Select random region");
|
||||
action_random_region->setShortcut(Qt::SHIFT | Qt::Key_S);
|
||||
connect(action_random_region, SIGNAL(triggered()), this,
|
||||
SLOT(on_select_random_region_clicked()));
|
||||
|
||||
action_validate = ui_widget.menu->menu()->addAction ("Validate labels of current selection as training sets");
|
||||
action_validate = ui_widget.training_menu->addAction ("Validate labels of current selection as training sets");
|
||||
connect(action_validate, SIGNAL(triggered()), this,
|
||||
SLOT(on_validate_selection_clicked()));
|
||||
|
||||
action_save_config = ui_widget.menu->menu()->addAction ("Save classifier's current configuration");
|
||||
action_load_config = ui_widget.menu->menu()->addAction ("Load configuration for classifier");
|
||||
classifier = ui_widget.classifier_menu->addSection (CGAL_CLASSIFICATION_ETHZ_ID);
|
||||
|
||||
action_train = ui_widget.classifier_menu->addAction ("Train...");
|
||||
action_train->setShortcut(Qt::SHIFT | Qt::Key_T);
|
||||
connect(action_train, SIGNAL(triggered()), this,
|
||||
SLOT(on_train_clicked()));
|
||||
|
||||
ui_widget.classifier_menu->addSeparator();
|
||||
|
||||
action_run = ui_widget.classifier_menu->addAction ("Classify");
|
||||
connect(action_run, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_button_clicked()));
|
||||
|
||||
action_run_smoothed = ui_widget.classifier_menu->addAction ("Classify with local smoothing...");
|
||||
connect(action_run_smoothed, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_smoothed_button_clicked()));
|
||||
|
||||
action_run_graphcut = ui_widget.classifier_menu->addAction ("Classify with Graph Cut...");
|
||||
connect(action_run_graphcut, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_graphcut_button_clicked()));
|
||||
|
||||
ui_widget.classifier_menu->addSeparator();
|
||||
|
||||
action_save_config = ui_widget.classifier_menu->addAction ("Save current configuration...");
|
||||
action_load_config = ui_widget.classifier_menu->addAction ("Load configuration...");
|
||||
connect(action_save_config, SIGNAL(triggered()), this,
|
||||
SLOT(on_save_config_button_clicked()));
|
||||
connect(action_load_config, SIGNAL(triggered()), this,
|
||||
SLOT(on_load_config_button_clicked()));
|
||||
|
||||
ui_widget.menu->menu()->addSection ("Algorithms");
|
||||
|
||||
action_run = ui_widget.menu->menu()->addAction ("Classification");
|
||||
connect(action_run, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_button_clicked()));
|
||||
|
||||
action_run_smoothed = ui_widget.menu->menu()->addAction ("Classification with local smoothing");
|
||||
connect(action_run_smoothed, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_smoothed_button_clicked()));
|
||||
|
||||
action_run_graphcut = ui_widget.menu->menu()->addAction ("Classification with Graph Cut");
|
||||
connect(action_run_graphcut, SIGNAL(triggered()), this,
|
||||
SLOT(on_run_graphcut_button_clicked()));
|
||||
|
||||
ui_widget.menu->menu()->addSeparator();
|
||||
|
||||
QAction* close = ui_widget.menu->menu()->addAction ("Close");
|
||||
connect(close, SIGNAL(triggered()), this,
|
||||
SLOT(ask_for_closing()));
|
||||
|
||||
ui_widget.classifier_menu->addSeparator();
|
||||
|
||||
QAction* switch_classifier = ui_widget.classifier_menu->addAction ("Switch to another classifier...");
|
||||
connect(switch_classifier, SIGNAL(triggered()), this,
|
||||
SLOT(on_switch_classifier_clicked()));
|
||||
|
||||
connect(ui_widget.display, SIGNAL(currentIndexChanged(int)), this,
|
||||
SLOT(on_display_button_clicked(int)));
|
||||
|
||||
connect(ui_widget.minDisplay, SIGNAL(released()), this,
|
||||
SLOT(on_min_display_button_clicked()));
|
||||
connect(ui_widget.maxDisplay, SIGNAL(released()), this,
|
||||
SLOT(on_max_display_button_clicked()));
|
||||
|
||||
connect(ui_widget_adv.selected_feature, SIGNAL(currentIndexChanged(int)), this,
|
||||
SLOT(on_selected_feature_changed(int)));
|
||||
connect(ui_widget_adv.feature_weight, SIGNAL(valueChanged(int)), this,
|
||||
SLOT(on_feature_weight_changed(int)));
|
||||
|
||||
connect(ui_widget.help, SIGNAL(clicked()), this,
|
||||
SLOT(on_help_clicked()));
|
||||
connect(ui_widget.close, SIGNAL(clicked()), this,
|
||||
SLOT(ask_for_closing()));
|
||||
|
||||
QObject* scene_obj = dynamic_cast<QObject*>(scene_interface);
|
||||
if(scene_obj)
|
||||
{
|
||||
|
|
@ -297,10 +309,26 @@ public Q_SLOTS:
|
|||
dock_widget->raise();
|
||||
if (Scene_points_with_normal_item* points_item
|
||||
= qobject_cast<Scene_points_with_normal_item*>(scene->item(scene->mainSelectionIndex())))
|
||||
{
|
||||
create_from_item(points_item);
|
||||
QAction* ps_selection = mw->findChild<QAction*>("actionPointSetSelection");
|
||||
if (ps_selection)
|
||||
ps_selection->trigger();
|
||||
else
|
||||
print_message("Warning: can't find Point Set Selection plugin");
|
||||
}
|
||||
else if (Scene_surface_mesh_item* mesh_item
|
||||
= qobject_cast<Scene_surface_mesh_item*>(scene->item(scene->mainSelectionIndex())))
|
||||
{
|
||||
create_from_item(mesh_item);
|
||||
QAction* sm_selection = mw->findChild<QAction*>("actionSelection");
|
||||
if (sm_selection)
|
||||
sm_selection->trigger();
|
||||
else
|
||||
print_message("Warning: can't find Surface Mesh Selection plugin");
|
||||
}
|
||||
|
||||
on_help_clicked();
|
||||
}
|
||||
|
||||
|
||||
|
|
@ -343,31 +371,39 @@ public Q_SLOTS:
|
|||
|
||||
void disable_everything ()
|
||||
{
|
||||
ui_widget.menu->setEnabled(false);
|
||||
ui_widget.display->setEnabled(false);
|
||||
ui_widget.classifier->setEnabled(false);
|
||||
ui_widget.features_menu->setEnabled(false);
|
||||
ui_widget.training_menu->setEnabled(false);
|
||||
ui_widget.classifier_menu->setEnabled(false);
|
||||
ui_widget.view->setEnabled(false);
|
||||
ui_widget.frame->setEnabled(false);
|
||||
}
|
||||
|
||||
void enable_computation()
|
||||
{
|
||||
ui_widget.menu->setEnabled(true);
|
||||
ui_widget.features_menu->setEnabled(true);
|
||||
ui_widget.training_menu->setEnabled(true);
|
||||
ui_widget.classifier_menu->setEnabled(false);
|
||||
action_statistics->setEnabled(false);
|
||||
action_train->setEnabled(false);
|
||||
action_reset_local->setEnabled(false);
|
||||
action_reset->setEnabled(false);
|
||||
action_random_region->setEnabled(false);
|
||||
action_validate->setEnabled(false);
|
||||
action_reset_local->setEnabled(true);
|
||||
action_reset->setEnabled(true);
|
||||
action_random_region->setEnabled(true);
|
||||
action_validate->setEnabled(true);
|
||||
action_save_config->setEnabled(false);
|
||||
action_load_config->setEnabled(false);
|
||||
action_run->setEnabled(false);
|
||||
action_run_smoothed->setEnabled(false);
|
||||
action_run_graphcut->setEnabled(false);
|
||||
ui_widget.display->setEnabled(true);
|
||||
ui_widget.classifier->setEnabled(true);
|
||||
ui_widget.view->setEnabled(true);
|
||||
ui_widget.frame->setEnabled(true);
|
||||
}
|
||||
|
||||
void enable_classif()
|
||||
{
|
||||
ui_widget.features_menu->setEnabled(true);
|
||||
ui_widget.training_menu->setEnabled(true);
|
||||
ui_widget.classifier_menu->setEnabled(true);
|
||||
action_statistics->setEnabled(true);
|
||||
action_train->setEnabled(true);
|
||||
action_reset_local->setEnabled(true);
|
||||
action_reset->setEnabled(true);
|
||||
|
|
@ -421,9 +457,15 @@ public Q_SLOTS:
|
|||
ui_widget_adv.selected_feature->clear();
|
||||
classif->fill_display_combo_box(ui_widget.display, ui_widget_adv.selected_feature);
|
||||
if (index >= ui_widget.display->count())
|
||||
{
|
||||
ui_widget.display->setCurrentIndex(1);
|
||||
change_color (classif, 1);
|
||||
}
|
||||
else
|
||||
{
|
||||
ui_widget.display->setCurrentIndex(index);
|
||||
change_color (classif, index);
|
||||
}
|
||||
ui_widget_adv.selected_feature->setCurrentIndex(0);
|
||||
}
|
||||
}
|
||||
|
|
@ -449,12 +491,6 @@ public Q_SLOTS:
|
|||
dynamic_cast<Surface_mesh_item_classification*>(it->second)->set_selection_item(selection_item);
|
||||
return it->second;
|
||||
}
|
||||
else if (Scene_points_with_normal_item* points_item
|
||||
= qobject_cast<Scene_points_with_normal_item*>(item))
|
||||
return create_from_item(points_item);
|
||||
else if (Scene_surface_mesh_item* mesh_item
|
||||
= qobject_cast<Scene_surface_mesh_item*>(item))
|
||||
return create_from_item(mesh_item);
|
||||
|
||||
return NULL;
|
||||
}
|
||||
|
|
@ -505,25 +541,68 @@ public Q_SLOTS:
|
|||
update_plugin_from_item(classif);
|
||||
return classif;
|
||||
}
|
||||
|
||||
int get_classifier ()
|
||||
{
|
||||
if (classifier->text() == QString(CGAL_CLASSIFICATION_ETHZ_ID))
|
||||
return 1;
|
||||
if (classifier->text() == QString(CGAL_CLASSIFICATION_TENSORFLOW_ID))
|
||||
return 3;
|
||||
if (classifier->text() == QString(CGAL_CLASSIFICATION_OPENCV_ID))
|
||||
return 2;
|
||||
if (classifier->text() == QString(CGAL_CLASSIFICATION_SOWF_ID))
|
||||
return 0;
|
||||
|
||||
std::cerr << "Error: unknown classifier" << std::endl;
|
||||
return -1;
|
||||
}
|
||||
|
||||
void run (Item_classification_base* classif, int method,
|
||||
std::size_t subdivisions = 1,
|
||||
double smoothing = 0.5)
|
||||
{
|
||||
classif->run (method, ui_widget.classifier->currentIndex(), subdivisions, smoothing);
|
||||
classif->run (method, get_classifier(), subdivisions, smoothing);
|
||||
}
|
||||
|
||||
void on_classifier_changed (int index)
|
||||
void on_help_clicked()
|
||||
{
|
||||
if (index == 0)
|
||||
{
|
||||
dock_widget_adv->show();
|
||||
dock_widget_adv->raise();
|
||||
}
|
||||
else
|
||||
dock_widget_adv->hide();
|
||||
QMessageBox::information(dock_widget, QString("Classification"),
|
||||
QString("Classification\n"
|
||||
"\n"
|
||||
"Welcome to CGAL Classification! Please read carefully this notice\n"
|
||||
"before using the plugin.\n"
|
||||
"\n"
|
||||
"[QUICK INTRODUCTION]\n"
|
||||
"\n"
|
||||
"In order to classify, you need to perform the following steps:\n"
|
||||
"\n"
|
||||
"1. Compute the features\n"
|
||||
"2. Set up the labels (ground, vegetation, etc.) that you need\n"
|
||||
"3. Select a training set for each of these labels\n"
|
||||
"4. Train the classifier\n"
|
||||
"\n"
|
||||
"You can then either select more inliers for training and train again\n"
|
||||
"to improve the results, classify with or without regularization or\n"
|
||||
"save the classifier's configuration.\n"
|
||||
"\n"
|
||||
"When loading a classifier's configuration, the computed features\n"
|
||||
"should be the same (same number of scales, etc.) and the labels should\n"
|
||||
"be the same as when the classifier's configuration was saved.\n"
|
||||
"\n"
|
||||
"For more information, please refer to the CGAL manual.\n"
|
||||
"\n"
|
||||
"[IMPORTANT NOTICE ON SAVING CLASSIFIED ITEMS]\n"
|
||||
"\n"
|
||||
"If you intend to save the file after classifying, PLEASE CLOSE THE\n"
|
||||
"CLASSIFICATION PLUGIN FIRST: for visualization, colors are saved in\n"
|
||||
"the point set. If you do not close the classification plugin, colors\n"
|
||||
"will be saved and might overwrite existing colors of the point cloud.\n"
|
||||
"\n"
|
||||
"Classification results will be saved if you use the PLY or LAS\n"
|
||||
"formats. Training will be saved if you use the PLY format.\n"));
|
||||
|
||||
}
|
||||
|
||||
|
||||
void on_compute_features_button_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
|
|
@ -534,17 +613,26 @@ public Q_SLOTS:
|
|||
return;
|
||||
}
|
||||
|
||||
bool ok = false;
|
||||
int nb_scales = QInputDialog::getInt((QWidget*)mw,
|
||||
tr("Compute Features"), // dialog title
|
||||
tr("Number of scales:"), // field label
|
||||
5, 1, 99, 1, &ok);
|
||||
if (!ok)
|
||||
QMultipleInputDialog dialog ("Compute Features", mw);
|
||||
QSpinBox* scales = dialog.add<QSpinBox> ("Number of scales:");
|
||||
scales->setRange (1, 99);
|
||||
scales->setValue (5);
|
||||
|
||||
QDoubleSpinBox* voxel_size = dialog.add<QDoubleSpinBox> ("Voxel size (0 for automatic):");
|
||||
voxel_size->setRange (0.0, 10000.0);
|
||||
voxel_size->setValue (0.0);
|
||||
voxel_size->setSingleStep (0.01);
|
||||
|
||||
if (dialog.exec() != QDialog::Accepted)
|
||||
return;
|
||||
|
||||
QApplication::setOverrideCursor(Qt::WaitCursor);
|
||||
|
||||
classif->compute_features (std::size_t(nb_scales));
|
||||
float vsize = float(voxel_size->value());
|
||||
if (vsize == 0.f)
|
||||
vsize = -1.f; // auto value
|
||||
|
||||
classif->compute_features (std::size_t(scales->value()), vsize);
|
||||
|
||||
update_plugin_from_item(classif);
|
||||
QApplication::restoreOverrideCursor();
|
||||
|
|
@ -563,18 +651,19 @@ public Q_SLOTS:
|
|||
|
||||
QString filename;
|
||||
|
||||
if (ui_widget.classifier->currentIndex() == 0)
|
||||
int classifier = get_classifier();
|
||||
if (classifier == 0) // Sum of Weighted Featuers
|
||||
filename = QFileDialog::getSaveFileName(mw,
|
||||
tr("Save classification configuration"),
|
||||
tr("%1 (CGAL classif config).xml").arg(classif->item()->name()),
|
||||
"CGAL classification configuration (*.xml);;");
|
||||
else if (ui_widget.classifier->currentIndex() == 1)
|
||||
else if (classifier == 1) // Random Forest (ETHZ)
|
||||
filename = QFileDialog::getSaveFileName(mw,
|
||||
tr("Save classification configuration"),
|
||||
tr("%1 (ETHZ random forest config).gz").arg(classif->item()->name()),
|
||||
"Compressed ETHZ random forest configuration (*.gz);;");
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
else if (ui_widget.classifier->currentIndex() == 2)
|
||||
else if (classifier == 2) // Random Forest (OpenCV)
|
||||
filename = QFileDialog::getSaveFileName(mw,
|
||||
tr("Save classification configuration"),
|
||||
tr("%1 (OpenCV %2.%3 random forest config).xml")
|
||||
|
|
@ -585,6 +674,13 @@ public Q_SLOTS:
|
|||
.arg(CV_MAJOR_VERSION)
|
||||
.arg(CV_MINOR_VERSION));
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
else if (classifier == 3) // Neural Network (TensorFlow)
|
||||
filename = QFileDialog::getSaveFileName(mw,
|
||||
tr("Save classification configuration"),
|
||||
tr("%1 (CGAL Neural Network config).xml").arg(classif->item()->name()),
|
||||
"CGAL TensorFlow Neural Network classification configuration (*.xml);;");
|
||||
#endif
|
||||
|
||||
if (filename == QString())
|
||||
return;
|
||||
|
|
@ -592,8 +688,7 @@ public Q_SLOTS:
|
|||
|
||||
QApplication::setOverrideCursor(Qt::WaitCursor);
|
||||
|
||||
classif->save_config (filename.toStdString().c_str(),
|
||||
ui_widget.classifier->currentIndex());
|
||||
classif->save_config (filename.toStdString().c_str(), classifier);
|
||||
|
||||
QApplication::restoreOverrideCursor();
|
||||
|
||||
|
|
@ -610,18 +705,19 @@ public Q_SLOTS:
|
|||
}
|
||||
QString filename;
|
||||
|
||||
if (ui_widget.classifier->currentIndex() == 0)
|
||||
int classifier = get_classifier();
|
||||
if (classifier == 0) // SOWF
|
||||
filename = QFileDialog::getOpenFileName(mw,
|
||||
tr("Open CGAL classification configuration"),
|
||||
".",
|
||||
"CGAL classification configuration (*.xml);;All Files (*)");
|
||||
else if (ui_widget.classifier->currentIndex() == 1)
|
||||
else if (classifier == 1) // ETHZ
|
||||
filename = QFileDialog::getOpenFileName(mw,
|
||||
tr("Open ETHZ random forest configuration"),
|
||||
".",
|
||||
"Compressed ETHZ random forest configuration (*.gz);;All Files (*)");
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
else if (ui_widget.classifier->currentIndex() == 2)
|
||||
else if (classifier == 2) // OpenCV
|
||||
filename = QFileDialog::getOpenFileName(mw,
|
||||
tr("Open OpenCV %2.%3 random forest configuration")
|
||||
.arg(CV_MAJOR_VERSION)
|
||||
|
|
@ -631,14 +727,21 @@ public Q_SLOTS:
|
|||
.arg(CV_MAJOR_VERSION)
|
||||
.arg(CV_MINOR_VERSION));
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
else if (classifier == 3) // TensorFlow
|
||||
filename = QFileDialog::getOpenFileName(mw,
|
||||
tr("Open CGAL Neural Network classification configuration"),
|
||||
".",
|
||||
tr("CGAL Neural Network classification configuration (*.xml);;All Files (*)"));
|
||||
#endif
|
||||
|
||||
if (filename == QString())
|
||||
return;
|
||||
|
||||
QApplication::setOverrideCursor(Qt::WaitCursor);
|
||||
|
||||
classif->load_config (filename.toStdString().c_str(),
|
||||
ui_widget.classifier->currentIndex());
|
||||
classif->load_config (filename.toStdString().c_str(), classifier);
|
||||
|
||||
update_plugin_from_item(classif);
|
||||
run (classif, 0);
|
||||
|
||||
|
|
@ -646,7 +749,32 @@ public Q_SLOTS:
|
|||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
void change_color (Item_classification_base* classif, int index)
|
||||
{
|
||||
float vmin = std::numeric_limits<float>::infinity();
|
||||
float vmax = std::numeric_limits<float>::infinity();
|
||||
|
||||
classif->change_color (index, &vmin, &vmax);
|
||||
|
||||
if (vmin == std::numeric_limits<float>::infinity() || vmax == std::numeric_limits<float>::infinity())
|
||||
{
|
||||
ui_widget.minDisplay->setEnabled(false);
|
||||
ui_widget.minDisplay->setText("Min");
|
||||
ui_widget.maxDisplay->setEnabled(false);
|
||||
ui_widget.maxDisplay->setText("Max");
|
||||
}
|
||||
else
|
||||
{
|
||||
ui_widget.minDisplay->setEnabled(true);
|
||||
ui_widget.minDisplay->setText(tr("Min (%1)").arg(vmin));
|
||||
ui_widget.maxDisplay->setEnabled(true);
|
||||
ui_widget.maxDisplay->setText(tr("Max (%1)").arg(vmax));
|
||||
}
|
||||
|
||||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
|
||||
void on_display_button_clicked(int index)
|
||||
{
|
||||
Item_classification_base* classif
|
||||
|
|
@ -654,7 +782,87 @@ public Q_SLOTS:
|
|||
if(!classif)
|
||||
return;
|
||||
|
||||
classif->change_color (index);
|
||||
change_color (classif, index);
|
||||
}
|
||||
|
||||
float display_button_value (QPushButton* button)
|
||||
{
|
||||
std::string text = button->text().toStdString();
|
||||
|
||||
std::size_t pos1 = text.find('(');
|
||||
if (pos1 == std::string::npos)
|
||||
return std::numeric_limits<float>::infinity();
|
||||
std::size_t pos2 = text.find(')');
|
||||
if (pos2 == std::string::npos)
|
||||
return std::numeric_limits<float>::infinity();
|
||||
|
||||
std::string fstring (text.begin() + pos1 + 1,
|
||||
text.begin() + pos2);
|
||||
|
||||
return float (std::atof(fstring.c_str()));
|
||||
}
|
||||
|
||||
void on_min_display_button_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
= get_classification();
|
||||
if(!classif)
|
||||
return;
|
||||
|
||||
float vmin = display_button_value (ui_widget.minDisplay);
|
||||
float vmax = display_button_value (ui_widget.maxDisplay);
|
||||
|
||||
if (vmin == std::numeric_limits<float>::infinity()
|
||||
|| vmax == std::numeric_limits<float>::infinity())
|
||||
return;
|
||||
|
||||
bool ok = false;
|
||||
vmin = float(QInputDialog::getDouble((QWidget*)mw,
|
||||
tr("Set display ramp minimum value (saturate under):"),
|
||||
tr("Minimum value (pale blue):"),
|
||||
double(vmin),
|
||||
-10000000.0,
|
||||
double(vmax), 5, &ok));
|
||||
if (!ok)
|
||||
return;
|
||||
|
||||
int index = ui_widget.display->currentIndex();
|
||||
|
||||
classif->change_color (index, &vmin, &vmax);
|
||||
ui_widget.minDisplay->setText(tr("Min* (%1)").arg(vmin));
|
||||
|
||||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
void on_max_display_button_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
= get_classification();
|
||||
if(!classif)
|
||||
return;
|
||||
|
||||
float vmin = display_button_value (ui_widget.minDisplay);
|
||||
float vmax = display_button_value (ui_widget.maxDisplay);
|
||||
|
||||
if (vmin == std::numeric_limits<float>::infinity()
|
||||
|| vmax == std::numeric_limits<float>::infinity())
|
||||
return;
|
||||
|
||||
bool ok = false;
|
||||
vmax = float(QInputDialog::getDouble((QWidget*)mw,
|
||||
tr("Set display ramp maximum value (saturate over):"),
|
||||
tr("Maximum value (dark red):"),
|
||||
double(vmax),
|
||||
double(vmin),
|
||||
10000000.0, 5, &ok));
|
||||
if (!ok)
|
||||
return;
|
||||
|
||||
int index = ui_widget.display->currentIndex();
|
||||
|
||||
classif->change_color (index, &vmin, &vmax);
|
||||
ui_widget.maxDisplay->setText(tr("Max* (%1)").arg(vmax));
|
||||
|
||||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
|
|
@ -846,6 +1054,7 @@ public Q_SLOTS:
|
|||
add_new_label (classif, n);
|
||||
|
||||
add_label_button();
|
||||
update_plugin_from_item(classif);
|
||||
}
|
||||
|
||||
void on_use_config_building_clicked()
|
||||
|
|
@ -1066,6 +1275,68 @@ public Q_SLOTS:
|
|||
(bbox.zmin() + bbox.zmax()) / 2.) + offset);
|
||||
}
|
||||
|
||||
void on_statistics_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
= get_classification();
|
||||
if(!classif)
|
||||
{
|
||||
print_message("Error: there is no point set classification item!");
|
||||
return;
|
||||
}
|
||||
|
||||
QApplication::setOverrideCursor(Qt::WaitCursor);
|
||||
std::string str = classif->feature_statistics();
|
||||
QApplication::restoreOverrideCursor();
|
||||
|
||||
QMultipleInputDialog dialog ("Feature Statistics", mw);
|
||||
QLabel* text = dialog.add<QLabel> ("");
|
||||
text->setText(str.c_str());
|
||||
dialog.exec_no_cancel();
|
||||
}
|
||||
|
||||
  void on_switch_classifier_clicked()
  {
    QMultipleInputDialog dialog ("Which classifier do you want to use?", mw);

    QRadioButton* ethz = dialog.add<QRadioButton> (CGAL_CLASSIFICATION_ETHZ_ID);
    ethz->setChecked(true);

    QRadioButton* sowf = dialog.add<QRadioButton> (CGAL_CLASSIFICATION_SOWF_ID);

#ifdef CGAL_LINKED_WITH_TENSORFLOW
    QRadioButton* tensorflow = dialog.add<QRadioButton> (CGAL_CLASSIFICATION_TENSORFLOW_ID);
#endif

#ifdef CGAL_LINKED_WITH_OPENCV
    QRadioButton* opencv = dialog.add<QRadioButton> (CGAL_CLASSIFICATION_OPENCV_ID);
#endif

    if (dialog.exec() != QDialog::Accepted)
      return;

    if (ethz->isChecked())
      classifier->setText(CGAL_CLASSIFICATION_ETHZ_ID);
    else if (sowf->isChecked())
      classifier->setText(CGAL_CLASSIFICATION_SOWF_ID);
#ifdef CGAL_LINKED_WITH_TENSORFLOW
    else if (tensorflow->isChecked())
      classifier->setText(CGAL_CLASSIFICATION_TENSORFLOW_ID);
#endif
#ifdef CGAL_LINKED_WITH_OPENCV
    else if (opencv->isChecked())
      classifier->setText(CGAL_CLASSIFICATION_OPENCV_ID);
#endif

    if (sowf->isChecked())
    {
      dock_widget_adv->show();
      dock_widget_adv->raise();
    }
    else
      dock_widget_adv->hide();
  }
|
||||
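// Sketch only, not shown in this diff: the integer ids consumed later by
// get_classifier() / train() / run() (0 = Sum of Weighted Features, 1 = ETHZ
// Random Forest, 2 = OpenCV Random Forest, 3 = TensorFlow Neural Network) are
// assumed to be recovered from the classifier action's text roughly like this;
// the real get_classifier() implementation is outside this excerpt.
int get_classifier_sketch (const QString& text)
{
  if (text == CGAL_CLASSIFICATION_SOWF_ID)       return 0;
  if (text == CGAL_CLASSIFICATION_ETHZ_ID)       return 1;
  if (text == CGAL_CLASSIFICATION_OPENCV_ID)     return 2;
  if (text == CGAL_CLASSIFICATION_TENSORFLOW_ID) return 3;
  return -1; // unknown classifier name
}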
|
||||
void on_train_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
|
|
@ -1076,41 +1347,48 @@ public Q_SLOTS:
|
|||
return;
|
||||
}
|
||||
|
||||
int nb_trials = 0;
|
||||
int num_trees = 0;
|
||||
int max_depth = 0;
|
||||
|
||||
if (ui_widget.classifier->currentIndex() == 0)
|
||||
QMultipleInputDialog dialog ("Train Classifier", mw);
|
||||
|
||||
int classifier = get_classifier();
|
||||
if (classifier == 0) // SOWF
|
||||
{
|
||||
bool ok = false;
|
||||
nb_trials = QInputDialog::getInt((QWidget*)mw,
|
||||
tr("Train Classifier"), // dialog title
|
||||
tr("Number of trials:"), // field label
|
||||
800, 1, 99999, 50, &ok);
|
||||
if (!ok)
|
||||
return;
|
||||
QSpinBox* trials = dialog.add<QSpinBox> ("Number of trials: ", "trials");
|
||||
trials->setRange (1, 99999);
|
||||
trials->setValue (800);
|
||||
}
|
||||
else
|
||||
else if (classifier == 1 || classifier == 2) // random forest
|
||||
{
|
||||
QMultipleInputDialog dialog ("Train Random Forest Classifier", mw);
|
||||
QSpinBox* trees = dialog.add<QSpinBox> ("Number of trees: ");
|
||||
QSpinBox* trees = dialog.add<QSpinBox> ("Number of trees: ", "num_trees");
|
||||
trees->setRange (1, 9999);
|
||||
trees->setValue (25);
|
||||
QSpinBox* depth = dialog.add<QSpinBox> ("Maximum depth of tree: ");
|
||||
QSpinBox* depth = dialog.add<QSpinBox> ("Maximum depth of tree: ", "max_depth");
|
||||
depth->setRange (1, 9999);
|
||||
depth->setValue (20);
|
||||
|
||||
if (dialog.exec() != QDialog::Accepted)
|
||||
return;
|
||||
num_trees = trees->value();
|
||||
max_depth = depth->value();
|
||||
}
|
||||
    else if (classifier == 3) // neural network
    {
      QSpinBox* trials = dialog.add<QSpinBox> ("Number of trials: ", "trials");
      trials->setRange (1, 99999);
      trials->setValue (500);
      QDoubleSpinBox* rate = dialog.add<QDoubleSpinBox> ("Learning rate: ", "learning_rate");
      rate->setRange (0.00001, 10000.0);
      rate->setValue (0.001);
      rate->setDecimals (5);
      QSpinBox* batch = dialog.add<QSpinBox> ("Batch size: ", "batch_size");
      batch->setRange (1, 2000000000);
      batch->setValue (1000);
      dialog.add<QLineEdit> ("Hidden layer size(s): ", "hidden_layers");
      QCheckBox* restart = dialog.add<QCheckBox> ("Restart from scratch: ", "restart");
      restart->setChecked (false);
    }

    if (dialog.exec() != QDialog::Accepted)
      return;
|
||||
|
||||
QApplication::setOverrideCursor(Qt::WaitCursor);
|
||||
CGAL::Real_timer t;
|
||||
t.start();
|
||||
classif->train(ui_widget.classifier->currentIndex(), nb_trials,
|
||||
num_trees, max_depth);
|
||||
classif->train(classifier, dialog);
|
||||
t.stop();
|
||||
std::cerr << "Done in " << t.time() << " second(s)" << std::endl;
|
||||
QApplication::restoreOverrideCursor();
|
||||
|
|
@ -1171,6 +1449,10 @@ public Q_SLOTS:
|
|||
connect(change_color, SIGNAL(triggered()), this,
|
||||
SLOT(on_color_changed_clicked()));
|
||||
|
||||
QAction* change_name = label_buttons.back().menu->addAction ("Change name");
|
||||
connect(change_name, SIGNAL(triggered()), this,
|
||||
SLOT(on_name_changed_clicked()));
|
||||
|
||||
QAction* create = label_buttons.back().menu->addAction ("Create point set item from labeled points");
|
||||
connect(create, SIGNAL(triggered()), this,
|
||||
SLOT(on_create_point_set_item()));
|
||||
|
|
@ -1245,7 +1527,7 @@ public Q_SLOTS:
|
|||
label_buttons.erase (label_buttons.begin() + position);
|
||||
add_label_button();
|
||||
}
|
||||
|
||||
update_plugin_from_item(classif);
|
||||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
|
|
@ -1271,7 +1553,11 @@ public Q_SLOTS:
|
|||
int position = row_index * 3 + column_index;
|
||||
|
||||
QColor color = label_buttons[position].color;
|
||||
color = QColorDialog::getColor(color, (QWidget*)mw, "Change of color of label");
|
||||
color = QColorDialog::getColor(color, (QWidget*)mw, "Change color of label");
|
||||
|
||||
if (!color.isValid())
|
||||
return;
|
||||
|
||||
label_buttons[position].change_color (color);
|
||||
classif->change_label_color (position,
|
||||
color);
|
||||
|
|
@ -1280,6 +1566,47 @@ public Q_SLOTS:
|
|||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
void on_name_changed_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
= get_classification();
|
||||
if(!classif)
|
||||
{
|
||||
print_message("Error: there is no point set classification item!");
|
||||
return;
|
||||
}
|
||||
|
||||
QPushButton* label_clicked = qobject_cast<QPushButton*>(QObject::sender()->parent()->parent());
|
||||
if (label_clicked == NULL)
|
||||
std::cerr << "Error" << std::endl;
|
||||
else
|
||||
{
|
||||
int index = ui_widget.labelGrid->indexOf(label_clicked);
|
||||
int row_index, column_index, row_span, column_span;
|
||||
ui_widget.labelGrid->getItemPosition(index, &row_index, &column_index, &row_span, &column_span);
|
||||
|
||||
int position = row_index * 3 + column_index;
|
||||
|
||||
bool ok;
|
||||
QString name =
|
||||
QInputDialog::getText((QWidget*)mw,
|
||||
tr("Change name of label"), // dialog title
|
||||
tr("New name:"), // field label
|
||||
QLineEdit::Normal,
|
||||
classif->label(position)->name().c_str(),
|
||||
&ok);
|
||||
|
||||
if (!ok)
|
||||
return;
|
||||
|
||||
classif->change_label_name (position, name.toStdString());
|
||||
}
|
||||
|
||||
update_plugin_from_item(classif);
|
||||
classif->update_color ();
|
||||
item_changed(classif->item());
|
||||
}
|
||||
|
||||
void on_add_selection_to_training_set_clicked()
|
||||
{
|
||||
Item_classification_base* classif
|
||||
|
|
@ -1312,10 +1639,7 @@ public Q_SLOTS:
|
|||
Item_classification_base* classif
|
||||
= get_classification();
|
||||
if(!classif)
|
||||
{
|
||||
print_message("Error: there is no point set classification item!");
|
||||
return;
|
||||
}
|
||||
|
||||
if (classif->number_of_features() <= (std::size_t)v)
|
||||
return;
|
||||
|
|
@ -1408,16 +1732,19 @@ private:
|
|||
|
||||
QDockWidget* dock_widget;
|
||||
QDockWidget* dock_widget_adv;
|
||||
QAction* action_train;
|
||||
QAction* action_statistics;
|
||||
QAction* action_reset_local;
|
||||
QAction* action_reset;
|
||||
QAction* action_random_region;
|
||||
QAction* action_validate;
|
||||
QAction* action_save_config;
|
||||
QAction* action_load_config;
|
||||
|
||||
QAction* classifier;
|
||||
QAction* action_train;
|
||||
QAction* action_run;
|
||||
QAction* action_run_smoothed;
|
||||
QAction* action_run_graphcut;
|
||||
QAction* action_save_config;
|
||||
QAction* action_load_config;
|
||||
|
||||
std::vector<LabelButton> label_buttons;
|
||||
QPushButton* label_button;
|
||||
|
|
|
|||
|
|
@ -6,8 +6,8 @@
|
|||
<rect>
|
||||
<x>0</x>
|
||||
<y>0</y>
|
||||
<width>289</width>
|
||||
<height>154</height>
|
||||
<width>411</width>
|
||||
<height>194</height>
|
||||
</rect>
|
||||
</property>
|
||||
<property name="sizePolicy">
|
||||
|
|
@ -21,112 +21,151 @@
|
|||
</property>
|
||||
<widget class="QWidget" name="dockWidgetContents">
|
||||
<layout class="QVBoxLayout" name="verticalLayout_2">
|
||||
<item>
|
||||
<layout class="QHBoxLayout" name="horizontalLayout_7">
|
||||
<item>
|
||||
<widget class="QPushButton" name="menu">
|
||||
<property name="font">
|
||||
<font>
|
||||
<weight>75</weight>
|
||||
<bold>true</bold>
|
||||
</font>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>Classification</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="Line" name="line">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Vertical</enum>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QLabel" name="label">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Maximum" vsizetype="Preferred">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>View:</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QComboBox" name="display">
|
||||
<property name="currentIndex">
|
||||
<number>0</number>
|
||||
</property>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Real colors</string>
|
||||
</property>
|
||||
</item>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Classification</string>
|
||||
</property>
|
||||
</item>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Training sets</string>
|
||||
</property>
|
||||
</item>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</item>
|
||||
<item>
|
||||
<layout class="QHBoxLayout" name="horizontalLayout">
|
||||
<item>
|
||||
<widget class="QLabel" name="label_7">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Maximum" vsizetype="Preferred">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
<widget class="QMenuBar" name="menubar">
|
||||
<widget class="QMenu" name="features_menu">
|
||||
<property name="title">
|
||||
<string>Features</string>
|
||||
</property>
|
||||
</widget>
|
||||
<widget class="QMenu" name="training_menu">
|
||||
<property name="title">
|
||||
<string>Training Sets</string>
|
||||
</property>
|
||||
</widget>
|
||||
<widget class="QMenu" name="classifier_menu">
|
||||
<property name="title">
|
||||
<string>Classifier</string>
|
||||
</property>
|
||||
</widget>
|
||||
<addaction name="features_menu"/>
|
||||
<addaction name="training_menu"/>
|
||||
<addaction name="classifier_menu"/>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<spacer name="horizontalSpacer_2">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="sizeHint" stdset="0">
|
||||
<size>
|
||||
<width>40</width>
|
||||
<height>20</height>
|
||||
</size>
|
||||
</property>
|
||||
</spacer>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="help">
|
||||
<property name="text">
|
||||
<string>Classifier:</string>
|
||||
<string/>
|
||||
</property>
|
||||
<property name="icon">
|
||||
<iconset resource="../../Polyhedron_3.qrc">
|
||||
<normaloff>:/cgal/icons/resources/help_button.png</normaloff>:/cgal/icons/resources/help_button.png</iconset>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QComboBox" name="classifier">
|
||||
<property name="currentIndex">
|
||||
<number>1</number>
|
||||
<widget class="QPushButton" name="close">
|
||||
<property name="text">
|
||||
<string/>
|
||||
</property>
|
||||
<property name="icon">
|
||||
<iconset resource="../../Polyhedron_3.qrc">
|
||||
<normaloff>:/cgal/icons/check-off.png</normaloff>:/cgal/icons/check-off.png</iconset>
|
||||
</property>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Sum of Weighted Features</string>
|
||||
</property>
|
||||
</item>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Random Forest (ETHZ)</string>
|
||||
</property>
|
||||
</item>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QFrame" name="frame">
|
||||
<widget class="QGroupBox" name="view">
|
||||
<property name="title">
|
||||
<string>View</string>
|
||||
</property>
|
||||
<layout class="QHBoxLayout" name="horizontalLayout_3">
|
||||
<item>
|
||||
<widget class="QComboBox" name="display">
|
||||
<property name="currentIndex">
|
||||
<number>0</number>
|
||||
</property>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Real colors</string>
|
||||
</property>
|
||||
</item>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Classification</string>
|
||||
</property>
|
||||
</item>
|
||||
<item>
|
||||
<property name="text">
|
||||
<string>Training sets</string>
|
||||
</property>
|
||||
</item>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="Line" name="line_3">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Vertical</enum>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="minDisplay">
|
||||
<property name="enabled">
|
||||
<bool>false</bool>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>Min</string>
|
||||
</property>
|
||||
<property name="checkable">
|
||||
<bool>false</bool>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<spacer name="horizontalSpacer">
|
||||
<property name="orientation">
|
||||
<enum>Qt::Horizontal</enum>
|
||||
</property>
|
||||
<property name="sizeHint" stdset="0">
|
||||
<size>
|
||||
<width>65</width>
|
||||
<height>20</height>
|
||||
</size>
|
||||
</property>
|
||||
</spacer>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QPushButton" name="maxDisplay">
|
||||
<property name="enabled">
|
||||
<bool>false</bool>
|
||||
</property>
|
||||
<property name="text">
|
||||
<string>Max</string>
|
||||
</property>
|
||||
</widget>
|
||||
</item>
|
||||
</layout>
|
||||
</widget>
|
||||
</item>
|
||||
<item>
|
||||
<widget class="QGroupBox" name="frame">
|
||||
<property name="sizePolicy">
|
||||
<sizepolicy hsizetype="Expanding" vsizetype="Expanding">
|
||||
<horstretch>0</horstretch>
|
||||
<verstretch>0</verstretch>
|
||||
</sizepolicy>
|
||||
</property>
|
||||
<property name="frameShape">
|
||||
<enum>QFrame::StyledPanel</enum>
|
||||
</property>
|
||||
<property name="frameShadow">
|
||||
<enum>QFrame::Raised</enum>
|
||||
<property name="title">
|
||||
<string>Labels</string>
|
||||
</property>
|
||||
<layout class="QVBoxLayout" name="verticalLayout">
|
||||
<item>
|
||||
|
|
@ -155,6 +194,8 @@
|
|||
</layout>
|
||||
</widget>
|
||||
</widget>
|
||||
<resources/>
|
||||
<resources>
|
||||
<include location="../../Polyhedron_3.qrc"/>
|
||||
</resources>
|
||||
<connections/>
|
||||
</ui>
|
||||
|
|
|
|||
|
|
@ -12,6 +12,8 @@
|
|||
#include <CGAL/Timer.h>
|
||||
#include <CGAL/Memory_sizer.h>
|
||||
|
||||
#include <QLineEdit>
|
||||
|
||||
#include <CGAL/Three/Viewer_interface.h>
|
||||
|
||||
#include <set>
|
||||
|
|
@ -21,6 +23,7 @@
|
|||
|
||||
Cluster_classification::Cluster_classification(Scene_points_with_normal_item* points)
|
||||
: m_points (points)
|
||||
, m_input_is_las (false)
|
||||
{
|
||||
m_index_color = 1;
|
||||
|
||||
|
|
@ -59,6 +62,7 @@ Cluster_classification::Cluster_classification(Scene_points_with_normal_item* po
|
|||
boost::tie (las_classif, las_found) = m_points->point_set()->property_map<unsigned char>("classification");
|
||||
if (las_found)
|
||||
{
|
||||
m_input_is_las = true;
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
|
|
@ -126,20 +130,26 @@ Cluster_classification::Cluster_classification(Scene_points_with_normal_item* po
|
|||
|
||||
if (training_found)
|
||||
{
|
||||
if (las_found && (m_training[*it] == 0 || m_training[*it] == 1)) // Unclassified class in LAS
|
||||
m_training[*it] = -1;
|
||||
else if (m_training[*it] != -1)
|
||||
m_training[*it] = used_indices[std::size_t(m_training[*it])];
|
||||
if (c != -1)
|
||||
if (std::size_t(current_idx) != used_indices.size()) // Empty indices -> reorder indices in point set
|
||||
{
|
||||
if (las_found && (m_training[*it] == 0 || m_training[*it] == 1)) // Unclassified class in LAS
|
||||
m_training[*it] = -1;
|
||||
else if (m_training[*it] != -1)
|
||||
m_training[*it] = used_indices[std::size_t(m_training[*it])];
|
||||
}
|
||||
if (c != -1 && m_training[*it] != -1)
|
||||
m_clusters[c].training() = m_training[*it];
|
||||
}
|
||||
if (classif_found)
|
||||
{
|
||||
if (las_found && (m_classif[*it] == 0 || m_classif[*it] == 1)) // Unclassified class in LAS
|
||||
m_classif[*it] = -1;
|
||||
else if (m_classif[*it] != -1)
|
||||
m_classif[*it] = used_indices[std::size_t(m_classif[*it])];
|
||||
if (c != -1)
|
||||
if (std::size_t(current_idx) != used_indices.size()) // Empty indices -> reorder indices in point set
|
||||
{
|
||||
if (las_found && (m_classif[*it] == 0 || m_classif[*it] == 1)) // Unclassified class in LAS
|
||||
m_classif[*it] = -1;
|
||||
else if (m_classif[*it] != -1)
|
||||
m_classif[*it] = used_indices[std::size_t(m_classif[*it])];
|
||||
}
|
||||
if (c != -1 && m_classif[*it] != -1)
|
||||
m_clusters[c].label() = m_classif[*it];
|
||||
}
|
||||
}
|
||||
|
|
@ -221,9 +231,12 @@ Cluster_classification::Cluster_classification(Scene_points_with_normal_item* po
|
|||
update_comments_of_point_set_item();
|
||||
|
||||
m_sowf = new Sum_of_weighted_features (m_labels, m_features);
|
||||
m_ethz = new ETHZ_random_forest (m_labels, m_features);
|
||||
m_ethz = NULL;
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
m_random_forest = NULL;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
m_neural_network = NULL;
|
||||
#endif
|
||||
|
||||
// Compute neighborhood
|
||||
|
|
@ -272,17 +285,6 @@ Cluster_classification::Cluster_classification(Scene_points_with_normal_item* po
|
|||
|
||||
Cluster_classification::~Cluster_classification()
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
int c = m_cluster_id[*it];
|
||||
if (c != -1)
|
||||
{
|
||||
m_training[*it] = m_clusters[c].training();
|
||||
m_classif[*it] = m_clusters[c].label();
|
||||
}
|
||||
}
|
||||
|
||||
if (m_sowf != NULL)
|
||||
delete m_sowf;
|
||||
if (m_ethz != NULL)
|
||||
|
|
@ -290,9 +292,100 @@ Cluster_classification::~Cluster_classification()
|
|||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
if (m_random_forest != NULL)
|
||||
delete m_random_forest;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
if (m_neural_network != NULL)
|
||||
delete m_neural_network;
|
||||
#endif
|
||||
if (m_points != NULL)
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
int c = m_cluster_id[*it];
|
||||
if (c != -1)
|
||||
{
|
||||
m_training[*it] = m_clusters[c].training();
|
||||
m_classif[*it] = m_clusters[c].label();
|
||||
}
|
||||
else
|
||||
{
|
||||
m_training[*it] = -1;
|
||||
m_classif[*it] = -1;
|
||||
}
|
||||
}
|
||||
|
||||
// For LAS saving, convert classification info in the LAS standard
|
||||
// if (m_input_is_las)
|
||||
{
|
||||
Point_set::Property_map<unsigned char> las_classif
|
||||
= m_points->point_set()->add_property_map<unsigned char>("classification", 0).first;
|
||||
|
||||
std::vector<unsigned char> label_indices;
|
||||
|
||||
      unsigned char custom = 19;
      for (std::size_t i = 0; i < m_labels.size(); ++ i)
      {
        if (m_labels[i]->name() == "ground")
          label_indices.push_back (2);
        else if (m_labels[i]->name() == "low_veget")
          label_indices.push_back (3);
        else if (m_labels[i]->name() == "med_veget" || m_labels[i]->name() == "vegetation")
          label_indices.push_back (4);
        else if (m_labels[i]->name() == "high_veget")
          label_indices.push_back (5);
        else if (m_labels[i]->name() == "building" || m_labels[i]->name() == "roof")
          label_indices.push_back (6);
        else if (m_labels[i]->name() == "noise")
          label_indices.push_back (7);
        else if (m_labels[i]->name() == "reserved" || m_labels[i]->name() == "facade")
          label_indices.push_back (8);
        else if (m_labels[i]->name() == "water")
          label_indices.push_back (9);
        else if (m_labels[i]->name() == "rail")
          label_indices.push_back (10);
        else if (m_labels[i]->name() == "road_surface")
          label_indices.push_back (11);
        else if (m_labels[i]->name() == "reserved_2")
          label_indices.push_back (12);
        else if (m_labels[i]->name() == "wire_guard")
          label_indices.push_back (13);
        else if (m_labels[i]->name() == "wire_conduct")
          label_indices.push_back (14);
        else if (m_labels[i]->name() == "trans_tower")
          label_indices.push_back (15);
        else if (m_labels[i]->name() == "wire_connect")
          label_indices.push_back (16);
        else if (m_labels[i]->name() == "bridge_deck")
          label_indices.push_back (17);
        else if (m_labels[i]->name() == "high_noise")
          label_indices.push_back (18);
        else
          label_indices.push_back (custom ++);
      }
|
||||
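// Sketch only, not part of the commit: the same label-name -> LAS class code
// mapping written as a lookup table, with names absent from the table receiving
// the running "custom" codes starting at 19, exactly as in the chain above.
// las_code is a hypothetical helper (the brace initializer requires C++11).
#include <map>
#include <string>
unsigned char las_code (const std::string& name, unsigned char& custom)
{
  static const std::map<std::string, unsigned char> codes = {
    { "ground", 2 }, { "low_veget", 3 }, { "med_veget", 4 }, { "vegetation", 4 },
    { "high_veget", 5 }, { "building", 6 }, { "roof", 6 }, { "noise", 7 },
    { "reserved", 8 }, { "facade", 8 }, { "water", 9 }, { "rail", 10 },
    { "road_surface", 11 }, { "reserved_2", 12 }, { "wire_guard", 13 },
    { "wire_conduct", 14 }, { "trans_tower", 15 }, { "wire_connect", 16 },
    { "bridge_deck", 17 }, { "high_noise", 18 } };
  std::map<std::string, unsigned char>::const_iterator found = codes.find (name);
  return (found != codes.end() ? found->second : custom ++);
}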
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->end(); ++ it)
|
||||
{
|
||||
int c = m_classif[*it];
|
||||
unsigned char lc = 1; // unclassified in LAS standard
|
||||
if (c != -1)
|
||||
lc = label_indices[std::size_t(c)];
|
||||
|
||||
las_classif[*it] = lc;
|
||||
|
||||
int t = m_training[*it];
|
||||
unsigned char lt = 1; // unclassified in LAS standard
|
||||
if (t != -1)
|
||||
lt = label_indices[std::size_t(t)];
|
||||
|
||||
m_training[*it] = int(lt);
|
||||
}
|
||||
|
||||
m_points->point_set()->remove_property_map (m_classif);
|
||||
}
|
||||
|
||||
|
||||
reset_colors();
|
||||
erase_item();
|
||||
}
|
||||
|
|
@ -313,17 +406,7 @@ void Cluster_classification::backup_existing_colors_and_add_new()
|
|||
m_points->point_set()->remove_colors();
|
||||
}
|
||||
|
||||
m_red = m_points->point_set()->add_property_map<unsigned char>("red").first;
|
||||
m_green = m_points->point_set()->add_property_map<unsigned char>("green").first;
|
||||
m_blue = m_points->point_set()->add_property_map<unsigned char>("blue").first;
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
m_red[*it] = 0;
|
||||
m_green[*it] = 0;
|
||||
m_blue[*it] = 0;
|
||||
}
|
||||
m_points->point_set()->check_colors();
|
||||
m_points->point_set()->add_colors();
|
||||
}
|
||||
|
||||
void Cluster_classification::reset_colors()
|
||||
|
|
@ -334,40 +417,13 @@ void Cluster_classification::reset_colors()
|
|||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
m_red[*it] = m_color[*it][0];
|
||||
m_green[*it] = m_color[*it][1];
|
||||
m_blue[*it] = m_color[*it][2];
|
||||
}
|
||||
m_points->point_set()->set_color(*it, m_color[*it]);
|
||||
|
||||
m_points->point_set()->remove_property_map(m_color);
|
||||
}
|
||||
}
|
||||
|
||||
// Write point set to .PLY file
|
||||
bool Cluster_classification::write_output(std::ostream& stream)
|
||||
{
|
||||
if (m_features.size() == 0)
|
||||
return false;
|
||||
|
||||
reset_indices();
|
||||
|
||||
stream.precision (std::numeric_limits<double>::digits10 + 2);
|
||||
|
||||
// std::vector<Color> colors;
|
||||
// for (std::size_t i = 0; i < m_labels.size(); ++ i)
|
||||
// {
|
||||
// Color c = {{ (unsigned char)(m_labels[i].second.red()),
|
||||
// (unsigned char)(m_labels[i].second.green()),
|
||||
// (unsigned char)(m_labels[i].second.blue()) }};
|
||||
// colors.push_back (c);
|
||||
// }
|
||||
|
||||
// m_psc->write_classification_to_ply (stream);
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
void Cluster_classification::change_color (int index)
|
||||
void Cluster_classification::change_color (int index, float* vmin, float* vmax)
|
||||
{
|
||||
m_index_color = index;
|
||||
|
||||
|
|
@ -377,138 +433,173 @@ void Cluster_classification::change_color (int index)
|
|||
static Color_ramp ramp;
|
||||
ramp.build_rainbow();
|
||||
reset_indices();
|
||||
|
||||
if (index_color == -1) // item color
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
m_red[*it] = 0;
|
||||
m_green[*it] = 0;
|
||||
m_blue[*it] = 0;
|
||||
}
|
||||
}
|
||||
else if (index_color == 0) // real colors
|
||||
{
|
||||
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
m_red[*it] = m_color[*it][0];
|
||||
m_green[*it] = m_color[*it][1];
|
||||
m_blue[*it] = m_color[*it][2];
|
||||
}
|
||||
}
|
||||
else if (index_color == 1) // classif
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
QColor color (0, 0, 0);
|
||||
int cid = m_cluster_id[*it];
|
||||
if (cid != -1)
|
||||
{
|
||||
std::size_t c = m_clusters[cid].label();
|
||||
|
||||
if (c != std::size_t(-1))
|
||||
color = m_label_colors[c];
|
||||
}
|
||||
|
||||
m_red[*it] = color.red();
|
||||
m_green[*it] = color.green();
|
||||
m_blue[*it] = color.blue();
|
||||
}
|
||||
}
|
||||
else if (index_color == 2) // training
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
QColor color (0, 0, 0);
|
||||
int cid = m_cluster_id[*it];
|
||||
float div = 1;
|
||||
|
||||
if (cid != -1)
|
||||
{
|
||||
int c = m_clusters[cid].training();
|
||||
int c2 = m_clusters[cid].label();
|
||||
|
||||
if (c != -1)
|
||||
color = m_label_colors[std::size_t(c)];
|
||||
|
||||
if (c != c2)
|
||||
div = 2;
|
||||
}
|
||||
m_red[*it] = (color.red() / div);
|
||||
m_green[*it] = (color.green() / div);
|
||||
m_blue[*it] = (color.blue() / div);
|
||||
}
|
||||
}
|
||||
else if (index_color == 3) // clusters
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
int cid = m_cluster_id[*it];
|
||||
|
||||
if (cid != -1)
|
||||
{
|
||||
srand(cid);
|
||||
m_red[*it] = 64 + rand() % 192;
|
||||
m_green[*it] = 64 + rand() % 192;
|
||||
m_blue[*it] = 64 + rand() % 192;
|
||||
}
|
||||
else
|
||||
{
|
||||
m_red[*it] = 0;
|
||||
m_green[*it] = 0;
|
||||
m_blue[*it] = 0;
|
||||
}
|
||||
}
|
||||
}
|
||||
m_points->point_set()->remove_colors();
|
||||
else
|
||||
{
|
||||
Feature_handle feature = m_features[index_color - 4];
|
||||
|
||||
float max = 0.;
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
if (!m_points->point_set()->has_colors())
|
||||
m_points->point_set()->add_colors();
|
||||
|
||||
if (index_color == 0) // real colors
|
||||
{
|
||||
int cid = m_cluster_id[*it];
|
||||
if (cid != -1)
|
||||
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
m_points->point_set()->set_color(*it, m_color[*it]);
|
||||
}
|
||||
else if (index_color == 1) // classif
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
if (feature->value(cid) > max)
|
||||
max = feature->value(cid);
|
||||
QColor color (0, 0, 0);
|
||||
int cid = m_cluster_id[*it];
|
||||
if (cid != -1)
|
||||
{
|
||||
std::size_t c = m_clusters[cid].label();
|
||||
|
||||
if (c != std::size_t(-1))
|
||||
color = m_label_colors[c];
|
||||
}
|
||||
|
||||
m_points->point_set()->set_color(*it, color);
|
||||
}
|
||||
}
|
||||
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
else if (index_color == 2) // training
|
||||
{
|
||||
int cid = m_cluster_id[*it];
|
||||
if (cid != -1)
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
float v = std::max (0.f, feature->value(cid) / max);
|
||||
m_red[*it] = (unsigned char)(ramp.r(v) * 255);
|
||||
m_green[*it] = (unsigned char)(ramp.g(v) * 255);
|
||||
m_blue[*it] = (unsigned char)(ramp.b(v) * 255);
|
||||
QColor color (0, 0, 0);
|
||||
int cid = m_cluster_id[*it];
|
||||
float div = 1;
|
||||
|
||||
if (cid != -1)
|
||||
{
|
||||
int c = m_clusters[cid].training();
|
||||
int c2 = m_clusters[cid].label();
|
||||
|
||||
if (c != -1)
|
||||
color = m_label_colors[std::size_t(c)];
|
||||
|
||||
if (c != c2)
|
||||
div = 2;
|
||||
}
|
||||
m_points->point_set()->set_color(*it, color.red() / div, color.green() / div, color.blue() / div);
|
||||
}
|
||||
}
|
||||
else if (index_color == 3) // clusters
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
int cid = m_cluster_id[*it];
|
||||
|
||||
if (cid != -1)
|
||||
{
|
||||
srand(cid);
|
||||
m_points->point_set()->set_color(*it, 64 + rand() % 192, 64 + rand() % 192, 64 + rand() % 192);
|
||||
}
|
||||
else
|
||||
{
|
||||
m_points->point_set()->set_color(*it);
|
||||
}
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
std::size_t corrected_index = index_color - 4;
|
||||
if (corrected_index < m_labels.size()) // Display label probabilities
|
||||
{
|
||||
if (m_label_probabilities.size() <= corrected_index ||
|
||||
m_label_probabilities[corrected_index].size() != m_clusters.size())
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
m_points->point_set()->set_color(*it);
|
||||
}
|
||||
else
|
||||
{
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
int cid = m_cluster_id[*it];
|
||||
if (cid != -1)
|
||||
{
|
||||
float v = std::max (0.f, std::min(1.f, m_label_probabilities[corrected_index][cid]));
|
||||
m_points->point_set()->set_color(*it, ramp.r(v) * 255, ramp.g(v) * 255, ramp.b(v) * 255);
|
||||
}
|
||||
else
|
||||
m_points->point_set()->set_color(*it);
|
||||
}
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
m_red[*it] = 0;
|
||||
m_green[*it] = 0;
|
||||
m_blue[*it] = 0;
|
||||
corrected_index -= m_labels.size();
|
||||
if (corrected_index >= m_features.size())
|
||||
{
|
||||
std::cerr << "Error: trying to access feature " << corrected_index << " out of " << m_features.size() << std::endl;
|
||||
return;
|
||||
}
|
||||
|
||||
        Feature_handle feature = m_features[corrected_index];

        float min = std::numeric_limits<float>::max();
        float max = -std::numeric_limits<float>::max();

        if (vmin != NULL && vmax != NULL
            && *vmin != std::numeric_limits<float>::infinity()
            && *vmax != std::numeric_limits<float>::infinity())
        {
          min = *vmin;
          max = *vmax;
        }
        else
        {
          for (Point_set::const_iterator it = m_points->point_set()->begin();
               it != m_points->point_set()->first_selected(); ++ it)
          {
            int cid = m_cluster_id[*it];
            if (cid != -1)
            {
              if (feature->value(cid) > max)
                max = feature->value(cid);
              if (feature->value(cid) < min)
                min = feature->value(cid);
            }
          }
        }

        for (Point_set::const_iterator it = m_points->point_set()->begin();
             it != m_points->point_set()->first_selected(); ++ it)
        {
          int cid = m_cluster_id[*it];
          if (cid != -1)
          {
            float v = (feature->value(cid) - min) / (max - min);
            if (v < 0.f) v = 0.f;
            if (v > 1.f) v = 1.f;

            m_points->point_set()->set_color(*it, ramp.r(v) * 255, ramp.g(v) * 255, ramp.b(v) * 255);
          }
          else
            m_points->point_set()->set_color(*it);
        }

        if (vmin != NULL && vmax != NULL)
        {
          *vmin = min;
          *vmax = max;
        }
|
||||
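        // Not part of the commit: the ramp value above is the feature value
        // rescaled to [0, 1] over the chosen or computed [min, max] range and
        // clamped, i.e. v = clamp((value - min) / (max - min), 0, 1); values
        // below min saturate at the pale-blue end of the rainbow ramp and
        // values above max at the dark-red end, matching the dialog texts.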
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
for (Point_set::const_iterator it = m_points->point_set()->first_selected();
|
||||
it != m_points->point_set()->end(); ++ it)
|
||||
{
|
||||
m_red[*it] = 255;
|
||||
m_green[*it] = 0;
|
||||
m_blue[*it] = 0;
|
||||
}
|
||||
|
||||
m_points->point_set()->set_color(*it, 255, 0, 0);
|
||||
}
|
||||
|
||||
int Cluster_classification::real_index_color() const
|
||||
|
|
@ -532,13 +623,18 @@ void Cluster_classification::reset_indices ()
|
|||
*(indices.begin() + i) = idx ++;
|
||||
}
|
||||
|
||||
void Cluster_classification::compute_features (std::size_t nb_scales)
|
||||
void Cluster_classification::compute_features (std::size_t nb_scales, float voxel_size)
|
||||
{
|
||||
CGAL_assertion (!(m_points->point_set()->empty()));
|
||||
|
||||
reset_indices();
|
||||
|
||||
std::cerr << "Computing pointwise features with " << nb_scales << " scale(s)" << std::endl;
|
||||
std::cerr << "Computing pointwise features with " << nb_scales << " scale(s) and ";
|
||||
if (voxel_size == -1)
|
||||
std::cerr << "automatic voxel size" << std::endl;
|
||||
else
|
||||
std::cerr << "voxel size = " << voxel_size << std::endl;
|
||||
|
||||
m_features.clear();
|
||||
|
||||
Point_set::Vector_map normal_map;
|
||||
|
|
@ -556,7 +652,7 @@ void Cluster_classification::compute_features (std::size_t nb_scales)
|
|||
|
||||
Feature_set pointwise_features;
|
||||
|
||||
Generator generator (*(m_points->point_set()), m_points->point_set()->point_map(), nb_scales);
|
||||
Generator generator (*(m_points->point_set()), m_points->point_set()->point_map(), nb_scales, voxel_size);
|
||||
|
||||
CGAL::Real_timer t;
|
||||
t.start();
|
||||
|
|
@ -620,11 +716,24 @@ void Cluster_classification::compute_features (std::size_t nb_scales)
|
|||
|
||||
delete m_sowf;
|
||||
m_sowf = new Sum_of_weighted_features (m_labels, m_features);
|
||||
delete m_ethz;
|
||||
m_ethz = new ETHZ_random_forest (m_labels, m_features);
|
||||
if (m_ethz != NULL)
|
||||
{
|
||||
delete m_ethz;
|
||||
m_ethz = NULL;
|
||||
}
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
delete m_random_forest;
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
if (m_random_forest != NULL)
|
||||
{
|
||||
delete m_random_forest;
|
||||
m_random_forest = NULL;
|
||||
}
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
if (m_neural_network != NULL)
|
||||
{
|
||||
delete m_neural_network;
|
||||
m_neural_network = NULL;
|
||||
}
|
||||
#endif
|
||||
|
||||
std::cerr << "Features = " << m_features.size() << std::endl;
|
||||
|
|
@ -696,8 +805,7 @@ void Cluster_classification::add_remaining_point_set_properties_as_features(Feat
|
|||
}
|
||||
}
|
||||
|
||||
void Cluster_classification::train(int classifier, unsigned int nb_trials,
|
||||
std::size_t num_trees, std::size_t max_depth)
|
||||
void Cluster_classification::train(int classifier, const QMultipleInputDialog& dialog)
|
||||
{
|
||||
if (m_features.size() == 0)
|
||||
{
|
||||
|
|
@ -706,6 +814,11 @@ void Cluster_classification::train(int classifier, unsigned int nb_trials,
|
|||
}
|
||||
reset_indices();
|
||||
|
||||
m_label_probabilities.clear();
|
||||
m_label_probabilities.resize (m_labels.size());
|
||||
for (std::size_t i = 0; i < m_label_probabilities.size(); ++ i)
|
||||
m_label_probabilities[i].resize (m_clusters.size(), -1);
|
||||
|
||||
std::vector<std::size_t> nb_label (m_labels.size(), 0);
|
||||
std::size_t nb_total = 0;
|
||||
|
||||
|
|
@ -730,30 +843,80 @@ void Cluster_classification::train(int classifier, unsigned int nb_trials,
|
|||
|
||||
if (classifier == 0)
|
||||
{
|
||||
m_sowf->train<Concurrency_tag>(training, nb_trials);
|
||||
m_sowf->train<Concurrency_tag>(training, dialog.get<QSpinBox>("trials")->value());
|
||||
CGAL::Classification::classify<Concurrency_tag> (m_clusters,
|
||||
m_labels, *m_sowf,
|
||||
indices);
|
||||
indices, m_label_probabilities);
|
||||
}
|
||||
else if (classifier == 1)
|
||||
{
|
||||
m_ethz->train(training, true, num_trees, max_depth);
|
||||
if (m_ethz != NULL)
|
||||
delete m_ethz;
|
||||
m_ethz = new ETHZ_random_forest (m_labels, m_features);
|
||||
m_ethz->train<Concurrency_tag>(training, true,
|
||||
dialog.get<QSpinBox>("num_trees")->value(),
|
||||
dialog.get<QSpinBox>("max_depth")->value());
|
||||
CGAL::Classification::classify<Concurrency_tag> (m_clusters,
|
||||
m_labels, *m_ethz,
|
||||
indices);
|
||||
indices, m_label_probabilities);
|
||||
}
|
||||
else
|
||||
else if (classifier == 2)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
if (m_random_forest != NULL)
|
||||
delete m_random_forest;
|
||||
m_random_forest = new Random_forest (m_labels, m_features,
|
||||
int(max_depth), 5, 15,
|
||||
int(num_trees));
|
||||
dialog.get<QSpinBox>("max_depth")->value(), 5, 15,
|
||||
dialog.get<QSpinBox>("num_trees")->value());
|
||||
m_random_forest->train (training);
|
||||
CGAL::Classification::classify<Concurrency_tag> (m_clusters,
|
||||
m_labels, *m_random_forest,
|
||||
indices);
|
||||
indices, m_label_probabilities);
|
||||
#endif
|
||||
}
|
||||
else if (classifier == 3)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
if (m_neural_network != NULL)
|
||||
{
|
||||
if (m_neural_network->initialized())
|
||||
{
|
||||
if (dialog.get<QCheckBox>("restart")->isChecked())
|
||||
{
|
||||
delete m_neural_network;
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
delete m_neural_network;
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
}
|
||||
}
|
||||
else
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
|
||||
      std::vector<std::size_t> hidden_layers;

      std::string hl_input = dialog.get<QLineEdit>("hidden_layers")->text().toStdString();
      if (hl_input != "")
      {
        std::istringstream iss(hl_input);
        int s;
        while (iss >> s)
          hidden_layers.push_back (std::size_t(s));
      }

      m_neural_network->train (training,
                               dialog.get<QCheckBox>("restart")->isChecked(),
                               dialog.get<QSpinBox>("trials")->value(),
                               dialog.get<QDoubleSpinBox>("learning_rate")->value(),
                               dialog.get<QSpinBox>("batch_size")->value(),
                               hidden_layers);
|
||||
|
||||
CGAL::Classification::classify<Concurrency_tag> (m_clusters,
|
||||
m_labels, *m_neural_network,
|
||||
indices, m_label_probabilities);
|
||||
#endif
|
||||
}
|
||||
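// Illustration only, not part of the commit: the "Hidden layer size(s)" field
// above is read as whitespace-separated integers, so typing "64 32 16" gives
// the TensorFlow classifier three hidden layers of sizes 64, 32 and 16.
// parse_hidden_layers is a hypothetical stand-alone copy of that parsing step.
#include <sstream>
#include <string>
#include <vector>
std::vector<std::size_t> parse_hidden_layers (const std::string& input)
{
  std::vector<std::size_t> sizes;
  std::istringstream iss (input);
  int s;
  while (iss >> s)
    sizes.push_back (std::size_t(s));
  return sizes;
}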
|
||||
|
|
@@ -778,11 +941,36 @@ bool Cluster_classification::run (int method, int classifier,
  if (classifier == 0)
    run (method, *m_sowf, subdivisions, smoothing);
  else if (classifier == 1)
  {
    if (m_ethz == NULL)
    {
      std::cerr << "Error: ETHZ Random Forest must be trained or have a configuration loaded first" << std::endl;
      return false;
    }
    run (method, *m_ethz, subdivisions, smoothing);
  }
  else if (classifier == 2)
  {
#ifdef CGAL_LINKED_WITH_OPENCV
  else
    if (m_random_forest == NULL)
    {
      std::cerr << "Error: OpenCV Random Forest must be trained or have a configuration loaded first" << std::endl;
      return false;
    }
    run (method, *m_random_forest, subdivisions, smoothing);
#endif
  }
  else if (classifier == 3)
  {
#ifdef CGAL_LINKED_WITH_TENSORFLOW
    if (m_neural_network == NULL)
    {
      std::cerr << "Error: TensorFlow Neural Network must be trained or have a configuration loaded first" << std::endl;
      return false;
    }
    run (method, *m_neural_network, subdivisions, smoothing);
#endif
  }

  return true;
}
|
||||
|
|
|
|||
|
|
@ -97,7 +97,7 @@ class Cluster_classification : public Item_classification_base
|
|||
xcenter + dx, ycenter + dy, zcenter + dz);
|
||||
}
|
||||
|
||||
void compute_features (std::size_t nb_scales);
|
||||
void compute_features (std::size_t nb_scales, float voxel_size);
|
||||
void add_remaining_point_set_properties_as_features(Feature_set& feature_set);
|
||||
|
||||
void select_random_region();
|
||||
|
|
@ -202,12 +202,11 @@ class Cluster_classification : public Item_classification_base
|
|||
if (m_index_color == 1 || m_index_color == 2)
|
||||
change_color (m_index_color);
|
||||
}
|
||||
void train(int classifier, unsigned int nb_trials,
|
||||
std::size_t num_trees, std::size_t max_depth);
|
||||
void train(int classifier, const QMultipleInputDialog& dialog);
|
||||
bool run (int method, int classifier, std::size_t subdivisions, double smoothing);
|
||||
|
||||
void update_color () { change_color (m_index_color); }
|
||||
void change_color (int index);
|
||||
void change_color (int index, float* vmin = NULL, float* vmax = NULL);
|
||||
CGAL::Three::Scene_item* generate_one_item (const char* name,
|
||||
int label) const
|
||||
{
|
||||
|
|
@ -255,8 +254,6 @@ class Cluster_classification : public Item_classification_base
|
|||
|
||||
}
|
||||
|
||||
bool write_output(std::ostream& out);
|
||||
|
||||
QColor add_new_label (const char* name)
|
||||
{
|
||||
QColor out = Item_classification_base::add_new_label (name);
|
||||
|
|
@ -287,13 +284,7 @@ class Cluster_classification : public Item_classification_base
|
|||
void fill_display_combo_box (QComboBox* cb, QComboBox* cb1) const
|
||||
{
|
||||
cb->addItem ("Clusters");
|
||||
for (std::size_t i = 0; i < m_features.size(); ++ i)
|
||||
{
|
||||
std::ostringstream oss;
|
||||
oss << "Feature " << m_features[i]->name();
|
||||
cb->addItem (oss.str().c_str());
|
||||
cb1->addItem (oss.str().c_str());
|
||||
}
|
||||
Item_classification_base::fill_display_combo_box(cb, cb1);
|
||||
}
|
||||
|
||||
int real_index_color() const;
|
||||
|
|
@ -391,17 +382,18 @@ class Cluster_classification : public Item_classification_base
|
|||
|
||||
std::vector<Cluster> m_clusters;
|
||||
|
||||
Point_set::Property_map<unsigned char> m_red;
|
||||
Point_set::Property_map<unsigned char> m_green;
|
||||
Point_set::Property_map<unsigned char> m_blue;
|
||||
Point_set::Property_map<Color> m_color;
|
||||
Point_set::Property_map<int> m_cluster_id;
|
||||
Point_set::Property_map<int> m_training;
|
||||
Point_set::Property_map<int> m_classif;
|
||||
|
||||
std::vector<std::vector<float> > m_label_probabilities;
|
||||
|
||||
int m_index_color;
|
||||
|
||||
boost::shared_ptr<Local_eigen_analysis> m_eigen;
|
||||
|
||||
bool m_input_is_las;
|
||||
|
||||
}; // end class Cluster_classification
|
||||
|
||||
|
|
|
|||
|
|
@ -4,14 +4,20 @@
|
|||
#include <CGAL/Three/Scene_item.h>
|
||||
|
||||
#include <QComboBox>
|
||||
#include <QLineEdit>
|
||||
#include <QSpinBox>
|
||||
#include <QMultipleInputDialog.h>
|
||||
|
||||
#include <CGAL/Classification/Feature_set.h>
|
||||
#include <CGAL/Classification/Label_set.h>
|
||||
#include <CGAL/Classification/Sum_of_weighted_features_classifier.h>
|
||||
#include <CGAL/Classification/ETHZ_random_forest_classifier.h>
|
||||
#include <CGAL/Classification/ETHZ/Random_forest_classifier.h>
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
#include <CGAL/Classification/OpenCV_random_forest_classifier.h>
|
||||
#include <CGAL/Classification/OpenCV/Random_forest_classifier.h>
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
#include <CGAL/Classification/TensorFlow/Neural_network_classifier.h>
|
||||
#endif
|
||||
|
||||
class Item_classification_base
|
||||
|
|
@ -22,10 +28,13 @@ public:
|
|||
typedef CGAL::Classification::Label_set Label_set;
|
||||
typedef CGAL::Classification::Feature_set Feature_set;
|
||||
typedef CGAL::Classification::Sum_of_weighted_features_classifier Sum_of_weighted_features;
|
||||
typedef CGAL::Classification::ETHZ_random_forest_classifier ETHZ_random_forest;
|
||||
typedef CGAL::Classification::ETHZ::Random_forest_classifier ETHZ_random_forest;
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
typedef CGAL::Classification::OpenCV_random_forest_classifier Random_forest;
|
||||
typedef CGAL::Classification::OpenCV::Random_forest_classifier Random_forest;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
typedef CGAL::Classification::TensorFlow::Neural_network_classifier<> Neural_network;
|
||||
#endif
|
||||
|
||||
public:
|
||||
|
|
@ -38,8 +47,10 @@ public:
|
|||
|
||||
virtual CGAL::Bbox_3 bbox() { return item()->bbox(); }
|
||||
|
||||
virtual void compute_features (std::size_t nb_scales) = 0;
|
||||
virtual void compute_features (std::size_t nb_scales, float voxel_size) = 0;
|
||||
|
||||
virtual std::string feature_statistics () const { return std::string(); }
|
||||
|
||||
virtual void add_selection_to_training_set (std::size_t label) = 0;
|
||||
virtual void reset_training_set (std::size_t label) = 0;
|
||||
virtual void reset_training_set_of_selection() = 0;
|
||||
|
|
@ -47,19 +58,16 @@ public:
|
|||
|
||||
virtual void select_random_region() = 0;
|
||||
virtual void validate_selection () = 0;
|
||||
virtual void train(int classifier, unsigned int nb_trials,
|
||||
std::size_t num_trees, std::size_t max_depth) = 0;
|
||||
virtual void train(int classifier, const QMultipleInputDialog&) = 0;
|
||||
virtual bool run (int method, int classifier, std::size_t subdivisions, double smoothing) = 0;
|
||||
|
||||
virtual void update_color () = 0;
|
||||
virtual void change_color (int index) = 0;
|
||||
virtual void change_color (int index, float* vmin = NULL, float* vmax = NULL) = 0;
|
||||
virtual CGAL::Three::Scene_item* generate_one_item (const char* name,
|
||||
int label) const = 0;
|
||||
virtual void generate_one_item_per_label(std::vector<CGAL::Three::Scene_item*>& items,
|
||||
const char* name) const = 0;
|
||||
|
||||
virtual bool write_output(std::ostream& out) = 0;
|
||||
|
||||
bool features_computed() const { return (m_features.size() != 0); }
|
||||
std::size_t number_of_features() const { return m_features.size(); }
|
||||
Feature_handle feature(std::size_t i) { return m_features[i]; }
|
||||
|
|
@ -84,6 +92,11 @@ public:
|
|||
delete m_random_forest;
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
#endif
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
delete m_neural_network;
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
#endif
|
||||
|
||||
return m_label_colors.back();
|
||||
}
|
||||
|
|
@ -102,7 +115,13 @@ public:
|
|||
delete m_random_forest;
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
#endif
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
delete m_neural_network;
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
#endif
|
||||
}
|
||||
|
||||
virtual void clear_labels ()
|
||||
{
|
||||
m_labels.clear();
|
||||
|
|
@ -118,12 +137,24 @@ public:
|
|||
delete m_random_forest;
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
#endif
|
||||
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
delete m_neural_network;
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
#endif
|
||||
}
|
||||
std::size_t number_of_labels() const { return m_labels.size(); }
|
||||
Label_handle label(std::size_t i) { return m_labels[i]; }
|
||||
|
||||
virtual void fill_display_combo_box (QComboBox* cb, QComboBox* cb1) const
|
||||
{
|
||||
for (std::size_t i = 0; i < m_labels.size(); ++ i)
|
||||
{
|
||||
std::ostringstream oss;
|
||||
oss << "Label " << m_labels[i]->name();
|
||||
cb->addItem (oss.str().c_str());
|
||||
cb1->addItem (oss.str().c_str());
|
||||
}
|
||||
for (std::size_t i = 0; i < m_features.size(); ++ i)
|
||||
{
|
||||
std::ostringstream oss;
|
||||
|
|
@ -151,10 +182,17 @@ public:
|
|||
std::ofstream f (filename, std::ios_base::out | std::ios_base::binary);
|
||||
m_ethz->save_configuration (f);
|
||||
}
|
||||
else
|
||||
else if (classifier == 2)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
m_random_forest->save_configuration (filename);
|
||||
#endif
|
||||
}
|
||||
else if (classifier == 3)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
std::ofstream f (filename);
|
||||
m_neural_network->save_configuration (f);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
|
@ -173,13 +211,24 @@ public:
|
|||
}
|
||||
else if (classifier == 1)
|
||||
{
|
||||
if (m_ethz == NULL)
|
||||
m_ethz = new ETHZ_random_forest (m_labels, m_features);
|
||||
std::ifstream f (filename, std::ios_base::in | std::ios_base::binary);
|
||||
m_ethz->load_configuration (f);
|
||||
}
|
||||
else
|
||||
else if (classifier == 2)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
m_random_forest->load_configuration (filename);
|
||||
#endif
|
||||
}
|
||||
else if (classifier == 3)
|
||||
{
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
if (m_neural_network == NULL)
|
||||
m_neural_network = new Neural_network (m_labels, m_features);
|
||||
std::ifstream f (filename);
|
||||
m_neural_network->load_configuration (f, true);
|
||||
#endif
|
||||
}
|
||||
}
|
||||
|
|
@ -189,6 +238,10 @@ public:
|
|||
{
|
||||
m_label_colors[position] = color;
|
||||
}
|
||||
void change_label_name (std::size_t position, const std::string& name)
|
||||
{
|
||||
m_labels[position]->set_name (name);
|
||||
}
|
||||
|
||||
QColor get_new_label_color (const std::string& name)
|
||||
{
|
||||
|
|
@ -248,6 +301,9 @@ protected:
|
|||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
Random_forest* m_random_forest;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
Neural_network* m_neural_network;
|
||||
#endif
|
||||
|
||||
};
|
||||
|
||||
|
|
|
|||
|
|
@ -17,8 +17,9 @@
|
|||
#include <boost/array.hpp>
|
||||
|
||||
Point_set_item_classification::Point_set_item_classification(Scene_points_with_normal_item* points)
|
||||
: m_points (points),
|
||||
m_generator (NULL)
|
||||
: m_points (points)
|
||||
, m_generator (NULL)
|
||||
, m_input_is_las (false)
|
||||
{
|
||||
m_index_color = 1;
|
||||
|
||||
|
|
@ -58,6 +59,7 @@ Point_set_item_classification::Point_set_item_classification(Scene_points_with_n
|
|||
boost::tie (las_classif, las_found) = m_points->point_set()->property_map<unsigned char>("classification");
|
||||
if (las_found)
|
||||
{
|
||||
m_input_is_las = true;
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->first_selected(); ++ it)
|
||||
{
|
||||
|
|
@ -216,9 +218,12 @@ Point_set_item_classification::Point_set_item_classification(Scene_points_with_n
|
|||
update_comments_of_point_set_item();
|
||||
|
||||
m_sowf = new Sum_of_weighted_features (m_labels, m_features);
|
||||
m_ethz = new ETHZ_random_forest (m_labels, m_features);
|
||||
m_ethz = NULL;
|
||||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
m_random_forest = new Random_forest (m_labels, m_features);
|
||||
m_random_forest = NULL;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
m_neural_network = NULL;
|
||||
#endif
|
||||
}
|
||||
|
||||
|
|
@ -232,14 +237,89 @@ Point_set_item_classification::~Point_set_item_classification()
|
|||
#ifdef CGAL_LINKED_WITH_OPENCV
|
||||
if (m_random_forest != NULL)
|
||||
delete m_random_forest;
|
||||
#endif
|
||||
#ifdef CGAL_LINKED_WITH_TENSORFLOW
|
||||
if (m_neural_network != NULL)
|
||||
delete m_neural_network;
|
||||
#endif
|
||||
if (m_generator != NULL)
|
||||
delete m_generator;
|
||||
if (m_points != NULL)
|
||||
{
|
||||
// For LAS saving, convert classification info in the LAS standard
|
||||
if (m_input_is_las)
|
||||
{
|
||||
reset_colors();
|
||||
erase_item();
|
||||
Point_set::Property_map<unsigned char> las_classif
|
||||
= m_points->point_set()->add_property_map<unsigned char>("classification", 0).first;
|
||||
|
||||
std::vector<unsigned char> label_indices;
|
||||
|
||||
unsigned char custom = 19;
|
||||
for (std::size_t i = 0; i < m_labels.size(); ++ i)
|
||||
{
|
||||
if (m_labels[i]->name() == "ground")
|
||||
label_indices.push_back (2);
|
||||
else if (m_labels[i]->name() == "low_veget")
|
||||
label_indices.push_back (3);
|
||||
else if (m_labels[i]->name() == "med_veget" || m_labels[i]->name() == "vegetation")
|
||||
label_indices.push_back (4);
|
||||
else if (m_labels[i]->name() == "high_veget")
|
||||
label_indices.push_back (5);
|
||||
else if (m_labels[i]->name() == "building" || m_labels[i]->name() == "roof")
|
||||
label_indices.push_back (6);
|
||||
else if (m_labels[i]->name() == "noise")
|
||||
label_indices.push_back (7);
|
||||
else if (m_labels[i]->name() == "reserved" || m_labels[i]->name() == "facade")
|
||||
label_indices.push_back (8);
|
||||
else if (m_labels[i]->name() == "water")
|
||||
label_indices.push_back (9);
|
||||
else if (m_labels[i]->name() == "rail")
|
||||
label_indices.push_back (10);
|
||||
else if (m_labels[i]->name() == "road_surface")
|
||||
label_indices.push_back (11);
|
||||
else if (m_labels[i]->name() == "reserved_2")
|
||||
label_indices.push_back (12);
|
||||
else if (m_labels[i]->name() == "wire_guard")
|
||||
label_indices.push_back (13);
|
||||
else if (m_labels[i]->name() == "wire_conduct")
|
||||
label_indices.push_back (14);
|
||||
else if (m_labels[i]->name() == "trans_tower")
|
||||
label_indices.push_back (15);
|
||||
else if (m_labels[i]->name() == "wire_connect")
|
||||
label_indices.push_back (16);
|
||||
else if (m_labels[i]->name() == "bridge_deck")
|
||||
label_indices.push_back (17);
|
||||
else if (m_labels[i]->name() == "high_noise")
|
||||
label_indices.push_back (18);
|
||||
else
|
||||
label_indices.push_back (custom ++);
|
||||
}
|
||||
|
||||
for (Point_set::const_iterator it = m_points->point_set()->begin();
|
||||
it != m_points->point_set()->end(); ++ it)
|
||||
{
|
||||
int c = m_classif[*it];
|
||||
unsigned char lc = 1; // unclassified in LAS standard
|
||||
if (c != -1)
|
||||
lc = label_indices[std::size_t(c)];
|
||||
|
||||
las_classif[*it] = lc;
|
||||
|
||||
int t = m_training[*it];
|
||||
unsigned char lt = 1; // unclassified in LAS standard
|
||||
if (t != -1)
|
||||
lt = label_indices[std::size_t(t)];
|
||||
|
||||
m_training[*it] = int(lt);
|
||||
}
|
||||
|
||||
m_points->point_set()->remove_property_map (m_classif);
|
||||
}
|
||||
|
||||
reset_colors();
|
||||
erase_item();
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
|
||||
|
|
@@ -256,18 +336,8 @@ void Point_set_item_classification::backup_existing_colors_and_add_new()

m_points->point_set()->remove_colors();
}

m_red = m_points->point_set()->add_property_map<unsigned char>("red").first;
m_green = m_points->point_set()->add_property_map<unsigned char>("green").first;
m_blue = m_points->point_set()->add_property_map<unsigned char>("blue").first;
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
m_red[*it] = 0;
m_green[*it] = 0;
m_blue[*it] = 0;
}
m_points->point_set()->check_colors();

m_points->point_set()->add_colors();
}

void Point_set_item_classification::reset_colors()
@@ -278,40 +348,13 @@ void Point_set_item_classification::reset_colors()
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
m_red[*it] = m_color[*it][0];
m_green[*it] = m_color[*it][1];
m_blue[*it] = m_color[*it][2];
}
m_points->point_set()->set_color(*it, m_color[*it]);

m_points->point_set()->remove_property_map(m_color);
}
}

// Write point set to .PLY file
bool Point_set_item_classification::write_output(std::ostream& stream)
{
if (m_features.size() == 0)
return false;

reset_indices();

stream.precision (std::numeric_limits<double>::digits10 + 2);

// std::vector<Color> colors;
// for (std::size_t i = 0; i < m_labels.size(); ++ i)
// {
// Color c = {{ (unsigned char)(m_labels[i].second.red()),
// (unsigned char)(m_labels[i].second.green()),
// (unsigned char)(m_labels[i].second.blue()) }};
// colors.push_back (c);
// }

// m_psc->write_classification_to_ply (stream);
return true;
}


void Point_set_item_classification::change_color (int index)
void Point_set_item_classification::change_color (int index, float* vmin, float* vmax)
{
m_index_color = index;

@@ -321,92 +364,129 @@ void Point_set_item_classification::change_color (int index)
static Color_ramp ramp;
ramp.build_rainbow();
reset_indices();

if (index_color == -1) // item color
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
m_red[*it] = 0;
m_green[*it] = 0;
m_blue[*it] = 0;
}
}
else if (index_color == 0) // real colors
{

for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
m_red[*it] = m_color[*it][0];
m_green[*it] = m_color[*it][1];
m_blue[*it] = m_color[*it][2];
}
}
else if (index_color == 1) // classif
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
QColor color (0, 0, 0);
std::size_t c = m_classif[*it];

if (c != std::size_t(-1))
color = m_label_colors[c];

m_red[*it] = color.red();
m_green[*it] = color.green();
m_blue[*it] = color.blue();
}
}
else if (index_color == 2) // training
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
QColor color (0, 0, 0);
int c = m_training[*it];
int c2 = m_classif[*it];

if (c != -1)
color = m_label_colors[std::size_t(c)];

float div = 1;
if (c != c2)
div = 2;

m_red[*it] = (color.red() / div);
m_green[*it] = (color.green() / div);
m_blue[*it] = (color.blue() / div);
}
}
m_points->point_set()->remove_colors();
else
{
if (!m_points->point_set()->has_colors())
m_points->point_set()->add_colors();

if (index_color == 0) // real colors
{
Feature_handle feature = m_features[index_color - 3];

float max = 0.;
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
if (feature->value(*it) > max)
max = feature->value(*it);

for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
float v = std::max (0.f, feature->value(*it) / max);
m_red[*it] = (unsigned char)(ramp.r(v) * 255);
m_green[*it] = (unsigned char)(ramp.g(v) * 255);
m_blue[*it] = (unsigned char)(ramp.b(v) * 255);
}
m_points->point_set()->set_color(*it, m_color[*it]);
}
else if (index_color == 1) // classif
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
QColor color (0, 0, 0);
std::size_t c = m_classif[*it];

if (c != std::size_t(-1))
color = m_label_colors[c];

m_points->point_set()->set_color(*it, color);
}
}
else if (index_color == 2) // training
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
QColor color (0, 0, 0);
int c = m_training[*it];
int c2 = m_classif[*it];

if (c != -1)
color = m_label_colors[std::size_t(c)];

float div = 1;
if (c != c2)
div = 2;

m_points->point_set()->set_color(*it, color.red() / div, color.green() / div, color.blue() / div);
}
}
else
{
std::size_t corrected_index = index_color - 3;
if (corrected_index < m_labels.size()) // Display label probabilities
{
if (m_label_probabilities.size() <= corrected_index ||
m_label_probabilities[corrected_index].size() != m_points->point_set()->size())
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
m_points->point_set()->set_color(*it);
}
else
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
float v = std::max (0.f, std::min(1.f, m_label_probabilities[corrected_index][*it]));
m_points->point_set()->set_color(*it, ramp.r(v) * 255, ramp.g(v) * 255, ramp.b(v) * 255);
}
}
}
else
{
corrected_index -= m_labels.size();
if (corrected_index >= m_features.size())
{
std::cerr << "Error: trying to access feature " << corrected_index << " out of " << m_features.size() << std::endl;
return;
}
Feature_handle feature = m_features[corrected_index];

float min = std::numeric_limits<float>::max();
float max = -std::numeric_limits<float>::max();

if (vmin != NULL && vmax != NULL
&& *vmin != std::numeric_limits<float>::infinity()
&& *vmax != std::numeric_limits<float>::infinity())
{
min = *vmin;
max = *vmax;
}
else
{
for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->end(); ++ it)
{
float v = feature->value(*it);
min = (std::min) (min, v);
max = (std::max) (max, v);
}
}

for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
{
float v = (feature->value(*it) - min) / (max - min);
if (v < 0.f) v = 0.f;
if (v > 1.f) v = 1.f;

m_points->point_set()->set_color(*it, ramp.r(v) * 255, ramp.g(v) * 255, ramp.b(v) * 255);
}

if (vmin != NULL && vmax != NULL)
{
*vmin = min;
*vmax = max;
}
}
}
}

for (Point_set::const_iterator it = m_points->point_set()->first_selected();
it != m_points->point_set()->end(); ++ it)
{
m_red[*it] = 255;
m_green[*it] = 0;
m_blue[*it] = 0;
}

m_points->point_set()->set_color(*it, 255, 0, 0);
}

int Point_set_item_classification::real_index_color() const
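In the feature-display branch added above, each feature value is rescaled into [0, 1] using either the cached [*vmin, *vmax] range or a freshly computed min/max over the whole point set, clamped, and then fed to the rainbow Color_ramp. A minimal stand-alone sketch of just that normalization step (the helper name is hypothetical and, like the patch, it assumes max > min):

#include <algorithm>

// [Sketch, not part of the patch] Rescale a raw feature value into [0, 1]
// for display, mirroring the (value - min) / (max - min) plus clamp done
// in change_color() above.
inline float normalized_feature_value (float value, float min, float max)
{
  float v = (value - min) / (max - min);
  return (std::min) (1.f, (std::max) (0.f, v));
}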
@@ -430,7 +510,7 @@ void Point_set_item_classification::reset_indices ()
*(indices.begin() + i) = idx ++;
}

void Point_set_item_classification::compute_features (std::size_t nb_scales)
void Point_set_item_classification::compute_features (std::size_t nb_scales, float voxel_size)
{
CGAL_assertion (!(m_points->point_set()->empty()));

@@ -439,7 +519,12 @@ void Point_set_item_classification::compute_features (std::size_t nb_scales)

reset_indices();

std::cerr << "Computing features with " << nb_scales << " scale(s)" << std::endl;
std::cerr << "Computing features with " << nb_scales << " scale(s) and ";
if (voxel_size == -1)
std::cerr << "automatic voxel size" << std::endl;
else
std::cerr << "voxel size = " << voxel_size << std::endl;

m_features.clear();

Point_set::Vector_map normal_map;
@@ -455,7 +540,7 @@ void Point_set_item_classification::compute_features (std::size_t nb_scales)
if (!echo)
boost::tie (echo_map, echo) = m_points->point_set()->template property_map<boost::uint8_t>("number_of_returns");

m_generator = new Generator (*(m_points->point_set()), m_points->point_set()->point_map(), nb_scales);
m_generator = new Generator (*(m_points->point_set()), m_points->point_set()->point_map(), nb_scales, voxel_size);

CGAL::Real_timer t;
t.start();
@@ -480,12 +565,24 @@ void Point_set_item_classification::compute_features (std::size_t nb_scales)

delete m_sowf;
m_sowf = new Sum_of_weighted_features (m_labels, m_features);
delete m_ethz;
m_ethz = new ETHZ_random_forest (m_labels, m_features);

if (m_ethz != NULL)
{
delete m_ethz;
m_ethz = NULL;
}
#ifdef CGAL_LINKED_WITH_OPENCV
delete m_random_forest;
m_random_forest = new Random_forest (m_labels, m_features);
if (m_random_forest != NULL)
{
delete m_random_forest;
m_random_forest = NULL;
}
#endif
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network != NULL)
{
delete m_neural_network;
m_neural_network = NULL;
}
#endif

t.stop();
@@ -598,8 +695,7 @@ void Point_set_item_classification::add_remaining_point_set_properties_as_features()
}
}

void Point_set_item_classification::train(int classifier, unsigned int nb_trials,
std::size_t num_trees, std::size_t max_depth)
void Point_set_item_classification::train(int classifier, const QMultipleInputDialog& dialog)
{
if (m_features.size() == 0)
{
@@ -608,6 +704,11 @@ void Point_set_item_classification::train(int classifier, unsigned int nb_trials
}
reset_indices();

m_label_probabilities.clear();
m_label_probabilities.resize (m_labels.size());
for (std::size_t i = 0; i < m_label_probabilities.size(); ++ i)
m_label_probabilities[i].resize (m_points->point_set()->size(), -1);

std::vector<int> training (m_points->point_set()->size(), -1);
std::vector<int> indices (m_points->point_set()->size(), -1);

@@ -632,32 +733,83 @@ void Point_set_item_classification::train(int classifier, unsigned int nb_trials

if (classifier == 0)
{
m_sowf->train<Concurrency_tag>(training, nb_trials);
m_sowf->train<Concurrency_tag>(training, dialog.get<QSpinBox>("trials")->value());
CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, *m_sowf,
indices);
indices, m_label_probabilities);
}
else if (classifier == 1)
{
m_ethz->train(training, true, num_trees, max_depth);
if (m_ethz != NULL)
delete m_ethz;
m_ethz = new ETHZ_random_forest (m_labels, m_features);
m_ethz->train<Concurrency_tag>(training, true,
dialog.get<QSpinBox>("num_trees")->value(),
dialog.get<QSpinBox>("max_depth")->value());
CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, *m_ethz,
indices);
indices, m_label_probabilities);
}
else
{
else if (classifier == 2)
{
#ifdef CGAL_LINKED_WITH_OPENCV
if (m_random_forest != NULL)
delete m_random_forest;
m_random_forest = new Random_forest (m_labels, m_features,
int(max_depth), 5, 15,
int(num_trees));
m_random_forest->train (training);
CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, *m_random_forest,
indices);
if (m_random_forest != NULL)
delete m_random_forest;
m_random_forest = new Random_forest (m_labels, m_features,
dialog.get<QSpinBox>("max_depth")->value(), 5, 15,
dialog.get<QSpinBox>("num_trees")->value());
m_random_forest->train (training);
CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, *m_random_forest,
indices, m_label_probabilities);
#endif
}
else if (classifier == 3)
{
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network != NULL)
{
if (m_neural_network->initialized())
{
if (dialog.get<QCheckBox>("restart")->isChecked())
{
delete m_neural_network;
m_neural_network = new Neural_network (m_labels, m_features);
}
}
else
{
delete m_neural_network;
m_neural_network = new Neural_network (m_labels, m_features);
}
}
else
m_neural_network = new Neural_network (m_labels, m_features);

std::vector<std::size_t> hidden_layers;

std::string hl_input = dialog.get<QLineEdit>("hidden_layers")->text().toStdString();
if (hl_input != "")
{
std::istringstream iss(hl_input);
int s;
while (iss >> s)
hidden_layers.push_back (std::size_t(s));
}

m_neural_network->train (training,
dialog.get<QCheckBox>("restart")->isChecked(),
dialog.get<QSpinBox>("trials")->value(),
dialog.get<QDoubleSpinBox>("learning_rate")->value(),
dialog.get<QSpinBox>("batch_size")->value(),
hidden_layers);

CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, *m_neural_network,
indices, m_label_probabilities);
#endif
}

for (Point_set::const_iterator it = m_points->point_set()->begin();
it != m_points->point_set()->first_selected(); ++ it)
m_classif[*it] = indices[*it];
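For the TensorFlow classifier, the hidden layer sizes are read back from the dialog as a single whitespace-separated string under the "hidden_layers" key and tokenized with an std::istringstream. A small sketch of that parsing step in isolation (helper name hypothetical, same "sizes separated by spaces" convention as above):

#include <sstream>
#include <string>
#include <vector>

// [Sketch, not part of the patch] Turn e.g. "64 32 16" into {64, 32, 16},
// exactly like the istringstream loop in train() above; an empty string
// yields an empty vector.
inline std::vector<std::size_t> parse_hidden_layers (const std::string& input)
{
  std::vector<std::size_t> out;
  std::istringstream iss (input);
  int s;
  while (iss >> s)
    out.push_back (std::size_t(s));
  return out;
}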
@@ -680,11 +832,36 @@ bool Point_set_item_classification::run (int method, int classifier,
if (classifier == 0)
run (method, *m_sowf, subdivisions, smoothing);
else if (classifier == 1)
{
if (m_ethz == NULL)
{
std::cerr << "Error: ETHZ Random Forest must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_ethz, subdivisions, smoothing);
}
else if (classifier == 2)
{
#ifdef CGAL_LINKED_WITH_OPENCV
else
if (m_random_forest == NULL)
{
std::cerr << "Error: OpenCV Random Forest must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_random_forest, subdivisions, smoothing);
#endif
}
else if (classifier == 3)
{
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network == NULL)
{
std::cerr << "Error: TensorFlow Neural Network must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_neural_network, subdivisions, smoothing);
#endif
}

return true;
}

@@ -3,6 +3,7 @@

//#define CGAL_DO_NOT_USE_BOYKOV_KOLMOGOROV_MAXFLOW_SOFTWARE
#define CGAL_CLASSIFICATION_VERBOSE
#define CGAL_CLASSTRAINING_VERBOSE

#include <CGAL/Three/Scene_item.h>

@@ -116,7 +117,35 @@ class Point_set_item_classification : public Item_classification_base
xcenter + dx, ycenter + dy, zcenter + dz);
}

void compute_features (std::size_t nb_scales);
void compute_features (std::size_t nb_scales, float voxel_size);

std::string feature_statistics() const
{
std::ostringstream oss;

for (std::size_t i = 0; i < m_features.size(); ++ i)
{
float vmin = std::numeric_limits<float>::max();
float vmax = -std::numeric_limits<float>::max();
float vmean = 0.f;
std::size_t nb = 0;

for (Point_set::const_iterator it = m_points->point_set()->begin_or_selection_begin();
it != m_points->point_set()->end(); ++ it)
{
float v = m_features[i]->value(std::size_t(it - m_points->point_set()->begin()));
vmin = (std::min) (vmin, v);
vmax = (std::max) (vmax, v);
vmean += v;
++ nb;
}

oss << m_features[i]->name() << " in [ " << vmin << " ; " << vmax << " ], mean = " << vmean / nb << std::endl;
}

return oss.str();
}

void add_remaining_point_set_properties_as_features();

void select_random_region();
@@ -129,8 +158,12 @@ class Point_set_item_classification : public Item_classification_base
Pmap pmap;
boost::tie (pmap, okay) = m_points->point_set()->template property_map<Type>(name.c_str());
if (okay)
{
std::cerr << "Adding property<" << CGAL::demangle(typeid(Type).name()) << ">("
<< name << ") as feature" << std::endl;
m_features.template add<CGAL::Classification::Feature::Simple_feature <Point_set, Pmap> >
(*(m_points->point_set()), pmap, name.c_str());
}

return okay;
}
@@ -186,12 +219,11 @@ class Point_set_item_classification : public Item_classification_base
if (m_index_color == 1 || m_index_color == 2)
change_color (m_index_color);
}
void train(int classifier, unsigned int nb_trials,
std::size_t num_trees, std::size_t max_depth);
void train(int classifier, const QMultipleInputDialog& dialog);
bool run (int method, int classifier, std::size_t subdivisions, double smoothing);

void update_color () { change_color (m_index_color); }
void change_color (int index);
void change_color (int index, float* vmin = NULL, float* vmax = NULL);
CGAL::Three::Scene_item* generate_one_item (const char* name,
int label) const
{
@@ -231,8 +263,6 @@ class Point_set_item_classification : public Item_classification_base
}
}

bool write_output(std::ostream& out);

QColor add_new_label (const char* name)
{
QColor out = Item_classification_base::add_new_label (name);
@@ -301,10 +331,11 @@ class Point_set_item_classification : public Item_classification_base
{
std::vector<int> indices (m_points->point_set()->size(), -1);

m_label_probabilities.clear();
if (method == 0)
CGAL::Classification::classify<Concurrency_tag> (*(m_points->point_set()),
m_labels, classifier,
indices);
indices, m_label_probabilities);
else if (method == 1)
{
if (m_clusters.empty()) // Use real local smoothing
@@ -367,9 +398,8 @@ class Point_set_item_classification : public Item_classification_base

std::vector<Cluster> m_clusters;

Point_set::Property_map<unsigned char> m_red;
Point_set::Property_map<unsigned char> m_green;
Point_set::Property_map<unsigned char> m_blue;
std::vector<std::vector<float> > m_label_probabilities;

Point_set::Property_map<Color> m_color;
Point_set::Property_map<int> m_training;
Point_set::Property_map<int> m_classif;
@@ -377,6 +407,8 @@ class Point_set_item_classification : public Item_classification_base
Generator* m_generator;

int m_index_color;

bool m_input_is_las;

}; // end class Point_set_item_classification

@@ -6,6 +6,8 @@

#include <CGAL/Three/Viewer_interface.h>

#include <QLineEdit>

#include <set>
#include <stack>
#include <algorithm>
@@ -31,9 +33,12 @@ Surface_mesh_item_classification::Surface_mesh_item_classification(Scene_surface
m_label_colors.push_back (this->get_new_label_color (m_labels[i]->name()));

m_sowf = new Sum_of_weighted_features (m_labels, m_features);
m_ethz = new ETHZ_random_forest (m_labels, m_features);
m_ethz = NULL;
#ifdef CGAL_LINKED_WITH_OPENCV
m_random_forest = new Random_forest (m_labels, m_features);
m_random_forest = NULL;
#endif
#ifdef CGAL_LINKED_WITH_TENSORFLOW
m_neural_network = NULL;
#endif
}

@@ -47,6 +52,10 @@ Surface_mesh_item_classification::~Surface_mesh_item_classification()
#ifdef CGAL_LINKED_WITH_OPENCV
if (m_random_forest != NULL)
delete m_random_forest;
#endif
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network != NULL)
delete m_neural_network;
#endif
if (m_generator != NULL)
delete m_generator;
@@ -71,14 +80,7 @@ void Surface_mesh_item_classification::backup_existing_colors_and_add_new()
m_mesh->polyhedron()->add_property_map<face_descriptor, CGAL::Color>("f:color", CGAL::Color(128,128,128)).first;
}

bool Surface_mesh_item_classification::write_output(std::ostream& )
{
// TODO
return true;
}


void Surface_mesh_item_classification::change_color (int index)
void Surface_mesh_item_classification::change_color (int index, float* vmin, float* vmax)
{
m_index_color = index;
int index_color = index;
@@ -132,28 +134,90 @@ void Surface_mesh_item_classification::change_color (int index)
}
else
{
Feature_handle feature = m_features[index_color - 3];

float max = 0.;
BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
std::size_t corrected_index = index_color - 3;
if (corrected_index < m_labels.size()) // Display label probabilities
{
if (feature->value(fd) > max)
max = feature->value(fd);
if (m_label_probabilities.size() <= corrected_index ||
m_label_probabilities[corrected_index].size() != num_faces(*(m_mesh->polyhedron())))
{
BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
{
m_color[fd] = CGAL::Color((unsigned char)(128),
(unsigned char)(128),
(unsigned char)(128));
}
}
else
{
BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
{
float v = std::max (0.f, std::min(1.f, m_label_probabilities[corrected_index][fd]));
m_color[fd] = CGAL::Color((unsigned char)(ramp.r(v) * 255),
(unsigned char)(ramp.g(v) * 255),
(unsigned char)(ramp.b(v) * 255));
}
}
}

BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
else
{
float v = std::max (0.f, feature->value(fd) / max);
m_color[fd] = CGAL::Color((unsigned char)(ramp.r(v) * 255),
(unsigned char)(ramp.g(v) * 255),
(unsigned char)(ramp.b(v) * 255));
corrected_index -= m_labels.size();
if (corrected_index >= m_features.size())
{
std::cerr << "Error: trying to access feature " << corrected_index << " out of " << m_features.size() << std::endl;
return;
}

Feature_handle feature = m_features[corrected_index];

float min = std::numeric_limits<float>::max();
float max = -std::numeric_limits<float>::max();

if (vmin != NULL && vmax != NULL
&& *vmin != std::numeric_limits<float>::infinity()
&& *vmax != std::numeric_limits<float>::infinity())
{
min = *vmin;
max = *vmax;
}
else
{
BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
{
if (feature->value(fd) > max)
max = feature->value(fd);
if (feature->value(fd) < min)
min = feature->value(fd);
}
}

BOOST_FOREACH(face_descriptor fd, faces(*(m_mesh->polyhedron())))
{
float v = (feature->value(fd) - min) / (max - min);
if (v < 0.f) v = 0.f;
if (v > 1.f) v = 1.f;

m_color[fd] = CGAL::Color((unsigned char)(ramp.r(v) * 255),
(unsigned char)(ramp.g(v) * 255),
(unsigned char)(ramp.b(v) * 255));
}

if (vmin != NULL && vmax != NULL)
{
*vmin = min;
*vmax = max;
}
}
}
}

void Surface_mesh_item_classification::compute_features (std::size_t nb_scales)
void Surface_mesh_item_classification::compute_features (std::size_t nb_scales, float voxel_size)
{
std::cerr << "Computing features with " << nb_scales << " scale(s)" << std::endl;
std::cerr << "Computing features with " << nb_scales << " scale(s) and ";
if (voxel_size == -1)
std::cerr << "automatic voxel size" << std::endl;
else
std::cerr << "voxel size = " << voxel_size << std::endl;

m_features.clear();

if (m_generator != NULL)
@@ -161,7 +225,7 @@ void Surface_mesh_item_classification::compute_features (std::size_t nb_scales)

Face_center_map fc_map (m_mesh->polyhedron());

m_generator = new Generator (*(m_mesh->polyhedron()), fc_map, nb_scales);
m_generator = new Generator (*(m_mesh->polyhedron()), fc_map, nb_scales, voxel_size);

#ifdef CGAL_LINKED_WITH_TBB
m_features.begin_parallel_additions();
@@ -176,16 +240,29 @@ void Surface_mesh_item_classification::compute_features (std::size_t nb_scales)

delete m_sowf;
m_sowf = new Sum_of_weighted_features (m_labels, m_features);
if (m_ethz != NULL)
{
delete m_ethz;
m_ethz = NULL;
}
#ifdef CGAL_LINKED_WITH_OPENCV
delete m_random_forest;
m_random_forest = new Random_forest (m_labels, m_features);
if (m_random_forest != NULL)
{
delete m_random_forest;
m_random_forest = NULL;
}
#endif
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network != NULL)
{
delete m_neural_network;
m_neural_network = NULL;
}
#endif
std::cerr << "Features = " << m_features.size() << std::endl;
}

void Surface_mesh_item_classification::train
(int classifier, unsigned int nb_trials,
std::size_t num_trees, std::size_t max_depth)
void Surface_mesh_item_classification::train (int classifier, const QMultipleInputDialog& dialog)
{
if (m_features.size() == 0)
{
@@ -193,6 +270,11 @@ void Surface_mesh_item_classification::train
return;
}

m_label_probabilities.clear();
m_label_probabilities.resize (m_labels.size());
for (std::size_t i = 0; i < m_label_probabilities.size(); ++ i)
m_label_probabilities[i].resize (num_faces(*(m_mesh->polyhedron())));

std::vector<std::size_t> training (num_faces(*(m_mesh->polyhedron())), std::size_t(-1));
std::vector<std::size_t> indices (num_faces(*(m_mesh->polyhedron())), std::size_t(-1));

@@ -216,31 +298,81 @@ void Surface_mesh_item_classification::train

if (classifier == 0)
{
m_sowf->train<Concurrency_tag>(training, nb_trials);
m_sowf->train<Concurrency_tag>(training, dialog.get<QSpinBox>("trials")->value());
CGAL::Classification::classify<Concurrency_tag> (m_mesh->polyhedron()->faces(),
m_labels, *m_sowf,
indices);
indices, m_label_probabilities);
}
else if (classifier == 1)
{
m_ethz->train(training, true, num_trees, max_depth);
if (m_ethz != NULL)
delete m_ethz;
m_ethz = new ETHZ_random_forest (m_labels, m_features);
m_ethz->train<Concurrency_tag>(training, true,
dialog.get<QSpinBox>("num_trees")->value(),
dialog.get<QSpinBox>("max_depth")->value());
CGAL::Classification::classify<Concurrency_tag> (m_mesh->polyhedron()->faces(),
m_labels, *m_ethz,
indices);
indices, m_label_probabilities);
}
else
else if (classifier == 2)
{
#ifdef CGAL_LINKED_WITH_OPENCV
if (m_random_forest != NULL)
delete m_random_forest;
m_random_forest = new Random_forest (m_labels, m_features,
int(max_depth), 5, 15,
int(num_trees));
dialog.get<QSpinBox>("max_depth")->value(), 5, 15,
dialog.get<QSpinBox>("num_trees")->value());
m_random_forest->train (training);

CGAL::Classification::classify<Concurrency_tag> (m_mesh->polyhedron()->faces(),
m_labels, *m_random_forest,
indices);
indices, m_label_probabilities);
#endif
}
else if (classifier == 3)
{
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network != NULL)
{
if (m_neural_network->initialized())
{
if (dialog.get<QCheckBox>("restart")->isChecked())
{
delete m_neural_network;
m_neural_network = new Neural_network (m_labels, m_features);
}
}
else
{
delete m_neural_network;
m_neural_network = new Neural_network (m_labels, m_features);
}
}
else
m_neural_network = new Neural_network (m_labels, m_features);

std::vector<std::size_t> hidden_layers;

std::string hl_input = dialog.get<QLineEdit>("hidden_layers")->text().toStdString();
if (hl_input != "")
{
std::istringstream iss(hl_input);
int s;
while (iss >> s)
hidden_layers.push_back (std::size_t(s));
}

m_neural_network->train (training,
dialog.get<QCheckBox>("restart")->isChecked(),
dialog.get<QSpinBox>("trials")->value(),
dialog.get<QDoubleSpinBox>("learning_rate")->value(),
dialog.get<QSpinBox>("batch_size")->value(),
hidden_layers);

CGAL::Classification::classify<Concurrency_tag> (m_mesh->polyhedron()->faces(),
m_labels, *m_neural_network,
indices, m_label_probabilities);
#endif
}

@@ -263,11 +395,36 @@ bool Surface_mesh_item_classification::run (int method, int classifier,
if (classifier == 0)
run (method, *m_sowf, subdivisions, smoothing);
else if (classifier == 1)
{
if (m_ethz == NULL)
{
std::cerr << "Error: ETHZ Random Forest must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_ethz, subdivisions, smoothing);
}
else if (classifier == 2)
{
#ifdef CGAL_LINKED_WITH_OPENCV
else
if (m_random_forest == NULL)
{
std::cerr << "Error: OpenCV Random Forest must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_random_forest, subdivisions, smoothing);
#endif
}
else if (classifier == 3)
{
#ifdef CGAL_LINKED_WITH_TENSORFLOW
if (m_neural_network == NULL)
{
std::cerr << "Error: TensorFlow Neural Network must be trained or have a configuration loaded first" << std::endl;
return false;
}
run (method, *m_neural_network, subdivisions, smoothing);
#endif
}

return true;
}

@@ -50,7 +50,7 @@ public:
CGAL::Three::Scene_item* item() { return m_mesh; }
void erase_item() { m_mesh = NULL; }

void compute_features (std::size_t nb_scales);
void compute_features (std::size_t nb_scales, float voxel_size);

void add_selection_to_training_set (std::size_t label)
{
@@ -116,12 +116,11 @@ public:
if (m_index_color == 1 || m_index_color == 2)
change_color (m_index_color);
}
void train(int classifier, unsigned int nb_trials,
std::size_t num_trees, std::size_t max_depth);
void train(int classifier, const QMultipleInputDialog& dialog);
bool run (int method, int classifier, std::size_t subdivisions, double smoothing);

void update_color() { change_color (m_index_color); }
void change_color (int index);
void change_color (int index, float* vmin = NULL, float* vmax = NULL);
CGAL::Three::Scene_item* generate_one_item (const char* /* name */,
int /* label */) const
{
@@ -136,8 +135,6 @@ public:
std::cerr << "Warning: operation not yet available for meshes." << std::endl;
}

bool write_output(std::ostream& out);

void set_selection_item (Scene_polyhedron_selection_item* selection)
{
m_selection = selection;
@@ -210,6 +207,8 @@ protected:
Mesh::Property_map<face_descriptor, CGAL::Color> m_color;
Mesh::Property_map<face_descriptor, CGAL::Color> m_real_color;

std::vector<std::vector<float> > m_label_probabilities;

Generator* m_generator;
int m_index_color;
};

@@ -1,6 +1,6 @@
include( polyhedron_demo_macros )

polyhedron_demo_plugin(convex_hull_plugin Convex_hull_plugin)
polyhedron_demo_plugin(convex_hull_plugin Convex_hull_plugin KEYWORDS PointSetProcessing)
target_link_libraries(convex_hull_plugin PUBLIC scene_points_with_normal_item scene_polylines_item scene_selection_item scene_surface_mesh_item)

polyhedron_demo_plugin(kernel_plugin Kernel_plugin)

@@ -10,7 +10,7 @@ target_link_libraries(io_implicit_function_plugin PUBLIC scene_implicit_function
polyhedron_demo_plugin(nef_io_plugin Nef_io_plugin KEYWORDS IO)
target_link_libraries(nef_io_plugin PUBLIC scene_nef_polyhedron_item)

polyhedron_demo_plugin(off_plugin OFF_io_plugin KEYWORDS IO Mesh_3)
polyhedron_demo_plugin(off_plugin OFF_io_plugin KEYWORDS IO Mesh_3 PointSetProcessing Classification)
target_link_libraries(off_plugin PUBLIC scene_polygon_soup_item scene_points_with_normal_item scene_surface_mesh_item)

polyhedron_demo_plugin(off_to_nef_plugin OFF_to_nef_io_plugin KEYWORDS IO)
@@ -48,7 +48,7 @@ if (VTK_FOUND)
else()
message(STATUS "NOTICE : the vtk IO plugin needs VTK 6.0 or greater and will not be compiled.")
endif()
polyhedron_demo_plugin(xyz_plugin XYZ_io_plugin KEYWORDS IO)
polyhedron_demo_plugin(xyz_plugin XYZ_io_plugin KEYWORDS IO PointSetProcessing Classification)
target_link_libraries(xyz_plugin PUBLIC scene_points_with_normal_item)

list(FIND CMAKE_CXX_COMPILE_FEATURES cxx_rvalue_references has_cxx_rvalues)
@@ -59,12 +59,12 @@ if(has_cxx_rvalues LESS 0 OR has_cxx_variadic LESS 0)
else()
set(needed_cxx_features cxx_rvalue_references cxx_variadic_templates)

polyhedron_demo_plugin(ply_plugin PLY_io_plugin KEYWORDS IO)
polyhedron_demo_plugin(ply_plugin PLY_io_plugin KEYWORDS IO PointSetProcessing Classification)
target_link_libraries(ply_plugin PUBLIC scene_points_with_normal_item scene_polygon_soup_item scene_surface_mesh_item scene_polygon_soup_item)
target_compile_features(ply_plugin PRIVATE ${needed_cxx_features})

if (LASLIB_FOUND)
polyhedron_demo_plugin(las_plugin LAS_io_plugin KEYWORDS IO)
polyhedron_demo_plugin(las_plugin LAS_io_plugin KEYWORDS IO PointSetProcessing Classification)
target_link_libraries(las_plugin PUBLIC scene_points_with_normal_item ${LASLIB_LIBRARIES})
target_compile_features(las_plugin PRIVATE ${needed_cxx_features})
else()

@@ -1,10 +1,10 @@
include( polyhedron_demo_macros )
polyhedron_demo_plugin(pca_plugin Pca_plugin)
polyhedron_demo_plugin(pca_plugin Pca_plugin KEYWORDS PointSetProcessing)
target_link_libraries(pca_plugin PUBLIC scene_surface_mesh_item scene_points_with_normal_item scene_basic_objects)

qt5_wrap_ui( transformUI_FILES Transformation_widget.ui MeshOnGrid_dialog.ui)

polyhedron_demo_plugin(affine_transform_plugin Affine_transform_plugin ${transformUI_FILES})
polyhedron_demo_plugin(affine_transform_plugin Affine_transform_plugin ${transformUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(affine_transform_plugin PUBLIC scene_surface_mesh_item scene_transform_item scene_points_with_normal_item)

polyhedron_demo_plugin(edit_box_plugin Edit_box_plugin)
@@ -18,5 +18,5 @@ polyhedron_demo_plugin(create_bbox_mesh_plugin Create_bbox_mesh_plugin)
target_link_libraries(create_bbox_mesh_plugin PUBLIC scene_surface_mesh_item)

qt5_wrap_ui( volumesUI_FILES Basic_generator_widget.ui)
polyhedron_demo_plugin(basic_generator_plugin Basic_generator_plugin ${volumesUI_FILES} KEYWORDS PolygonMesh)
polyhedron_demo_plugin(basic_generator_plugin Basic_generator_plugin ${volumesUI_FILES} KEYWORDS PolygonMesh PointSetProcessing)
target_link_libraries(basic_generator_plugin PUBLIC scene_surface_mesh_item scene_points_with_normal_item scene_polylines_item)

@@ -39,7 +39,7 @@ else()
endif()


polyhedron_demo_plugin(orient_soup_plugin Orient_soup_plugin)
polyhedron_demo_plugin(orient_soup_plugin Orient_soup_plugin KEYWORDS Classification)
target_link_libraries(orient_soup_plugin PUBLIC scene_polygon_soup_item scene_surface_mesh_item scene_polylines_item scene_points_with_normal_item)


@@ -64,7 +64,7 @@ polyhedron_demo_plugin(polyhedron_stitching_plugin Polyhedron_stitching_plugin)
target_link_libraries(polyhedron_stitching_plugin PUBLIC scene_surface_mesh_item scene_polylines_item)

qt5_wrap_ui( selectionUI_FILES Selection_widget.ui)
polyhedron_demo_plugin(selection_plugin Selection_plugin ${selectionUI_FILES} KEYWORDS PolygonMesh IO)
polyhedron_demo_plugin(selection_plugin Selection_plugin ${selectionUI_FILES} KEYWORDS PolygonMesh IO Classification)
target_link_libraries(selection_plugin PUBLIC scene_selection_item scene_points_with_normal_item scene_polylines_item)

polyhedron_demo_plugin(self_intersection_plugin Self_intersection_plugin)

@@ -127,6 +127,7 @@ public:
actionSelection = new QAction(
QString("Surface Mesh Selection")
, mw);
actionSelection->setObjectName("actionSelection");
connect(actionSelection, SIGNAL(triggered()), this, SLOT(selection_action()));
last_mode = 0;
dock_widget = new QDockWidget(

@@ -1,21 +1,21 @@
include( polyhedron_demo_macros )
if(EIGEN3_FOUND)
qt5_wrap_ui( surface_reconstructionUI_FILES Surface_reconstruction_plugin.ui)
polyhedron_demo_plugin(surface_reconstruction_plugin Surface_reconstruction_plugin Surface_reconstruction_plugin_impl ${surface_reconstructionUI_FILES})
polyhedron_demo_plugin(surface_reconstruction_plugin Surface_reconstruction_plugin Surface_reconstruction_plugin_impl ${surface_reconstructionUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(surface_reconstruction_plugin PUBLIC scene_polygon_soup_item scene_surface_mesh_item scene_points_with_normal_item)

qt5_wrap_ui( point_set_normal_estimationUI_FILES Point_set_normal_estimation_plugin.ui)
polyhedron_demo_plugin(point_set_normal_estimation_plugin Point_set_normal_estimation_plugin ${point_set_normal_estimationUI_FILES})
polyhedron_demo_plugin(point_set_normal_estimation_plugin Point_set_normal_estimation_plugin ${point_set_normal_estimationUI_FILES} KEYWORDS PointSetProcessing Classification)
target_link_libraries(point_set_normal_estimation_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

qt5_wrap_ui( features_detection_pluginUI_FILES Features_detection_plugin.ui)
polyhedron_demo_plugin(features_detection_plugin Features_detection_plugin ${features_detection_pluginUI_FILES})
polyhedron_demo_plugin(features_detection_plugin Features_detection_plugin ${features_detection_pluginUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(features_detection_plugin PUBLIC scene_points_with_normal_item)

polyhedron_demo_plugin(point_set_smoothing_plugin Point_set_smoothing_plugin)
polyhedron_demo_plugin(point_set_smoothing_plugin Point_set_smoothing_plugin KEYWORDS PointSetProcessing)
target_link_libraries(point_set_smoothing_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

polyhedron_demo_plugin(point_set_average_spacing_plugin Point_set_average_spacing_plugin)
polyhedron_demo_plugin(point_set_average_spacing_plugin Point_set_average_spacing_plugin KEYWORDS PointSetProcessing Classification)
target_link_libraries(point_set_average_spacing_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

else(EIGEN3_FOUND)
@@ -28,45 +28,45 @@ else(EIGEN3_FOUND)
endif()

qt5_wrap_ui(point_set_bilateral_smoothingUI_FILES Point_set_bilateral_smoothing_plugin.ui)
polyhedron_demo_plugin(point_set_bilateral_smoothing_plugin Point_set_bilateral_smoothing_plugin ${point_set_bilateral_smoothingUI_FILES})
polyhedron_demo_plugin(point_set_bilateral_smoothing_plugin Point_set_bilateral_smoothing_plugin ${point_set_bilateral_smoothingUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_bilateral_smoothing_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

qt5_wrap_ui( ps_outliers_removal_UI_FILES Point_set_outliers_removal_plugin.ui)
polyhedron_demo_plugin(point_set_outliers_removal_plugin Point_set_outliers_removal_plugin ${ps_outliers_removal_UI_FILES})
polyhedron_demo_plugin(point_set_outliers_removal_plugin Point_set_outliers_removal_plugin ${ps_outliers_removal_UI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_outliers_removal_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

qt5_wrap_ui( point_set_selectionUI_FILES Point_set_selection_widget.ui)
polyhedron_demo_plugin(point_set_selection_plugin Point_set_selection_plugin ${point_set_selectionUI_FILES})
polyhedron_demo_plugin(point_set_selection_plugin Point_set_selection_plugin ${point_set_selectionUI_FILES} KEYWORDS PointSetProcessing Classification)
target_link_libraries(point_set_selection_plugin PUBLIC scene_points_with_normal_item scene_polylines_item scene_edit_box_item)

qt5_wrap_ui(point_set_shape_detectionUI_FILES Point_set_shape_detection_plugin.ui)
polyhedron_demo_plugin(point_set_shape_detection_plugin Point_set_shape_detection_plugin ${point_set_shape_detectionUI_FILES})
polyhedron_demo_plugin(point_set_shape_detection_plugin Point_set_shape_detection_plugin ${point_set_shape_detectionUI_FILES} KEYWORDS PointSetProcessing Classification)
target_link_libraries(point_set_shape_detection_plugin PUBLIC scene_surface_mesh_item scene_points_with_normal_item scene_polygon_soup_item scene_callback_signaler)

qt5_wrap_ui(point_set_simplificationUI_FILES Point_set_simplification_plugin.ui)
polyhedron_demo_plugin(point_set_simplification_plugin Point_set_simplification_plugin ${point_set_simplificationUI_FILES})
polyhedron_demo_plugin(point_set_simplification_plugin Point_set_simplification_plugin ${point_set_simplificationUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_simplification_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

qt5_wrap_ui(point_set_upsamplingUI_FILES Point_set_upsampling_plugin.ui)
polyhedron_demo_plugin(point_set_upsampling_plugin Point_set_upsampling_plugin ${point_set_upsamplingUI_FILES})
polyhedron_demo_plugin(point_set_upsampling_plugin Point_set_upsampling_plugin ${point_set_upsamplingUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_upsampling_plugin PUBLIC scene_points_with_normal_item)

qt5_wrap_ui(point_set_wlopFILES Point_set_wlop_plugin.ui)
polyhedron_demo_plugin(point_set_wlop_plugin Point_set_wlop_plugin ${point_set_wlopFILES})
polyhedron_demo_plugin(point_set_wlop_plugin Point_set_wlop_plugin ${point_set_wlopFILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_wlop_plugin PUBLIC scene_points_with_normal_item scene_callback_signaler)

polyhedron_demo_plugin(merge_point_sets_plugin Merge_point_sets_plugin)
polyhedron_demo_plugin(merge_point_sets_plugin Merge_point_sets_plugin KEYWORDS PointSetProcessing Classification)
target_link_libraries(merge_point_sets_plugin PUBLIC scene_points_with_normal_item)

polyhedron_demo_plugin(point_set_interference_plugin Point_set_interference_plugin)
polyhedron_demo_plugin(point_set_interference_plugin Point_set_interference_plugin KEYWORDS PointSetProcessing)
target_link_libraries(point_set_interference_plugin PUBLIC scene_points_with_normal_item)

qt5_wrap_ui( alpha_shapeUI_FILES Alpha_shape_widget.ui )
polyhedron_demo_plugin(alpha_shape_plugin Alpha_shape_plugin ${alpha_shapeUI_FILES})
polyhedron_demo_plugin(alpha_shape_plugin Alpha_shape_plugin ${alpha_shapeUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(alpha_shape_plugin PUBLIC scene_points_with_normal_item scene_c3t3_item)

qt5_wrap_ui( distanceUI_FILES Point_set_to_mesh_distance_widget.ui )
polyhedron_demo_plugin(point_set_to_mesh_distance_plugin Point_set_to_mesh_distance_plugin ${distanceUI_FILES})
polyhedron_demo_plugin(point_set_to_mesh_distance_plugin Point_set_to_mesh_distance_plugin ${distanceUI_FILES} KEYWORDS PointSetProcessing)
target_link_libraries(point_set_to_mesh_distance_plugin PUBLIC scene_points_with_normal_item scene_surface_mesh_item scene_color_ramp)

if(TBB_FOUND)

@@ -427,6 +427,7 @@ public:
rg_cluster_epsilon = -1;
rg_normal_threshold = 20;
actionPointSetSelection = new QAction(tr("Selection"), mw);
actionPointSetSelection->setObjectName("actionPointSetSelection");
connect(actionPointSetSelection, SIGNAL(triggered()), this, SLOT(selection_action()));

dock_widget = new QDockWidget("Point Set Selection", mw);

@@ -268,7 +268,19 @@ public:
{
return (m_blue != Byte_map());
}

bool add_colors ()
{
if (has_colors())
return false;

m_red = this->template add_property_map<unsigned char>("red", 0).first;
m_green = this->template add_property_map<unsigned char>("green", 0).first;
m_blue = this->template add_property_map<unsigned char>("blue", 0).first;

return true;
}

void remove_colors()
{
if (m_blue != Byte_map())
@@ -284,20 +296,35 @@ public:
this->template remove_property_map<double>(m_fblue);
}
}


double red (const Index& index) const
{ return (m_red == Byte_map()) ? m_fred[index] : double(m_red[index]) / 255.; }
double green (const Index& index) const
{ return (m_green == Byte_map()) ? m_fgreen[index] : double(m_green[index]) / 255.; }
double blue (const Index& index) const
{ return (m_blue == Byte_map()) ? m_fblue[index] : double(m_blue[index]) / 255.; }
void set_color (const Index& index, unsigned char r, unsigned char g, unsigned char b)

void set_color (const Index& index, unsigned char r = 0, unsigned char g = 0, unsigned char b = 0)
{
m_red[index] = r;
m_green[index] = g;
m_blue[index] = b;
}

void set_color (const Index& index, const QColor& color)
{
m_red[index] = color.red();
m_green[index] = color.green();
m_blue[index] = color.blue();
}

template <typename ColorRange>
void set_color (const Index& index, const ColorRange& color)
{
m_red[index] = color[0];
m_green[index] = color[1];
m_blue[index] = color[2];
}


iterator first_selected() { return this->m_indices.end() - this->m_nb_removed; }
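The demo's point set wrapper gains add_colors() and three set_color() overloads (unsigned char RGB with defaults of 0, a QColor, and any indexable color range), which is what lets the classification items above write display colors without touching the red/green/blue property maps directly. A hedged usage fragment rather than a complete program; points stands for an instance of this point set type and idx for a valid index, both assumptions here:

// [Sketch, not part of the patch] Round trip through the new color API.
points.add_colors();                        // creates the red/green/blue maps if absent
points.set_color (idx);                     // default arguments -> black (0, 0, 0)
points.set_color (idx, 255, 0, 0);          // explicit RGB, as done for selected points
points.set_color (idx, QColor (0, 255, 0)); // QColor overload used for the label colors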
@@ -15,6 +15,9 @@ class QMultipleInputDialog
{
QDialog* dialog;
QFormLayout* form;
std::map<std::string, QWidget*> map_widgets;


public:
QMultipleInputDialog (const char* name, QWidget* parent)
{
@@ -28,7 +31,7 @@ public:
}

template <typename QObjectType>
QObjectType* add (const char* name)
QObjectType* add (const char* name, const char* key = NULL)
{
QObjectType* out = NULL;

@@ -43,9 +46,24 @@ public:
form->addRow (QString(name), out);
}

if (key != NULL)
map_widgets.insert (std::make_pair (key, out));

return out;
}

template <typename QObjectType>
const QObjectType* get (const char* key) const
{
typename std::map<std::string, QWidget*>::const_iterator
found = map_widgets.find (key);
if (found == map_widgets.end())
return NULL;

QWidget* widget_out = found->second;
return qobject_cast<QObjectType*>(widget_out);
}

int exec()
{
QDialogButtonBox* oknotok = new QDialogButtonBox
@@ -58,6 +76,16 @@ public:

return dialog->exec();
}

void exec_no_cancel()
{
QDialogButtonBox* ok = new QDialogButtonBox
(QDialogButtonBox::Ok, Qt::Horizontal, dialog);

form->addRow (ok);
QObject::connect (ok, SIGNAL(accepted()), dialog, SLOT(accept()));
dialog->exec();
}
};
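With the optional key argument on add() and the new keyed get() accessor, a plugin can build one dialog, let the user fill it in, and later read the widgets back by key, which is how the train() functions above fetch "num_trees", "max_depth", "trials" and so on. A usage sketch under those assumptions; the widget labels, the parent widget mainWindow and the default values are illustrative only:

// [Sketch, not part of the patch] Intended add()/get() round trip.
QMultipleInputDialog dialog ("Train classifier", mainWindow);
dialog.add<QSpinBox> ("Number of trees", "num_trees")->setValue (25);
dialog.add<QSpinBox> ("Maximum depth", "max_depth")->setValue (20);

if (dialog.exec())
{
  int num_trees = dialog.get<QSpinBox> ("num_trees")->value();
  int max_depth = dialog.get<QSpinBox> ("max_depth")->value();
  // ... forward num_trees / max_depth to the classifier's train() call
}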