
Point cloud toolkit

Project description


Toolkit for handling point clouds created using airborne laser scanning (ALS). Find neighboring points in your point cloud and describe them as feature values. Read our user manual and our (very modest) tutorial.

Included features:

  • band_ratio_1<normalized_height<2
  • band_ratio_2<normalized_height<3
  • band_ratio_3<normalized_height
  • band_ratio_normalized_height<1
  • coeff_var_norm_z
  • coeff_var_z
  • density_absolute_mean_norm_z
  • density_absolute_mean_z
  • echo_ratio
  • eigenv_1
  • eigenv_2
  • eigenv_3
  • entropy_norm_z
  • entropy_z
  • kurto_norm_z
  • kurto_z
  • max_norm_z
  • max_z
  • mean_norm_z
  • mean_z
  • median_norm_z
  • median_z
  • min_norm_z
  • min_z
  • normal_vector_1
  • normal_vector_2
  • normal_vector_3
  • perc_1_normalized_height through perc_100_normalized_height
  • perc_1_z through perc_100_z
  • point_density
  • pulse_penetration_ratio
  • range_norm_z
  • range_z
  • sigma_z
  • skew_norm_z
  • skew_z
  • slope
  • std_norm_z
  • std_z
  • var_norm_z
  • var_z
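Judging by their names, the band_ratio_* features report the fraction of points whose normalized height falls within a given band. The helper below is a hypothetical sketch of that definition for illustration, not laserchicken's implementation; see the user manual for the exact semantics.

```python
import numpy as np

def band_ratio(normalized_height, lower=None, upper=None):
    """Fraction of points with lower < normalized height < upper.

    Sketch of what a feature like band_ratio_1<normalized_height<2
    is assumed to compute; bounds of None mean an open-ended band.
    """
    normalized_height = np.asarray(normalized_height)
    mask = np.ones(normalized_height.shape, dtype=bool)
    if lower is not None:
        mask &= normalized_height > lower
    if upper is not None:
        mask &= normalized_height < upper
    return mask.sum() / len(normalized_height)

heights = np.array([0.5, 1.5, 1.8, 2.5, 3.5])
print(band_ratio(heights, lower=1, upper=2))  # 2 of 5 points -> 0.4
```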

Feature testing

All features were tested for the following general conditions:

  • Output consistent point clouds and don't crash on artificial data, real data, all-zero data (in x, y, or z), data without points, and data with a very low number of neighbors (0, 1, or 2)
  • Input should not be changed by the feature extractor

The specific features were tested as follows.

Echo ratio

A test was written with artificial data to check the calculation against a manually computed ratio. The feature was also run on real data to make sure it doesn't crash, without checking for correctness; a correctness test on real data would require both that data and a verified ground truth.
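As a sketch of such a manual check: assuming the echo ratio is the percentage of neighbors inside a sphere relative to those inside an infinite vertical cylinder of the same radius (check the user manual for the exact definition), it can be computed by brute force on artificial data:

```python
import numpy as np

def echo_ratio(points, center, radius):
    """Assumed definition: 100 * (points in sphere) / (points in cylinder).

    points is an (n, 3) array of x, y, z; the cylinder is vertical, with
    the same radius as the sphere, both centered on `center`.
    """
    d2_xy = np.sum((points[:, :2] - center[:2]) ** 2, axis=1)
    d2_xyz = d2_xy + (points[:, 2] - center[2]) ** 2
    in_cylinder = np.sum(d2_xy <= radius ** 2)
    in_sphere = np.sum(d2_xyz <= radius ** 2)
    return 100.0 * in_sphere / in_cylinder

points = np.array([[0.0, 0.0, 0.0],
                   [0.0, 0.0, 5.0],   # inside the cylinder, outside the sphere
                   [0.5, 0.0, 0.5]])
print(echo_ratio(points, center=np.array([0.0, 0.0, 0.0]), radius=1.0))
```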


Eigenvalues

Only sanity tests (l1 > l2 > l3) were done on real data and corner cases; there is no actual test for correctness. The code is very simple, though, and mainly calls numpy.linalg.eig.
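A minimal version of that sanity check, applying numpy.linalg.eig to the covariance matrix of a synthetic neighborhood (the covariance and normal-vector handling here is an assumption for illustration, not laserchicken's code):

```python
import numpy as np

# Synthetic neighborhood of 100 points, deliberately anisotropic so the
# three eigenvalues are clearly separated.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3)) * np.array([5.0, 2.0, 0.5])

cov = np.cov(pts.T)                       # 3x3 covariance matrix
eigenvalues, eigenvectors = np.linalg.eig(cov)

l1, l2, l3 = np.sort(eigenvalues)[::-1]   # sort descending
assert l1 >= l2 >= l3                     # the sanity check mentioned above

# The normal vector would be the eigenvector belonging to the smallest
# eigenvalue (the direction of least variance).
normal = eigenvectors[:, np.argmin(eigenvalues)]
print(l1, l2, l3)
```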

Height statistics (max_z, min_z, mean_z, median_z, std_z, var_z, coeff_var_z, skew_z, kurto_z)

Tested on real data for correctness, although it is unclear where the ground truth values come from. The code mainly calls numpy methods that already do all the work. The only calculations in our own code are:

range_z = max_z - min_z
coeff_var_z = np.std(z) / np.mean(z)

I don't know of any package that provides an out-of-the-box coefficient of variation, probably because the calculation is so simple.
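As a sketch, both in-house calculations can be checked directly with numpy (for what it's worth, scipy.stats.variation computes the same std/mean ratio; it is not used here to keep the example dependency-free):

```python
import numpy as np

z = np.array([1.0, 2.0, 2.5, 4.0, 10.0])  # artificial height values

max_z, min_z = np.max(z), np.min(z)
range_z = max_z - min_z                    # first in-house calculation
coeff_var_z = np.std(z) / np.mean(z)       # second in-house calculation

# Note that np.std defaults to the population standard deviation
# (ddof=0), so the ground truth must be computed the same way.
print(range_z, coeff_var_z)
```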

Pulse penetration ratio

Tested for correctness using artificial data against manually calculated values. No comparison was made with other implementations.
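A sketch of such a manual calculation, assuming the pulse penetration ratio is the fraction of returns classified as ground (ASPRS LAS class 2) among all returns in the neighborhood; the function name and the exact set of ground classes are assumptions, so check the user manual:

```python
import numpy as np

def pulse_penetration_ratio(classification, ground_classes=(2,)):
    """Assumed definition: ground returns / total returns.

    `classification` holds per-point LAS classification codes;
    class 2 is ground in the ASPRS standard.
    """
    classification = np.asarray(classification)
    is_ground = np.isin(classification, ground_classes)
    return is_ground.sum() / len(classification)

print(pulse_penetration_ratio([2, 2, 1, 1, 1]))  # 2 ground of 5 -> 0.4
```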


Tested for correctness using artificial data against manually calculated values. No comparison was made with other implementations.


Tested for correctness using a simple case with artificial data against manually calculated values.


Tested for correctness on artificial data.


Download files


Files for laserchicken, version 0.3.1:

  • laserchicken-0.3.1-py3-none-any.whl (34.1 kB), Wheel, Python py3
  • laserchicken-0.3.1.tar.gz (22.7 kB), Source
