Package | Description |
---|---|
de.lmu.ifi.dbs.elki.algorithm.clustering.correlation |
Correlation clustering algorithms.
|
de.lmu.ifi.dbs.elki.algorithm.clustering.gdbscan |
Generalized DBSCAN.
|
de.lmu.ifi.dbs.elki.algorithm.clustering.hierarchical |
Hierarchical agglomerative clustering (HAC).
|
de.lmu.ifi.dbs.elki.algorithm.clustering.kmeans |
K-means clustering and variations.
|
de.lmu.ifi.dbs.elki.algorithm.clustering.kmeans.initialization |
Initialization strategies for k-means.
|
de.lmu.ifi.dbs.elki.algorithm.clustering.optics |
OPTICS family of clustering algorithms.
|
de.lmu.ifi.dbs.elki.algorithm.clustering.subspace |
Axis-parallel subspace clustering algorithms.
The clustering algorithms in this package are either projected clustering algorithms or
subspace clustering algorithms, according to the classical but somewhat obsolete
classification scheme for axis-parallel subspace clustering.
|
de.lmu.ifi.dbs.elki.algorithm.outlier |
Outlier detection algorithms.
|
de.lmu.ifi.dbs.elki.algorithm.outlier.distance |
Distance-based outlier detection algorithms, such as DBOutlier and kNN.
|
de.lmu.ifi.dbs.elki.algorithm.outlier.lof |
LOF family of outlier detection algorithms.
|
de.lmu.ifi.dbs.elki.database.datastore |
General data store layer API (along the lines of Map<DBID, T> - use everywhere!)
|
de.lmu.ifi.dbs.elki.database.datastore.memory |
Memory data store implementation for ELKI.
|
de.lmu.ifi.dbs.elki.index.invertedlist |
Indexes using inverted lists.
|
de.lmu.ifi.dbs.elki.parallel.processor |
Processor API of ELKI, and some essential shared processors.
|
Modifier and Type | Field and Description |
---|---|
private WritableDoubleDataStore |
HiCO.Instance.tmpDistance
Temporary storage of distances.
|
Modifier and Type | Method and Description |
---|---|
private void |
LSDBC.fillDensities(KNNQuery<O> knnq,
DBIDs ids,
WritableDoubleDataStore dens)
Collect all densities into an array for sorting.
|
private boolean |
LSDBC.isLocalMaximum(double kdist,
DBIDs neighbors,
WritableDoubleDataStore kdists)
Test if a point is a local density maximum.
|
Modifier and Type | Method and Description |
---|---|
protected WritableDoubleDataStore |
AbstractHDBSCAN.computeCoreDists(DBIDs ids,
KNNQuery<O> knnQ,
int minPts)
Compute the core distances for all objects.
|
Modifier and Type | Method and Description |
---|---|
private void |
CLINK.clinkstep3(DBIDRef id,
DBIDArrayIter i,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
Third step: Determine the values for P and L
|
private void |
CLINK.clinkstep4567(DBIDRef id,
ArrayDBIDs ids,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
Fourth to seventh step of CLINK: find best insertion
|
private void |
CLINK.clinkstep8(DBIDRef id,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
Update hierarchy.
|
protected void |
AbstractHDBSCAN.convertToPointerRepresentation(ArrayDBIDs ids,
DoubleLongHeap heap,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda)
Convert spanning tree to a pointer representation.
|
protected int |
AnderbergHierarchicalClustering.findMerge(int size,
double[] scratch,
DBIDArrayIter ix,
DBIDArrayIter iy,
double[] bestd,
int[] besti,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize)
Perform the next merge step.
|
protected int |
AGNES.findMerge(int size,
double[] scratch,
DBIDArrayIter ix,
DBIDArrayIter iy,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize)
Perform the next merge step in AGNES.
|
protected void |
AnderbergHierarchicalClustering.merge(int size,
double[] scratch,
DBIDArrayIter ix,
DBIDArrayIter iy,
double[] bestd,
int[] besti,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize,
double mindist,
int x,
int y)
Execute the cluster merge.
|
protected void |
AGNES.merge(int size,
double[] scratch,
DBIDArrayIter ix,
DBIDArrayIter iy,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize,
double mindist,
int x,
int y)
Execute the cluster merge.
|
protected void |
SLINK.process(DBIDRef id,
ArrayDBIDs ids,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
SLINK main loop (see the sketch after this table).
|
protected void |
CLINK.process(DBIDRef id,
ArrayDBIDs ids,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
CLINK main loop, based on the SLINK main loop.
|
private void |
SLINK.slinkstep3(DBIDRef id,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
WritableDoubleDataStore m)
Third step: Determine the values for P and L
|
private void |
SLINK.slinkstep4(DBIDRef id,
DBIDArrayIter it,
int n,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda)
Fourth step: Update the clusters if necessary
|
private void |
SLINKHDBSCANLinearMemory.step1(DBIDRef id,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda)
First step: Initialize P(id) = id, L(id) = infinity.
|
private void |
SLINK.step2(DBIDRef id,
DBIDArrayIter it,
int n,
DistanceQuery<? super O> distQuery,
WritableDoubleDataStore m)
Second step: Determine the pairwise distances from all objects in the
pointer representation to the new object with the specified id.
|
private void |
SLINKHDBSCANLinearMemory.step2(DBIDRef id,
DBIDs processedIDs,
DistanceQuery<? super O> distQuery,
DoubleDataStore coredists,
WritableDoubleDataStore m)
Second step: Determine the pairwise distances from all objects in the
pointer representation to the new object with the specified id.
|
private void |
SLINK.step2primitive(DBIDRef id,
DBIDArrayIter it,
int n,
Relation<? extends O> relation,
PrimitiveDistanceFunction<? super O> distFunc,
WritableDoubleDataStore m)
Second step: Determine the pairwise distances from all objects in the
pointer representation to the new object with the specified id.
|
private void |
SLINKHDBSCANLinearMemory.step3(DBIDRef id,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
DBIDs processedIDs,
WritableDoubleDataStore m)
Third step: Determine the values for P and L
|
private void |
SLINKHDBSCANLinearMemory.step4(DBIDRef id,
WritableDBIDDataStore pi,
WritableDoubleDataStore lambda,
DBIDs processedIDs)
Fourth step: Update the clusters if necessary
|
protected void |
AnderbergHierarchicalClustering.updateMatrix(int size,
double[] scratch,
DBIDArrayIter ij,
double[] bestd,
int[] besti,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize,
double mindist,
int x,
int y,
int sizex,
int sizey)
Update the scratch distance matrix.
|
protected void |
AGNES.updateMatrix(int size,
double[] scratch,
DBIDArrayIter ij,
WritableDoubleDataStore lambda,
WritableIntegerDataStore csize,
double mindist,
int x,
int y,
int sizex,
int sizey)
Update the scratch distance matrix.
|
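The SLINK and CLINK methods listed above maintain Sibson's pointer representation: pi maps each object to the representative of the cluster it joins, lambda to the merge height, and m holds temporary distances. The following is a minimal, ELKI-independent sketch of the SLINK recurrence over a precomputed distance matrix; plain arrays stand in for the WritableDBIDDataStore and WritableDoubleDataStore stores, and the class name is illustrative only.

```java
/**
 * Minimal sketch of SLINK (Sibson 1973), following the step1-step4 structure
 * of the methods above. dist is assumed to be a symmetric distance matrix;
 * pi and lambda must have length dist.length.
 */
final class SlinkSketch {
  static void slink(double[][] dist, int[] pi, double[] lambda) {
    final int n = dist.length;
    double[] m = new double[n];
    for(int i = 0; i < n; i++) {
      // Step 1: initialize P(i) = i, L(i) = infinity.
      pi[i] = i;
      lambda[i] = Double.POSITIVE_INFINITY;
      // Step 2: distances from all previously processed objects to object i.
      for(int j = 0; j < i; j++) {
        m[j] = dist[j][i];
      }
      // Step 3: determine the values for P and L.
      for(int j = 0; j < i; j++) {
        if(lambda[j] >= m[j]) {
          m[pi[j]] = Math.min(m[pi[j]], lambda[j]);
          lambda[j] = m[j];
          pi[j] = i;
        }
        else {
          m[pi[j]] = Math.min(m[pi[j]], m[j]);
        }
      }
      // Step 4: update the clusters if necessary.
      for(int j = 0; j < i; j++) {
        if(lambda[j] >= lambda[pi[j]]) {
          pi[j] = i;
        }
      }
    }
  }
}
```

After the loop, each object i is merged at height lambda[i] into the cluster represented by pi[i], which is the pointer representation the dendrogram extraction works from.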
Modifier and Type | Method and Description |
---|---|
protected double |
KMedoidsPAM.assignToNearestCluster(ArrayDBIDs means,
DBIDs ids,
WritableDoubleDataStore nearest,
WritableDoubleDataStore second,
WritableIntegerDataStore assignment,
DistanceQuery<V> distQ)
Assign each object to the nearest medoid, updating the nearest, second-nearest, and assignment stores.
|
private int |
KMeansElkan.assignToNearestCluster(Relation<V> relation,
List<Vector> means,
List<Vector> sums,
List<ModifiableDBIDs> clusters,
WritableIntegerDataStore assignment,
double[] sep,
double[][] cdist,
WritableDoubleDataStore upper,
WritableDataStore<double[]> lower)
Reassign objects, but only if their bounds indicate it is necessary to do
so.
|
private int |
KMeansHamerly.assignToNearestCluster(Relation<V> relation,
List<Vector> means,
List<Vector> sums,
List<ModifiableDBIDs> clusters,
WritableIntegerDataStore assignment,
double[] sep,
WritableDoubleDataStore upper,
WritableDoubleDataStore lower)
Reassign objects, but only if their bounds indicate it is necessary to do
so.
|
private int |
KMeansElkan.initialAssignToNearestCluster(Relation<V> relation,
List<Vector> means,
List<Vector> sums,
List<ModifiableDBIDs> clusters,
WritableIntegerDataStore assignment,
WritableDoubleDataStore upper,
WritableDataStore<double[]> lower)
Reassign objects, but only if their bounds indicate it is necessary to do
so.
|
private int |
KMeansHamerly.initialAssignToNearestCluster(Relation<V> relation,
List<Vector> means,
List<Vector> sums,
List<ModifiableDBIDs> clusters,
WritableIntegerDataStore assignment,
WritableDoubleDataStore upper,
WritableDoubleDataStore lower)
Reassign objects, but only if their bounds indicate it is necessary to do
so.
|
private void |
KMeansElkan.updateBounds(Relation<V> relation,
WritableIntegerDataStore assignment,
WritableDoubleDataStore upper,
WritableDataStore<double[]> lower,
double[] move)
Update the bounds for k-means.
|
private void |
KMeansHamerly.updateBounds(Relation<V> relation,
WritableIntegerDataStore assignment,
WritableDoubleDataStore upper,
WritableDoubleDataStore lower,
double[] move,
double delta)
Update the bounds for k-means (see the sketch after this table).
|
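KMeansHamerly.updateBounds above maintains one upper bound per object (distance to the assigned center) and one lower bound (distance to the second-closest center). The following is a minimal sketch of that invariant, assuming plain arrays instead of the ELKI data stores and using the simple variant that subtracts the largest center movement from every lower bound (Hamerly's paper refines this with the second-largest movement); all names are illustrative, not ELKI's actual code.

```java
/**
 * Sketch of the bound maintenance in Hamerly-style k-means: after the centers
 * move, the upper bound can grow by at most the movement of the assigned
 * center, and the lower bound can shrink by at most the largest movement of
 * any center. move[c] is the distance center c moved in this iteration.
 */
static void updateBounds(int[] assignment, double[] upper, double[] lower, double[] move) {
  double maxMove = 0.;
  for(double m : move) {
    maxMove = Math.max(maxMove, m);
  }
  for(int i = 0; i < assignment.length; i++) {
    upper[i] += move[assignment[i]]; // assigned center may have moved away
    lower[i] -= maxMove;             // some other center may have moved closer
  }
}
```

Roughly, a point only needs to be re-examined by assignToNearestCluster when its upper bound exceeds both its lower bound and the separation term (the sep array in the signatures above), which is what makes this bookkeeping pay off.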
Modifier and Type | Method and Description |
---|---|
protected <T> double |
KMeansPlusPlusInitialMeans.initialWeights(WritableDoubleDataStore weights,
DBIDs ids,
T latest,
DistanceQuery<? super T> distQ)
Initialize the weight list.
|
protected <T> double |
KMeansPlusPlusInitialMeans.updateWeights(WritableDoubleDataStore weights,
DBIDs ids,
T latest,
DistanceQuery<? super T> distQ)
Update the weight list (see the sketch after this table).
|
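The initialWeights and updateWeights methods above implement the D² weighting of k-means++: each object is weighted by its squared distance to the closest center chosen so far, and the next center is sampled with probability proportional to that weight. A minimal, ELKI-independent sketch using plain arrays and squared Euclidean distance (ELKI uses the configured distance query instead; all names are illustrative):

```java
/**
 * Update the D^2 weights after a new center has been chosen: each weight is
 * the squared distance to the closest center picked so far. For the very
 * first center, initialize weights[] to Double.POSITIVE_INFINITY beforehand,
 * so the same update also serves as the initialization.
 * Returns the sum of weights, used to sample the next center.
 */
static double updateWeights(double[] weights, double[][] data, double[] latestCenter) {
  double total = 0.;
  for(int i = 0; i < data.length; i++) {
    double d2 = 0.;
    for(int d = 0; d < data[i].length; d++) {
      double diff = data[i][d] - latestCenter[d];
      d2 += diff * diff;
    }
    weights[i] = Math.min(weights[i], d2); // distance to the nearest chosen center
    total += weights[i];
  }
  return total;
}
```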
Modifier and Type | Field and Description |
---|---|
(package private) WritableDoubleDataStore |
ClusterOrder.reachability
Reachability storage.
|
(package private) WritableDoubleDataStore |
OPTICSList.Instance.reachability
Reachability storage.
|
protected WritableDoubleDataStore |
GeneralizedOPTICS.Instance.reachability
Reachability storage.
|
(package private) WritableDoubleDataStore |
FastOPTICS.reachDist
Result: reachability distances
|
Constructor and Description |
---|
ClusterOrder(String name,
String shortname,
ArrayModifiableDBIDs ids,
WritableDoubleDataStore reachability,
WritableDBIDDataStore predecessor)
Constructor
|
CorrelationClusterOrder(String name,
String shortname,
ArrayModifiableDBIDs ids,
WritableDoubleDataStore reachability,
WritableDBIDDataStore predecessor,
WritableIntegerDataStore corrdim)
Constructor.
|
Modifier and Type | Field and Description |
---|---|
private WritableDoubleDataStore |
DiSH.Instance.tmpDistance
Temporary storage of distances.
|
Constructor and Description |
---|
DiSHClusterOrder(String name,
String shortname,
ArrayModifiableDBIDs ids,
WritableDoubleDataStore reachability,
WritableDBIDDataStore predecessor,
WritableIntegerDataStore corrdim,
WritableDataStore<long[]> commonPreferenceVectors)
Constructor.
|
Modifier and Type | Method and Description |
---|---|
private void |
DWOF.clusterData(DBIDs ids,
RangeQuery<O> rnnQuery,
WritableDoubleDataStore radii,
WritableDataStore<ModifiableDBIDs> labels)
This method applies a density based clustering algorithm.
|
private void |
DWOF.initializeRadii(DBIDs ids,
KNNQuery<O> knnq,
DistanceQuery<O> distFunc,
WritableDoubleDataStore radii)
This method prepares a container for the radii of the objects and initializes each
radius according to the equation (see the sketch below):
initialRadius(o) = (absoluteMinDist of all objects) * (avgDist of o) / (minAvgDist of all objects)
|
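A small sketch of the radius initialization formula quoted for DWOF.initializeRadii above, with plain arrays in place of the ELKI data stores; avgDist[i] is assumed to hold the average neighbor distance of object i, and absoluteMinDist the smallest distance observed over all objects (both names are illustrative).

```java
static double[] initializeRadii(double[] avgDist, double absoluteMinDist) {
  double minAvgDist = Double.POSITIVE_INFINITY;
  for(double d : avgDist) {
    minAvgDist = Math.min(minAvgDist, d);
  }
  double[] radii = new double[avgDist.length];
  for(int i = 0; i < radii.length; i++) {
    // initialRadius(o) = absoluteMinDist * avgDist(o) / minAvgDist
    radii[i] = absoluteMinDist * avgDist[i] / minAvgDist;
  }
  return radii;
}
```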
Modifier and Type | Method and Description |
---|---|
protected void |
ReferenceBasedOutlierDetection.updateDensities(WritableDoubleDataStore rbod_score,
DoubleDBIDList referenceDists)
Update the density estimates for each object.
|
Modifier and Type | Field and Description |
---|---|
private WritableDoubleDataStore |
FlexibleLOF.LOFResult.lofs
The LOF values of the objects.
|
private WritableDoubleDataStore |
FlexibleLOF.LOFResult.lrds
The LRD values of the objects.
|
Modifier and Type | Method and Description |
---|---|
WritableDoubleDataStore |
FlexibleLOF.LOFResult.getLofs()
Get the LOF data store.
|
WritableDoubleDataStore |
FlexibleLOF.LOFResult.getLrds()
Get the LRD data store.
|
Modifier and Type | Method and Description |
---|---|
protected void |
COF.computeAverageChainingDistances(KNNQuery<O> knnq,
DistanceQuery<O> dq,
DBIDs ids,
WritableDoubleDataStore acds)
Computes the average chaining distance, the average length of a path
through the given set of points to each target.
|
private void |
COF.computeCOFScores(KNNQuery<O> knnq,
DBIDs ids,
DoubleDataStore acds,
WritableDoubleDataStore cofs,
DoubleMinMax cofminmax)
Compute Connectivity outlier factors.
|
protected void |
INFLO.computeINFLO(Relation<O> relation,
ModifiableDBIDs pruned,
WritableDataStore<ModifiableDBIDs> knns,
WritableDataStore<ModifiableDBIDs> rnns,
WritableDoubleDataStore density,
WritableDoubleDataStore inflos,
DoubleMinMax inflominmax)
Compute the final INFLO scores.
|
protected void |
FlexibleLOF.computeLOFs(KNNQuery<O> knnq,
DBIDs ids,
DoubleDataStore lrds,
WritableDoubleDataStore lofs,
DoubleMinMax lofminmax)
Computes the Local outlier factor (LOF) of the specified objects.
|
private void |
LOF.computeLOFScores(KNNQuery<O> knnq,
DBIDs ids,
DoubleDataStore lrds,
WritableDoubleDataStore lofs,
DoubleMinMax lofminmax)
Compute local outlier factors.
|
protected void |
FlexibleLOF.computeLRDs(KNNQuery<O> knnq,
DBIDs ids,
WritableDoubleDataStore lrds)
Computes the local reachability density (LRD) of the specified objects.
|
private void |
LOF.computeLRDs(KNNQuery<O> knnq,
DBIDs ids,
WritableDoubleDataStore lrds)
Compute the local reachability densities (LRD); see the sketch after this table.
|
protected void |
INFLO.computeNeighborhoods(Relation<O> relation,
KNNQuery<O> knnQuery,
ModifiableDBIDs pruned,
WritableDataStore<ModifiableDBIDs> knns,
WritableDataStore<ModifiableDBIDs> rnns,
WritableDoubleDataStore density)
Compute neighborhoods
|
protected void |
KDEOS.computeOutlierScores(KNNQuery<O> knnq,
DBIDs ids,
WritableDataStore<double[]> densities,
WritableDoubleDataStore kdeos,
DoubleMinMax minmax)
Compute the final KDEOS scores.
|
protected void |
LoOP.computePDists(Relation<O> relation,
KNNQuery<O> knn,
WritableDoubleDataStore pdists)
Compute the probabilistic distances used by LoOP.
|
protected double |
LoOP.computePLOFs(Relation<O> relation,
KNNQuery<O> knn,
WritableDoubleDataStore pdists,
WritableDoubleDataStore plofs)
Compute the PLOF values, using the pdist distances.
|
private void |
SimplifiedLOF.computeSimplifiedLOFs(DBIDs ids,
KNNQuery<O> knnq,
WritableDoubleDataStore slrds,
WritableDoubleDataStore lofs,
DoubleMinMax lofminmax)
Compute the simplified LOF factors.
|
private void |
SimplifiedLOF.computeSimplifiedLRDs(DBIDs ids,
KNNQuery<O> knnq,
WritableDoubleDataStore lrds)
Compute the simplified reachability densities.
|
protected DBIDs |
INFLO.getKNN(DBIDIter q,
KNNQuery<O> knnQuery,
WritableDataStore<ModifiableDBIDs> knns,
WritableDoubleDataStore density)
Get the (forward only) kNN of an object, including the query point
|
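The LRD/LOF method pairs above (FlexibleLOF.computeLRDs / computeLOFs and the LOF.* variants) follow the standard LOF definitions: the local reachability density is the inverse of the mean reachability distance to the k nearest neighbors, and LOF is the mean ratio of the neighbors' LRDs to the object's own LRD. A compact, ELKI-independent sketch over precomputed neighbor lists; knn, knnDist, and kdist are illustrative placeholders (neighbor indices, neighbor distances, and k-distances per object).

```java
/** lrd(p) = 1 / mean over o in kNN(p) of max(kdist(o), d(p, o)) */
static double[] computeLRDs(int[][] knn, double[][] knnDist, double[] kdist) {
  double[] lrds = new double[knn.length];
  for(int i = 0; i < knn.length; i++) {
    double sum = 0.;
    for(int j = 0; j < knn[i].length; j++) {
      // Reachability distance: max of the neighbor's k-distance and the true distance.
      sum += Math.max(kdist[knn[i][j]], knnDist[i][j]);
    }
    lrds[i] = sum > 0. ? knn[i].length / sum : Double.POSITIVE_INFINITY;
  }
  return lrds;
}

/** lof(p) = mean over o in kNN(p) of lrd(o) / lrd(p) */
static double[] computeLOFs(int[][] knn, double[] lrds) {
  double[] lofs = new double[knn.length];
  for(int i = 0; i < knn.length; i++) {
    double sum = 0.;
    for(int neighbor : knn[i]) {
      sum += lrds[neighbor];
    }
    // Duplicate points with infinite LRD are conventionally assigned LOF 1.
    lofs[i] = Double.isInfinite(lrds[i]) ? 1. : sum / (knn[i].length * lrds[i]);
  }
  return lofs;
}
```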
Constructor and Description |
---|
LOFResult(OutlierResult result,
KNNQuery<O> kNNRefer,
KNNQuery<O> kNNReach,
WritableDoubleDataStore lrds,
WritableDoubleDataStore lofs)
Encapsulates information generated during a run of the FlexibleLOF algorithm.
|
Modifier and Type | Method and Description |
---|---|
static WritableDoubleDataStore |
DataStoreUtil.makeDoubleStorage(DBIDs ids,
int hints)
Make a new storage to associate the given ids with a double value (see the usage sketch after this table).
|
WritableDoubleDataStore |
DataStoreFactory.makeDoubleStorage(DBIDs ids,
int hints)
Make a new storage to associate the given ids with a double value.
|
static WritableDoubleDataStore |
DataStoreUtil.makeDoubleStorage(DBIDs ids,
int hints,
double def)
Make a new storage to associate the given ids with a double value, using def as the default value.
|
WritableDoubleDataStore |
DataStoreFactory.makeDoubleStorage(DBIDs ids,
int hints,
double def)
Make a new storage to associate the given ids with a double value, using def as the default value.
|
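A short usage sketch for the makeDoubleStorage factory methods above: allocate a WritableDoubleDataStore for a set of DBIDs, fill it per object, and return it. The hint flags and the stored value are illustrative only; any combination of the DataStoreFactory.HINT_* constants can be passed.

```java
import de.lmu.ifi.dbs.elki.database.datastore.DataStoreFactory;
import de.lmu.ifi.dbs.elki.database.datastore.DataStoreUtil;
import de.lmu.ifi.dbs.elki.database.datastore.WritableDoubleDataStore;
import de.lmu.ifi.dbs.elki.database.ids.DBIDIter;
import de.lmu.ifi.dbs.elki.database.ids.DBIDs;

final class DoubleStoreSketch {
  static WritableDoubleDataStore scoreAll(DBIDs ids) {
    // Allocate a per-object double storage, defaulting to 0.
    WritableDoubleDataStore scores = DataStoreUtil.makeDoubleStorage(ids,
        DataStoreFactory.HINT_HOT | DataStoreFactory.HINT_TEMP, 0.);
    for(DBIDIter iter = ids.iter(); iter.valid(); iter.advance()) {
      double score = 0.; // placeholder: a real algorithm computes a per-object value here
      scores.putDouble(iter, score);
      // scores.doubleValue(iter) reads the value back without boxing.
    }
    return scores;
  }
}
```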
Modifier and Type | Class and Description |
---|---|
class |
ArrayDoubleStore
A class to answer representation queries using a stored array of doubles.
|
class |
MapIntegerDBIDDoubleStore
Writable data store for double values.
|
Modifier and Type | Method and Description |
---|---|
WritableDoubleDataStore |
MemoryDataStoreFactory.makeDoubleStorage(DBIDs ids,
int hints)
|
WritableDoubleDataStore |
MemoryDataStoreFactory.makeDoubleStorage(DBIDs ids,
int hints,
double def)
|
Modifier and Type | Field and Description |
---|---|
(package private) WritableDoubleDataStore |
InMemoryInvertedIndex.length
Length storage.
|
Modifier and Type | Method and Description |
---|---|
private double |
InMemoryInvertedIndex.naiveQuery(V obj,
WritableDoubleDataStore scores,
HashSetModifiableDBIDs cands)
Query the most similar objects, generic version (dispatches to the dense or sparse variant; see the sketch after this table).
|
private double |
InMemoryInvertedIndex.naiveQueryDense(NumberVector obj,
WritableDoubleDataStore scores,
HashSetModifiableDBIDs cands)
Query the most similar objects, dense version.
|
private double |
InMemoryInvertedIndex.naiveQuerySparse(SparseNumberVector obj,
WritableDoubleDataStore scores,
HashSetModifiableDBIDs cands)
Query the most similar objects, sparse version.
|
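The naiveQuery* methods above score candidates by scanning only the postings lists of the query's non-zero dimensions, accumulating partial dot products into a WritableDoubleDataStore. An ELKI-independent sketch of that accumulation pattern using plain Java collections; all names are illustrative, and the real index additionally keeps per-object vector lengths (the length field listed above).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

final class InvertedListSketch {
  /** One posting: an object id and its value in this dimension. */
  static final class Posting {
    final int id;
    final double value;
    Posting(int id, double value) {
      this.id = id;
      this.value = value;
    }
  }

  /** Postings lists, one per dimension that is non-zero for some indexed object. */
  private final Map<Integer, List<Posting>> index = new HashMap<>();

  /** Index a sparse vector given as a dimension-to-value map. */
  void insert(int id, Map<Integer, Double> vec) {
    for(Map.Entry<Integer, Double> e : vec.entrySet()) {
      index.computeIfAbsent(e.getKey(), d -> new ArrayList<>()).add(new Posting(id, e.getValue()));
    }
  }

  /** Accumulate dot-product scores for all objects sharing a non-zero dimension with the query. */
  Map<Integer, Double> naiveQuerySparse(Map<Integer, Double> query) {
    Map<Integer, Double> scores = new HashMap<>();
    for(Map.Entry<Integer, Double> q : query.entrySet()) {
      List<Posting> list = index.get(q.getKey());
      if(list == null) {
        continue; // no indexed object uses this dimension
      }
      for(Posting p : list) {
        scores.merge(p.id, q.getValue() * p.value, Double::sum);
      }
    }
    return scores;
  }
}
```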
Modifier and Type | Field and Description |
---|---|
(package private) WritableDoubleDataStore |
WriteDoubleDataStoreProcessor.store
Store to write to
|
Constructor and Description |
---|
WriteDoubleDataStoreProcessor(WritableDoubleDataStore store)
Constructor.
|
Copyright © 2015 ELKI Development Team, Lehr- und Forschungseinheit für Datenbanksysteme, Ludwig-Maximilians-Universität München.