Package | Description |
---|---|
`de.lmu.ifi.dbs.elki.algorithm.clustering.hierarchical` | Hierarchical agglomerative clustering (HAC). |
`de.lmu.ifi.dbs.elki.algorithm.clustering.optics` | OPTICS family of clustering algorithms. |
`de.lmu.ifi.dbs.elki.algorithm.clustering.subspace` | Axis-parallel subspace clustering algorithms. The clustering algorithms in this package are instances of both projected clustering algorithms and subspace clustering algorithms, according to the classical (but somewhat obsolete) classification schema of clustering algorithms for axis-parallel subspaces. |
`de.lmu.ifi.dbs.elki.database.datastore` | General data store layer API (along the lines of `Map<DBID, T>`; use everywhere!) |
`de.lmu.ifi.dbs.elki.database.datastore.memory` | Memory data store implementation for ELKI. |
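The datastore package is described as working "along the lines of `Map<DBID, T>`". As a rough, self-contained analogy (plain Java with `int` keys, not the ELKI API; the class name here is illustrative), a writable data store associates database object IDs with values:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: ELKI's real WritableDataStore implementations
// are more specialized (primitive-typed variants, array- or map-backed),
// but the contract resembles a Map from database object ID to value.
class SimpleDataStore<T> {
    private final Map<Integer, T> data = new HashMap<>();

    // Associate a value with a database object ID.
    public T put(int dbid, T value) {
        return data.put(dbid, value);
    }

    // Retrieve the value stored for an ID, or null if absent.
    public T get(int dbid) {
        return data.get(dbid);
    }
}
```

A `WritableDBIDDataStore` is then the special case where the stored value type is itself a DBID, i.e. a store mapping objects to other objects.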
Modifier and Type | Method and Description |
---|---|
`private void` | `CLINK.clinkstep3(DBIDRef id, DBIDArrayIter i, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` Third step: determine the values for P and L. |
`private void` | `CLINK.clinkstep4567(DBIDRef id, ArrayDBIDs ids, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` Fourth to seventh step of CLINK: find the best insertion. |
`private void` | `CLINK.clinkstep8(DBIDRef id, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` Update the hierarchy. |
`protected void` | `AbstractHDBSCAN.convertToPointerRepresentation(ArrayDBIDs ids, DoubleLongHeap heap, WritableDBIDDataStore pi, WritableDoubleDataStore lambda)` Convert the spanning tree to a pointer representation. |
`protected int` | `AnderbergHierarchicalClustering.findMerge(int size, double[] scratch, DBIDArrayIter ix, DBIDArrayIter iy, double[] bestd, int[] besti, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableIntegerDataStore csize)` Perform the next merge step. |
`protected int` | `AGNES.findMerge(int size, double[] scratch, DBIDArrayIter ix, DBIDArrayIter iy, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableIntegerDataStore csize)` Perform the next merge step in AGNES. |
`protected void` | `AnderbergHierarchicalClustering.merge(int size, double[] scratch, DBIDArrayIter ix, DBIDArrayIter iy, double[] bestd, int[] besti, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableIntegerDataStore csize, double mindist, int x, int y)` Execute the cluster merge. |
`protected void` | `AGNES.merge(int size, double[] scratch, DBIDArrayIter ix, DBIDArrayIter iy, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableIntegerDataStore csize, double mindist, int x, int y)` Execute the cluster merge. |
`protected void` | `SLINK.process(DBIDRef id, ArrayDBIDs ids, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` SLINK main loop. |
`protected void` | `CLINK.process(DBIDRef id, ArrayDBIDs ids, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` CLINK main loop, based on the SLINK main loop. |
`private void` | `SLINK.slinkstep3(DBIDRef id, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, WritableDoubleDataStore m)` Third step: determine the values for P and L. |
`private void` | `SLINK.slinkstep4(DBIDRef id, DBIDArrayIter it, int n, WritableDBIDDataStore pi, WritableDoubleDataStore lambda)` Fourth step: actualize the clusters if necessary. |
`private void` | `SLINKHDBSCANLinearMemory.step1(DBIDRef id, WritableDBIDDataStore pi, WritableDoubleDataStore lambda)` First step: initialize P(id) = id, L(id) = infinity. |
`private void` | `SLINKHDBSCANLinearMemory.step3(DBIDRef id, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, DBIDs processedIDs, WritableDoubleDataStore m)` Third step: determine the values for P and L. |
`private void` | `SLINKHDBSCANLinearMemory.step4(DBIDRef id, WritableDBIDDataStore pi, WritableDoubleDataStore lambda, DBIDs processedIDs)` Fourth step: actualize the clusters if necessary. |
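The `pi`/`lambda` stores threaded through these methods hold the pointer representation of a dendrogram used by SLINK: P(i) points to the lowest-indexed object of the cluster that object i next merges into, and L(i) is the distance at which that merge happens; step 1 initializes P(id) = id and L(id) = infinity. A minimal, self-contained sketch of single-linkage clustering in this representation, using plain Java arrays in place of ELKI's `WritableDBIDDataStore`/`WritableDoubleDataStore` (illustrative, not the library code; 1-D points keep the distance function trivial):

```java
// Minimal SLINK-style single-linkage clustering on 1-D points.
// pi[i] = P(i), lambda[i] = L(i): the pointer representation.
final class SlinkSketch {
    static void slink(double[] pts, int[] pi, double[] lambda) {
        int n = pts.length;
        double[] m = new double[n];
        for (int k = 0; k < n; k++) {
            // Step 1: initialize P(k) = k, L(k) = infinity.
            pi[k] = k;
            lambda[k] = Double.POSITIVE_INFINITY;
            // Step 2: distances from the new point to all earlier points.
            for (int i = 0; i < k; i++)
                m[i] = Math.abs(pts[i] - pts[k]);
            // Step 3: determine the values for P and L.
            for (int i = 0; i < k; i++) {
                if (lambda[i] >= m[i]) {
                    m[pi[i]] = Math.min(m[pi[i]], lambda[i]);
                    lambda[i] = m[i];
                    pi[i] = k;
                } else {
                    m[pi[i]] = Math.min(m[pi[i]], m[i]);
                }
            }
            // Step 4: actualize the clusters if necessary.
            for (int i = 0; i < k; i++)
                if (lambda[i] >= lambda[pi[i]])
                    pi[i] = k;
        }
    }
}
```

For points {0, 1, 10} this yields P = [1, 2, 2] and L = [1, 9, infinity]: object 0 joins object 1 at distance 1, and that cluster joins object 10 at distance 9. The linear memory footprint (three arrays of size n) is what the "LinearMemory" in `SLINKHDBSCANLinearMemory` refers to.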
Modifier and Type | Field and Description |
---|---|
`(package private) WritableDBIDDataStore` | `ClusterOrder.predecessor` Predecessor storage. |
`(package private) WritableDBIDDataStore` | `OPTICSList.Instance.predecessor` Predecessor storage. |
`protected WritableDBIDDataStore` | `GeneralizedOPTICS.Instance.predecessor` Predecessor storage. |
Constructor and Description |
---|
`ClusterOrder(String name, String shortname, ArrayModifiableDBIDs ids, WritableDoubleDataStore reachability, WritableDBIDDataStore predecessor)` Constructor. |
`CorrelationClusterOrder(String name, String shortname, ArrayModifiableDBIDs ids, WritableDoubleDataStore reachability, WritableDBIDDataStore predecessor, WritableIntegerDataStore corrdim)` Constructor. |
Constructor and Description |
---|
`DiSHClusterOrder(String name, String shortname, ArrayModifiableDBIDs ids, WritableDoubleDataStore reachability, WritableDBIDDataStore predecessor, WritableIntegerDataStore corrdim, WritableDataStore<long[]> commonPreferenceVectors)` Constructor. |
Modifier and Type | Method and Description |
---|---|
`static WritableDBIDDataStore` | `DataStoreUtil.makeDBIDStorage(DBIDs ids, int hints)` Make a new storage to associate the given ids with DBID values. |
`WritableDBIDDataStore` | `DataStoreFactory.makeDBIDStorage(DBIDs ids, int hints)` Make a new storage to associate the given ids with DBID values. |
Modifier and Type | Class and Description |
---|---|
`class` | `ArrayDBIDStore` A class to answer representation queries using the stored array. |
`class` | `MapIntegerDBIDDBIDStore` Writable data store for DBID values. |
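These two classes reflect the two usual backing strategies for an ID-to-ID store: a dense array indexed by ID offset (efficient when the IDs form a contiguous range) versus an integer-keyed map (pays per-entry overhead but handles arbitrary, sparse IDs). A simplified, self-contained sketch of the trade-off in plain Java (illustrative only; ELKI's classes operate on DBIDs and use optimized primitive maps rather than `HashMap`):

```java
import java.util.HashMap;
import java.util.Map;

// Dense backing: O(1) array access, best when IDs cover [min, min + size).
final class DenseIntStore {
    private final int min;
    private final int[] data;

    DenseIntStore(int min, int size) {
        this.min = min;
        this.data = new int[size];
    }

    void put(int id, int value) { data[id - min] = value; }
    int get(int id) { return data[id - min]; }
}

// Sparse backing: a map, so any ID works, at higher per-entry cost.
final class SparseIntStore {
    private final Map<Integer, Integer> data = new HashMap<>();

    void put(int id, int value) { data.put(id, value); }
    int get(int id) { return data.getOrDefault(id, -1); }
}
```

The `hints` argument to `makeDBIDStorage` is how callers let the factory pick between such backings without committing to one in their own code.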
Modifier and Type | Method and Description |
---|---|
`WritableDBIDDataStore` | `MemoryDataStoreFactory.makeDBIDStorage(DBIDs ids, int hints)` |
Copyright © 2015 ELKI Development Team, Lehr- und Forschungseinheit für Datenbanksysteme, Ludwig-Maximilians-Universität München. License information.