Welcome to SUOD’s documentation!


Background: Outlier detection (OD) is a key data mining task for identifying abnormal objects in general samples, with numerous high-stakes applications including fraud detection and intrusion detection. Due to the lack of ground truth labels, practitioners often have to build a large number of heterogeneous unsupervised models (i.e., different algorithms and hyperparameters) for further combination and analysis with ensemble learning, rather than relying on a single model. However, this yields severe scalability issues on high-dimensional, large datasets.

SUOD (Scalable Unsupervised Outlier Detection) is an acceleration framework for training and predicting with large numbers of heterogeneous unsupervised outlier detectors. It accelerates along three complementary aspects (dimensionality reduction for high-dimensional data, model approximation for complex models, and execution efficiency improvements for task-load imbalance in distributed systems), while controlling the degradation of detection performance.
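
For intuition, the dimensionality-reduction step can be thought of as a Johnson–Lindenstrauss-style random projection [JL84]. The snippet below is only an illustrative sketch using scikit-learn's GaussianRandomProjection, not SUOD's internal projection module:

# Illustrative sketch only: a Johnson-Lindenstrauss-style random projection,
# using scikit-learn rather than SUOD's internal projection module.
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.RandomState(42)
X = rng.rand(1000, 200)  # 1,000 samples in 200 dimensions (toy data)

# project to a much lower-dimensional space; pairwise distances are
# approximately preserved, so neighbor-based detectors stay meaningful
projector = GaussianRandomProjection(n_components=20, random_state=42)
X_reduced = projector.fit_transform(X)
print(X_reduced.shape)  # (1000, 20)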

Since its inception in September 2019, SUOD has been successfully used in various academic research projects and industry applications, with more than 700,000 downloads; adopters include PyOD [ZNL19] and IQVIA medical claim analysis.

SUOD Flowchart

Key features of SUOD:

  • Unified APIs, detailed documentation, and examples for easy use.

  • Optimized performance with JIT compilation and parallelization where possible, using numba and joblib (a simplified joblib sketch follows this list).

  • Fully compatible with the models in PyOD.

  • Customizable modules and flexible design: each module may be turned on/off or totally replaced by custom functions.
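
As a rough illustration of the joblib-based parallelism, the following is a simplified sketch of fitting several PyOD detectors in parallel. It is not SUOD's balanced parallel scheduler, which additionally rebalances the estimated cost of tasks across workers:

# Simplified sketch: fitting several PyOD detectors in parallel with joblib.
# SUOD's real scheduler additionally balances the estimated cost of each
# task across workers; this only shows the plain joblib pattern.
import numpy as np
from joblib import Parallel, delayed
from pyod.models.hbos import HBOS
from pyod.models.knn import KNN
from pyod.models.lof import LOF

X = np.random.rand(500, 10)  # toy training data
detectors = [LOF(n_neighbors=15), HBOS(), KNN(n_neighbors=10)]

def fit_one(detector, X):
    return detector.fit(X)  # PyOD detectors return self from fit()

fitted = Parallel(n_jobs=3)(delayed(fit_one)(d, X) for d in detectors)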

Roadmap:

  • Provide more choices of distributed schedulers (adapted for SUOD), e.g., batch sampling, Sparrow (SOSP’13), Pigeon (SoCC’19) etc.

  • Enable the flexibility of selecting data projection methods.


API Demo:

from pyod.models.hbos import HBOS
from pyod.models.knn import KNN
from pyod.models.lof import LOF
from pyod.models.ocsvm import OCSVM
from pyod.models.pca import PCA

from suod.models.base import SUOD

contamination = 0.1  # expected fraction of outliers in the data

# initialize a set of base outlier detectors to train and predict on
base_estimators = [
    LOF(n_neighbors=5, contamination=contamination),
    LOF(n_neighbors=15, contamination=contamination),
    LOF(n_neighbors=25, contamination=contamination),
    HBOS(contamination=contamination),
    PCA(contamination=contamination),
    OCSVM(contamination=contamination),
    KNN(n_neighbors=5, contamination=contamination),
    KNN(n_neighbors=15, contamination=contamination),
    KNN(n_neighbors=25, contamination=contamination)]

# initialize a SUOD model (random projection and balanced parallel
# scheduling enabled; model approximation disabled)
model = SUOD(base_estimators=base_estimators, n_jobs=6,  # number of workers
             rp_flag_global=True,  # global flag for random projection
             bps_flag=True,  # global flag for balanced parallel scheduling
             approx_flag_global=False,  # global flag for model approximation
             contamination=contamination)

# X_train and X_test are numeric feature matrices, e.g., numpy arrays
model.fit(X_train)  # fit all base models on X_train
model.approximate(X_train)  # conduct model approximation if it is enabled
predicted_labels = model.predict(X_test)  # predict binary outlier labels
predicted_scores = model.decision_function(X_test)  # predict raw outlier scores
predicted_probs = model.predict_proba(X_test)  # predict outlying probabilities
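
To run the demo end to end, X_train and X_test can be any numeric arrays. Below is a minimal, hypothetical data setup (not taken from the SUOD examples), together with a standard ROC evaluation for the resulting scores:

# Hypothetical data setup for the demo above: a Gaussian inlier cloud with a
# small fraction of uniformly scattered outliers appended to it.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.RandomState(0)
n_inliers, n_outliers, n_features = 900, 100, 20

X_in = rng.randn(n_inliers, n_features)
X_out = rng.uniform(low=-6, high=6, size=(n_outliers, n_features))
X = np.vstack([X_in, X_out])
y = np.hstack([np.zeros(n_inliers), np.ones(n_outliers)])  # 1 = outlier

# simple shuffled split into training and test halves
idx = rng.permutation(len(X))
X_train, X_test = X[idx[:500]], X[idx[500:]]
y_test = y[idx[500:]]

# after running the demo above on these arrays:
# print(roc_auc_score(y_test, predicted_scores))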

The corresponding paper is published at the Conference on Machine Learning and Systems (MLSys). See https://mlsys.org/ for more information.

If you use SUOD in a scientific publication, we would appreciate citations to the following paper:

@inproceedings{zhao2021suod,
  title={SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection},
  author={Zhao, Yue and Hu, Xiyang and Cheng, Cheng and Wang, Cong and Wan, Changlin and Wang, Wen and Yang, Jianing and Bai, Haoping and Li, Zheng and Xiao, Cao and others},
  booktitle={Proceedings of Machine Learning and Systems},
  year={2021}
}
Zhao, Y., Hu, X., Cheng, C., Wang, C., Wan, C., Wang, W., Yang, J., Bai, H., Li, Z., Xiao, C. and Wang, Y., 2021. SUOD: Accelerating Large-scale Unsupervised Heterogeneous Outlier Detection. Proceedings of Machine Learning and Systems (MLSys).



References

JL84

William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.

KKSZ11

Hans-Peter Kriegel, Peer Kröger, Erich Schubert, and Arthur Zimek. Interpreting and unifying outlier scores. In Proceedings of the 2011 SIAM International Conference on Data Mining, pages 13–24. SIAM, 2011.

ZNL19

Yue Zhao, Zain Nasrullah, and Zheng Li. PyOD: A Python toolbox for scalable outlier detection. Journal of Machine Learning Research, 20:1–7, 2019.
