Sustainable and Interpretable Neural Networks Group
We are a research group at the Jagiellonian University in Kraków. Our work focuses on developing interpretable-by-design algorithms for computer vision. We emphasize approaches grounded in cognitive theories and pay special attention to their intuitiveness and sustainability. We aim to provide trustworthy methods that can be easily adopted by industry.
Achievements
LucidPPN: Unambiguous Prototypical Parts Network for User-centric Interpretable Computer Vision
Token Recycling for Efficient Sequential Inference with Vision Transformers (Jan Olszewski, Dawid Rymarczyk, Piotr Wójcik, Mateusz Pach, Bartosz Zieliński)
Revisiting FunnyBirds evaluation framework for prototypical parts networks (paper)
Interpretability Benchmark for Evaluating Spatial Misalignment of Prototypical Parts Explanations (paper)
ICICLE: Interpretable Class Incremental Continual Learning (paper)
OPUS (grant)
ProtoSeg: Interpretable Semantic Segmentation with Prototypical Parts (paper)
PRELUDIUM 21 (grant)
Interpretable image classification with differentiable prototypes assignment (paper)
ProtoMIL: Multiple Instance Learning with Prototypical Parts for Whole-Slide Image Classification (paper)
ProtoPShare: Prototypical Parts Sharing for Similarity Discovery in Interpretable Image Classification (paper)
Posts
Algorithms in everyday life: understand or trust?
The impact of algorithms on everyday decisions. The information on which we base our everyday professional, economic, and even worldview choices increasingly reaches us through now-ubiquitous algorithms. Although we use curated...