This blog post introduces an explainable Convolutional Neural Network (CNN) model I developed for the Fashion MNIST dataset. The work was initially presented at the ICCKE 2022 conference.
Overview of the CNN Model
CNNs have become the go-to models for image classification tasks due to their ability to automatically and adaptively learn spatial hierarchies of features through backpropagation. The model I developed is designed not only for accuracy but also for interpretability—providing insights into why the model makes certain predictions.
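To make this concrete, here is a minimal sketch of what such a CNN might look like in Keras. This is a generic illustration for 28×28 grayscale input, not the exact architecture from the paper:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# A minimal illustrative CNN for 28x28 grayscale Fashion MNIST images.
# This is a generic sketch, not the exact model presented at ICCKE 2022.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, (3, 3), activation="relu"),  # low-level edges/textures
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu",
                  name="last_conv"),                # higher-level shape features
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),         # 10 clothing categories
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```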
I used the Fashion MNIST dataset, a popular alternative to the traditional MNIST dataset. It consists of 70,000 grayscale images (28×28 pixels) spanning 10 categories of clothing items, such as shirts, sneakers, and dresses. The challenge lies in correctly classifying each item into its category.
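The dataset ships with Keras, split into 60,000 training and 10,000 test images, and can be loaded in a few lines:

```python
import tensorflow as tf

# Fashion MNIST comes bundled with Keras: 60,000 training and 10,000 test images.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()

# Scale pixel values to [0, 1] and add a channel axis for the CNN.
x_train = x_train[..., None].astype("float32") / 255.0
x_test = x_test[..., None].astype("float32") / 255.0

# The 10 class labels, in the order used by the dataset.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
```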
Explainability
Explainability in AI is about making the inner workings of machine learning models transparent. This is especially important in fields where trust and accountability are paramount. By incorporating explainability techniques, the model provides a clearer understanding of how it reaches its conclusions.
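As an illustration, the sketch below implements Grad-CAM, one widely used explainability technique for CNNs; the specific method used in the paper may differ. It assumes the `model` sketched above and takes the name of its last convolutional layer:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name="last_conv"):
    """Return a normalized Grad-CAM heatmap for the model's predicted class.

    A sketch of Grad-CAM, one common explainability technique; not
    necessarily the method used in the paper. `image` is a single
    (28, 28, 1) float32 array.
    """
    # Model mapping the input to the last conv activations and the predictions.
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        top_class = int(tf.argmax(preds[0]))
        top_score = preds[:, top_class]
    # Gradient of the predicted class score w.r.t. the conv feature maps.
    grads = tape.gradient(top_score, conv_out)
    # Channel importance weights: global-average-pool the gradients.
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of feature maps, then ReLU to keep positive evidence only.
    heatmap = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()

# Example: explain the prediction for one test image.
# heatmap = grad_cam(model, x_test[0])
```

Upsampling the resulting heatmap to 28×28 and overlaying it on the input image highlights the regions that most influenced the predicted class.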
I explain the concept of Explainable AI in more detail in this article.
Access the Code
The code for this model, along with detailed documentation, is available on GitHub. It includes not only the implementation of the CNN but also the methods for visualizing and explaining the results.
Presentation at ICCKE 2022
This project was presented at the International Conference on Computer and Knowledge Engineering (ICCKE) in 2022. The conference provided an excellent platform to share and discuss cutting-edge research with peers from around the world.
If you are interested in a more in-depth explanation, including the technical aspects of the model, you can watch the recording of my presentation, delivered in Persian: