
An Interpretable Deep Learning Model for EEG Signals
  • Amirhessam Tahmassebi, Florida State University
  • Anke Meyer-Baese, Florida State University
  • Amir Gandomi, University of Technology Sydney

Corresponding Author: [email protected]

Abstract

Cutting-edge methods in artificial intelligence (AI) can significantly improve predictive performance. However, the difficulty of interpreting these black-box models poses a serious obstacle to their adoption in industry. When selecting a model, practitioners are often forced to trade accuracy for interpretability. In this paper, we consider a case study on eye state detection from electroencephalogram (EEG) signals to investigate how a deep neural network (DNN) model makes a prediction, and how that prediction can be interpreted.
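To make the interpretability question concrete, the following is a minimal, hypothetical sketch of one common post-hoc technique: a gradient-based saliency map that ranks input channels by their influence on a classifier's output. This is not the paper's model; the tiny network, its weights, the 14-channel layout, and the sample below are all synthetic illustrations.

```python
import numpy as np

# Hypothetical sketch: a tiny one-hidden-layer network and a
# gradient-based saliency map. This is NOT the paper's DNN; the
# architecture, weights, and data are synthetic illustrations.

rng = np.random.default_rng(0)

n_channels = 14                      # e.g. a 14-channel EEG headset
W1 = rng.normal(size=(n_channels, 8))
b1 = np.zeros(8)
w2 = rng.normal(size=8)
b2 = 0.0

def forward(x):
    """Forward pass: tanh hidden layer, sigmoid output (eye open/closed)."""
    h = np.tanh(x @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))
    return h, p

def saliency(x):
    """Gradient of the output probability w.r.t. each input channel.

    A large |gradient| marks channels whose perturbation most changes
    the prediction -- one simple interpretability signal."""
    h, p = forward(x)
    dp_dh = p * (1.0 - p) * w2       # sigmoid' times output weights
    dh_dz = 1.0 - h**2               # tanh'
    return W1 @ (dp_dh * dh_dz)      # chain rule back to the input

x = rng.normal(size=n_channels)      # one synthetic EEG sample
grads = saliency(x)
ranking = np.argsort(-np.abs(grads)) # channels ordered by influence
print(ranking[:3])
```

In practice the same idea is applied to a trained deep network with automatic differentiation; the hand-derived chain rule here simply makes the mechanics visible.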