Published on Tue Jul 13 2021

Designing Interpretable Convolution-Based Hybrid Networks for Genomics

Ghotra, R. S., Lee, N. K., Tripathy, R., Koo, P. K.

Abstract

Hybrid networks that build upon convolutional layers with attention mechanisms have demonstrated improved performance relative to pure convolutional networks across many regulatory genome analysis tasks. Their inductive bias toward learning long-range interactions provides an avenue to identify learned motif-motif interactions. For attention maps to be interpretable, however, the convolutional layer(s) must learn identifiable motifs. Here we systematically investigate the extent to which architectural choices in convolution-based hybrid networks influence learned motif representations in first-layer filters, as well as the reliability of their attribution maps generated by saliency analysis. We find that design principles previously identified in standard convolutional networks also generalize to hybrid networks. This work helps narrow the spectrum of architectural choices when designing hybrid networks so that they are amenable to commonly used interpretability methods in genomics.