Deploying convolutional neural networks (CNNs) on embedded devices is difficult because of their limited memory and computation resources. The redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module that generates more feature maps from cheap operations. Starting from a set of intrinsic feature maps, a series of inexpensive linear transformations produces many ghost feature maps that reveal the information underlying the intrinsic features.
2019: Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
https://arxiv.org/pdf/1911.11907v2.pdf
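To make the idea concrete, here is a minimal sketch of a Ghost module, assuming PyTorch and using a 3x3 depthwise convolution as the cheap linear transformation; the `ratio` parameter (how many output maps are produced per intrinsic map) and the layer names are illustrative, not the paper's reference implementation.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: an ordinary convolution produces a small set of
    intrinsic feature maps, and a cheap depthwise convolution expands them into
    additional 'ghost' feature maps."""

    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        self.out_channels = out_channels
        init_channels = math.ceil(out_channels / ratio)   # intrinsic feature maps
        cheap_channels = init_channels * (ratio - 1)       # ghost feature maps

        # Ordinary (comparatively expensive) convolution for the intrinsic maps.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )
        # Cheap linear transformation: depthwise conv applied per intrinsic map.
        self.cheap_operation = nn.Sequential(
            nn.Conv2d(init_channels, cheap_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(cheap_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghost = self.cheap_operation(intrinsic)
        out = torch.cat([intrinsic, ghost], dim=1)
        return out[:, :self.out_channels, :, :]
```

Used in place of an ordinary convolution, such a module keeps the output width at `out_channels` while only about `1/ratio` of those channels are computed with a full convolution; the rest come from the cheap depthwise transformation.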
Bridging the Gap between Object and Image-level Representations for Open-Vocabulary Detection
Transformer in Transformer
Vision GNN: An Image is Worth Graph of Nodes
TorchGeo: deep learning with geospatial data
OmniXAI: A Library for Explainable AI
Demystifying MMD GANs
Evaluating Large Language Models Trained on Code
SNUG: Self-Supervised Neural Dynamic Garments
GPT-NeoX-20B: An Open-Source Autoregressive Language Model
BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird’s-Eye View Representation
There is No Data Like More Data - Current Status of Machine Learning Datasets in Remote Sensing
Hopular: Modern Hopfield Networks for Tabular Data
Pretraining is All You Need for Image-to-Image Translation
Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
Recipe for a General, Powerful, Scalable Graph Transformer
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
Symphony Generation with Permutation Invariant Language Model
Towards An End-to-End Framework for Flow-Guided Video Inpainting
Ivy: Templated Deep Learning for Inter-Framework Portability