Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to limited memory and computational resources. Redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been exploited in neural architecture design. This paper proposes a novel Ghost module that generates more feature maps from cheap operations. Starting from a set of intrinsic feature maps, we apply a series of cheap linear transformations to generate many "ghost" feature maps that fully reveal the information underlying the intrinsic features.
2019: Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
https://arxiv.org/pdf/1911.11907v2.pdf
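The idea is compact enough to sketch in code. Below is a minimal PyTorch illustration of a Ghost module, assuming the common choice of a depthwise convolution as the cheap linear transformation; the class name, the `ratio` parameter, and the layer hyperparameters here are illustrative choices based on the paper's description, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of a Ghost module: an ordinary convolution produces a small set
    of intrinsic feature maps, and a cheap depthwise convolution generates the
    remaining "ghost" feature maps from them."""

    def __init__(self, in_channels, out_channels, ratio=2, kernel_size=1, dw_size=3):
        super().__init__()
        init_channels = out_channels // ratio          # intrinsic feature maps
        ghost_channels = out_channels - init_channels  # ghost feature maps

        # Primary convolution: produces the intrinsic feature maps.
        self.primary_conv = nn.Sequential(
            nn.Conv2d(in_channels, init_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(init_channels),
            nn.ReLU(inplace=True),
        )

        # Cheap operation: a depthwise convolution applied to each
        # intrinsic map (groups=init_channels makes it per-channel).
        self.cheap_op = nn.Sequential(
            nn.Conv2d(init_channels, ghost_channels, dw_size,
                      padding=dw_size // 2, groups=init_channels, bias=False),
            nn.BatchNorm2d(ghost_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        intrinsic = self.primary_conv(x)
        ghost = self.cheap_op(intrinsic)
        # Output = intrinsic maps concatenated with their cheap "ghosts".
        return torch.cat([intrinsic, ghost], dim=1)

# Usage: half the output channels come from the cheap depthwise path.
m = GhostModule(16, 32)
y = m(torch.randn(1, 16, 56, 56))
print(y.shape)  # torch.Size([1, 32, 56, 56])
```

With `ratio=2`, only half of the output channels are computed by a full convolution; the rest cost roughly a depthwise convolution each, which is where the FLOP savings come from.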
CFPNet: Channel-wise Feature Pyramid for Real-Time Semantic Segmentation
OpenPrompt: An Open-source Framework for Prompt-learning
Paint Transformer: Feed Forward Neural Painting with Stroke Prediction
Finetuned Language Models Are Zero-Shot Learners
AugMax: Adversarial Composition of Random Augmentations for Robust Training
NeRV: Neural Representations for Videos
CoAtNet: Marrying Convolution and Attention for All Data Sizes
Self-Supervised Learning by Estimating Twin Class Distributions
Parameter Prediction for Unseen Deep Architectures
Chunked Autoregressive GAN for Conditional Waveform Synthesis
ByteTrack: Multi-Object Tracking by Associating Every Detection Box
Non-deep Networks
Weak Novel Categories without Tears: A Survey on Weak-Shot Learning
Alias-Free Generative Adversarial Networks
Keypoint Communities
Residual Attention: A Simple but Effective Method for Multi-Label Recognition
2nd Place Solution to Google Landmark Retrieval 2021
Mesh Graphormer
Lenia and Expanded Universe