Deploying convolutional neural networks (CNNs) on embedded devices is difficult due to limited memory and computation resources. Redundancy in feature maps is an important characteristic of successful CNNs, but it has rarely been investigated in neural architecture design. This paper proposes a novel Ghost module that generates more feature maps from cheap operations. Starting from a set of intrinsic feature maps, a series of cheap linear transformations produces many "ghost" feature maps that reveal the information underlying the intrinsic features.
2019: Kai Han, Yunhe Wang, Qi Tian, Jianyuan Guo, Chunjing Xu, Chang Xu
https://arxiv.org/pdf/1911.11907v2.pdf
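The core idea can be sketched in a few lines. Below is a minimal NumPy illustration, assuming a 1x1 primary convolution to produce the intrinsic maps and a per-channel scaling as a stand-in for the cheap linear transform (the paper itself uses small depthwise convolutions); all names and shapes here are illustrative, not the authors' implementation.

```python
import numpy as np

def ghost_module(x, w_primary, w_cheap):
    """Sketch of a Ghost module on an input of shape (C_in, H, W).

    w_primary: (C_intrinsic, C_in) weights of a 1x1 conv that
               generates the intrinsic feature maps.
    w_cheap:   (C_intrinsic,) per-channel scales standing in for the
               cheap depthwise linear transform of the paper.
    Returns a tensor with 2 * C_intrinsic channels (ratio s = 2).
    """
    # A 1x1 convolution is just a channel-mixing matrix multiply.
    intrinsic = np.einsum('oc,chw->ohw', w_primary, x)
    # One cheap linear op per intrinsic map yields one ghost map each.
    ghost = w_cheap[:, None, None] * intrinsic
    # Identity-mapped intrinsic maps and ghost maps are concatenated.
    return np.concatenate([intrinsic, ghost], axis=0)

x = np.random.rand(4, 8, 8)          # 4 input channels, 8x8 spatial
w_primary = np.random.rand(3, 4)     # 3 intrinsic channels
w_cheap = np.random.rand(3)
out = ghost_module(x, w_primary, w_cheap)
```

With 3 intrinsic channels and one ghost map per intrinsic map, the output has 6 channels at the same spatial resolution, while only the small primary conv and the per-channel cheap ops cost FLOPs.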
Evolutionary Optimization of Model Merging Recipes
EasyJailbreak: A Unified Framework for Jailbreaking Large Language Models
BGE M3-Embedding: Multi-Lingual, Multi-Functionality, Multi-Granularity Text Embeddings Through Self-Knowledge Distillation
Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
Chronos: Learning the Language of Time Series
Linear Transformers with Learnable Kernel Functions are Better In-Context Models
SplattingAvatar: Realistic Real-Time Human Avatars with Mesh-Embedded Gaussian Splatting
Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
TripoSR: Fast 3D Object Reconstruction from a Single Image
Diffusion Model-Based Image Editing: A Survey
The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
Intent-based Prompt Calibration: Enhancing prompt optimization with synthetic boundary cases
Sora: A Review on Background, Technology, Limitations, and Opportunities of Large Vision Models
BitDelta: Your Fine-Tune May Only Be Worth One Bit
Ring Attention with Blockwise Transformers for Near-Infinite Context
Premise Order Matters in Reasoning with Large Language Models
Generative Representational Instruction Tuning