Keyphrases
Feature Compression (100%)
Inference System (100%)
Device-Edge Co-Inference (100%)
Mobile Devices (75%)
Encoder (50%)
Decoder (50%)
Intermediate Features (50%)
Neural Network (25%)
Resource Consumption (25%)
Classification Accuracy (25%)
Trainable (25%)
Sparsity (25%)
Communication Overhead (25%)
Compression Ratio (25%)
Fault-tolerant (25%)
High Compression Ratio (25%)
Bandwidth Enhancement (25%)
Promising Solutions (25%)
Resource-constrained (25%)
Additive White Gaussian Noise Channel (25%)
Channel Noise (25%)
Intermediate Data (25%)
Efficient Features (25%)
Deep Neural Network (25%)
Binary Erasure Channel (25%)
Deep Learning Model (25%)
Edge Computing (25%)
Intelligent Mobile Applications (25%)
Computing Servers (25%)
Feature Transmission (25%)
Online Computation (25%)
Channel Layer (25%)
Splitting Point (25%)
Model Splitting (25%)
End-to-end Architecture (25%)
Lightweight Neural Network (25%)
Application Demand (25%)
Inference Framework (25%)
Joint Source-Channel Coding (25%)
Bit Compression (25%)
Computer Science
Mobile Device (100%)
Inference System (100%)
Compression Ratio (66%)
Deep Learning Model (33%)
Fault-tolerant (33%)
Deep Neural Network (33%)
Convolutional Neural Network (33%)
Sparsity (33%)
Communication Overhead (33%)
Additive White Gaussian Noise (33%)
Intermediate Data (33%)
Edge Computing (33%)
Joint Source-Channel Coding (33%)
Neural Network (33%)
Mobile Application (33%)
Resource Consumption (33%)