Keyphrases

100%: Deep Neural Network, End Device, Deep Inference, Time Cooperation
66%: Model Partition, Deep Model
44%: Inference Latency
33%: Execution Latency
22%: Optimal Partition, Min-cut, Directed Acyclic Graph, Latency Measurement, Cut Vertex
11%: Measurement Method, Response Time, Object Recognition, Superior Performance, Network Partitioning, Resource Constraints, Two-stage Approach, Video Data, Increased Throughput, Heavy Weight, Cutting Problem, Network Dynamics, Internet of Things, Intelligent Applications, Self-driving Cars, Automatic Driving, Deep Learning Framework, Inference Tasks, Mobile Cloud, Device Edge, Real Hardware
Computer Science

100%: Deep Neural Network
66%: Partition Model
55%: Neural Network Model
22%: Optimal Partition, Directed Acyclic Graph
11%: Object Recognition, Deep Learning Method, Response Time, Learning Framework, Superior Performance, Computer Hardware, Network Partition, Subgraphs, Inference Task, Resource Constraint, Network Dynamic, Measurement Method, Experimental Result, Internet of Things