Enabling Flexible Resource Allocation in Mobile Deep Learning Systems
Abstract: Deep learning provides new opportunities for mobile applications to achieve higher performance than before. However, deep learning implementations on mobile devices today demand expensive resources, imposing a significant burden on battery life and limited memory space. Existing methods either rely on cloud or edge infrastructure, which requires uploading user data and thus risks privacy leakage and incurs large data transfers, or adopt compressed deep models, which degrades accuracy. This paper presents Deep Shark, a platform that enables flexible resource allocation on mobile devices running commercial off-the-shelf (COTS) deep learning systems. Compared to existing approaches, Deep Shark seeks a balance between time and memory efficiency according to user requirements: it breaks a sophisticated deep model down into a stream of code blocks and executes those blocks incrementally on the system-on-chip (SoC). As a result, Deep Shark requires significantly less memory on the mobile device while preserving the model's original accuracy. In addition, all user data involved in model processing is handled locally, avoiding unnecessary data transfer and network latency. Deep Shark is currently implemented on two COTS deep learning systems, i.e., Caffe and TensorFlow. The
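To illustrate the idea of incremental block-wise execution described in the abstract, the following is a minimal sketch (not Deep Shark's actual code): a model is partitioned into a stream of blocks, and only one block's weights are resident in memory at a time. The names `load_block`, `run_block`, and `NUM_BLOCKS` are hypothetical stand-ins.

```python
# Minimal sketch of incremental block-wise execution, assuming a model
# partitioned into NUM_BLOCKS sequential blocks. All names here are
# illustrative, not Deep Shark's real API.
import numpy as np

NUM_BLOCKS = 4  # hypothetical number of partitions


def load_block(i):
    """Stand-in loader: returns one block's weights (here, a random matrix)."""
    rng = np.random.default_rng(i)
    return rng.standard_normal((8, 8))


def run_block(weights, x):
    """Stand-in compute: one block modeled as an affine transform + ReLU."""
    return np.maximum(weights @ x, 0.0)


def incremental_forward(x):
    """Execute the block stream one block at a time.

    Each block's weights are loaded, used, and released before the next
    block is touched, so peak memory is bounded by one block rather than
    by the whole model.
    """
    for i in range(NUM_BLOCKS):
        weights = load_block(i)   # bring only this block into memory
        x = run_block(weights, x)
        del weights               # release before loading the next block
    return x


print(incremental_forward(np.ones(8)))
```

Under this sketch, trading off time against memory (as the abstract describes) would amount to choosing how many blocks to keep resident at once: fewer resident blocks lower peak memory at the cost of extra load latency.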