Bei Ouyang (欧阳蓓)

Resource-Efficient Collaborative Edge Transformer Inference with Hybrid Model Parallelism

May 25, 2025

Shengyuan Ye, Bei Ouyang, Jiangsu Du, Liekang Zeng, Tianyi Qian, Wenzhong Ou, Xiaowen Chu, Deke Guo, Yutong Lu, Xu Chen
Type: Journal Paper
Publication: IEEE Transactions on Mobile Computing
Last updated on May 25, 2025
Tags: Large Language Models, Edge Intelligence


© 2025 Me. This work is licensed under CC BY NC ND 4.0
