FOGML: The Second International IEEE INFOCOM Workshop on Distributed Machine Learning and Fog Networks
Call for Papers
Fog networking is emerging as an end-to-end architecture that aims to distribute computing, storage, control, and networking functions along the cloud-to-things continuum of nodes between datacenters and end users. Fueled by the volumes of data generated by network devices, machine learning has attracted significant attention in fog computing systems, both for providing intelligent applications to end users and for optimizing the operation of wireless and wireline networks. Existing methodologies for distributing machine learning across a set of devices have typically been envisioned for scenarios where device communication and computation properties are homogeneous, and/or where devices are directly connected to an aggregation server. These assumptions often do not hold in contemporary fog network systems, however. This motivates a new paradigm of fog learning that distributes model training over networks in a network-aware manner, i.e., one that considers the structure of the topology among devices, the heterogeneity of node communication and computation capabilities, and the proximity of resource-limited nodes to resource-abundant nodes to optimize training. It also motivates the development of novel machine learning techniques for optimizing the operation of fog network systems, which must account for short-timescale variability in network state due to device mobility.
The Second International IEEE INFOCOM Workshop on Distributed Machine Learning and Fog Networks (FOGML) aims to bring together researchers, developers, and practitioners from academia and industry to innovate at the intersection of distributed machine learning and fog computing. This includes research efforts in developing machine learning methodologies both "for" and "over" networks along the cloud-to-things continuum. Specifically, we solicit research papers on topics including, but not limited to:
- Congestion control and traffic management for distributed machine learning
- Efficient neural network training and design conducted on fog networks
- Task scheduling and resource allocation for distributed machine learning
- User device incentive mechanism design for distributed learning
- Impact of network topology on training of machine learning models
- Uplink/downlink communications modeling and optimization in federated learning
- Hierarchical and multi-layer federated learning for fog networks
- Communication-efficient distributed learning with quantization, sparsification, and model distillation
- Asynchronous and event-triggered distributed model training over fog networks
- Reinforcement learning for signal design, beamforming, channel state estimation, and interference mitigation in wireless networks
- Distributed machine learning for optimizing massive MIMO, mmWave, intelligent reflecting surfaces, and other contemporary communication technologies
- Hybrid terrestrial and non-terrestrial architectures for fog learning
- Straggler mitigation over generalized hierarchical network structures
- Collaborative/cooperative model learning over device-to-device communication structures
- New convergence bounds relating model, learning, and network parameters in distributed training
- Privacy and security considerations in distributed learning over fog network systems
- Network protocol design and optimization for distributed learning
- Intelligent device sampling methods for optimizing federated learning over heterogeneous networks
- Testbeds and experimental results on distributed learning and fog networks
- Green distributed learning algorithm design
Paper Submission Link
Important Dates
Submission Deadline: December 20, 2022
Notification of Acceptance: February 6, 2023
Camera Ready Deadline: March 6, 2023
Workshop: May 20, 2023