Talk:Federated learning

Untitled (I)

Limwyb (talk) 06:36, 7 January 2020 (UTC) The paper "Federated Learning in Mobile Edge Networks: A Comprehensive Survey" is a useful resource that discusses the technical issues of FL and can be considered an additional reference: https://arxiv.org/abs/1909.11875

(I'm not a regular Wikipedia contributor, so I'm not sure if my comment is in the correct place.) The image has two spelling glitches and a grammar glitch. In "Step 4" the text reads, "Central server pools model results and generate on global mode without accessing any data" -- it should read "Central server pools model results and generates one aggregated global model without accessing any data". — Preceding unsigned comment added by 2601:600:8180:4460:9926:63FE:E74D:2658 (talk) 00:10, 8 February 2020 (UTC)

Untitled (II)

Francesco Linsalata: It is a novel topic and really interesting to read. I have corrected only minor typos.

MicheleAzzone (talk) 13:11, 26 June 2020 (UTC) Hi, the paragraph Federated learning parameters is missing a reference. The first paragraph in Use cases and the last one in Personalization are also missing references.

If possible, add page numbers to your references to make them easier to verify.

--Luca Barbieri94 (talk) 14:17, 26 June 2020 (UTC) Thank you MicheleAzzone for your suggestions. I will try to find references to verify those paragraphs. I'm not sure if I will be able to add one for Federated learning parameters, as I did not modify that paragraph.

--Luca Barbieri94 (talk) 15:23, 26 June 2020 (UTC) I have managed to find some references for verifying the first paragraph in Use cases and the last one in Personalization. Unfortunately, I could not find any reference for the Federated learning parameters paragraph.

Observations and suggestions for improvements

The following observations and suggestions for improvements were collected, following expert review of the article within the Science, Technology, Society and Wikipedia course at the Politecnico di Milano, in June 2020.

decentralized edge devices or servers -> decentralized processing nodes (servers or even edge devices)

uploaded to one server -> are stored on a single server

It would probably be useful to mention that distributed processing is possible even without federation, but in that case there are no constraints in terms of data privacy and communication.

The concept is very well explained in the "Definition" section. I suggest moving (or replicating) that text above.

single point failures -> a single point of failure

Iterative learning section -> I would not mention client-server in the first paragraph, as decentralized approaches are possible; I would refer more generically to "iterative communication steps between processing nodes".

performances -> performance

store examples -> store samples

the technology also avoid -> avoids

""In this section, the exposition of the paper published by H. Brendan McMahan and al. in 2017 is followed.[11]""-> This section follows the exposition in ...

Personalization section -> first sentence is difficult to read

Majorett (talk) 17:22, 18 July 2020 (UTC)

Additional personalization methods

In the subsection on model personalization, only multi-task learning and clustering are mentioned. Other personalization methods for Federated Learning include the following (a sketch of local fine-tuning follows the list):[1]

  • Featurization: Include context information about each node as a feature in the machine learning model.
  • Local fine-tuning: This method consists of training one global model through FL, broadcasting it, and subsequently letting each node personalize the model through additional training on its local data set.
  • Learning-to-Learn: Learn a learning algorithm by sampling from a meta-distribution of machine learning tasks (also known as meta-learning).
  • Model-agnostic Meta-Learning: Optimize a global model specifically as a good starting point for local fine-tuning.
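
For illustration, here is a minimal sketch of the local fine-tuning method (PyTorch assumed; personalize_by_fine_tuning and the data loader are hypothetical names, not taken from the reference):

    import copy
    import torch

    def personalize_by_fine_tuning(global_model, local_loader, epochs=1, lr=1e-3):
        # Local fine-tuning: start from the broadcast global model and
        # continue training on this node's private data only.
        model = copy.deepcopy(global_model)   # keep the global model intact
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for x, y in local_loader:         # data never leaves the node
                optimizer.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                optimizer.step()
        return model                          # the personalized local model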

References

  1. ^ Kairouz, Peter; McMahan, H. Brendan; et al. (10 December 2019). "Advances and Open Problems in Federated Learning". arXiv:1912.04977. Retrieved 20 November 2020. The model personalization methods mentioned on this talk page are described in Section 3.3 of this reference.

Unused notations

This is now in this article:

To describe the federated strategies, let us introduce some notations:

  • K: total number of clients;
  • k: index of clients;
  • n_k: number of data samples available during training for client k;
  • w_t^k: model's weight vector on client k, at federated round t;
  • ℓ(w, b): loss function for weights w and batch b;
  • E: number of local epochs.
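
For concreteness while discussing whether to keep this, a minimal FedAvg-style sketch showing how the notation would be used (plain Python; fedavg_round, local_update, and num_samples are hypothetical names):

    def fedavg_round(w_global, clients, E, local_update):
        # One federated round t: each client k trains for E local epochs
        # starting from the global weights, producing local weights w_t^k;
        # the server then averages them, weighted by the sample counts n_k.
        # Weight vectors can be e.g. NumPy arrays (anything supporting + and *).
        K = len(clients)                      # K: total number of clients
        n = [c.num_samples for c in clients]  # n_k for each client k
        w_local = [local_update(w_global, c, E) for c in clients]  # w_t^k
        total = sum(n)                        # total sample count across clients
        return sum((n[k] / total) * w_local[k] for k in range(K))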

Why are these introduced when they are not used subsequently in the article? Michael Hardy (talk) 16:37, 8 February 2021 (UTC)

I have the same question; I think I'll just remove them. Niplav (talk) 15:51, 24 July 2024 (UTC)
@Jeromemetronome I hope that's not a big issue, but this notation's been unused for >3 years now. Niplav (talk) 16:05, 24 July 2024 (UTC)

Section on variations is maybe too detailed

I think it'd be worth it to tackle this section and straighten it out. Niplav (talk) 16:08, 24 July 2024 (UTC)