
    Got an interesting question today about #Fedify's outgoing #queue design!

    Tags: queue, fedify, fedidev, fediverse, activitypub
    • fedify@hollo.social

      Got an interesting question today about #Fedify's outgoing #queue design!

      Some users noticed we create separate queue messages for each recipient inbox rather than queuing a single message and handling the splitting later. There's a good reason for this approach.

      In the #fediverse, server response times vary dramatically—some respond quickly, others slowly, and some might be temporarily down. If we processed deliveries in a single task, the entire batch would be held up by the slowest server in the group.

      By creating individual queue items for each recipient:

      • Fast servers get messages delivered promptly
      • Slow servers don't delay delivery to others
      • Failed deliveries can be retried independently
      • Your UI remains responsive while deliveries happen in the background

      It's a classic trade-off: we generate more queue messages, but gain better resilience and user experience in return.

      This is particularly important in federated networks where server behavior is unpredictable and outside our control. We'd rather optimize for making sure your posts reach their destinations as quickly as possible!
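
      To picture the approach, here is a minimal sketch with hypothetical names (not Fedify's actual internals): every recipient inbox gets its own queue message, so latency and retries stay isolated per recipient.

      // Hypothetical sketch: one queue message per recipient inbox.
      interface DeliveryMessage {
        activityId: string;  // which serialized activity to deliver
        inboxUrl: string;    // the single inbox this message targets
        attempt: number;     // retry counter for this inbox only
      }
      async function enqueueDeliveries(
        queue: { enqueue(message: DeliveryMessage): Promise<void> },
        activityId: string,
        inboxUrls: string[],
      ): Promise<void> {
        // A slow or failing server only delays (or retries) its own message.
        for (const inboxUrl of inboxUrls) {
          await queue.enqueue({ activityId, inboxUrl, attempt: 0 });
        }
      }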

      What other aspects of Fedify's design would you like to hear about? Let us know!

      #ActivityPub #fedidev

      • julian@community.nodebb.org

        @fedify@hollo.social that's interesting! I didn't even consider that, but it makes a lot of sense.

        • julian@fietkau.social

          @fedify Thanks for the explanation. 🙂 I have a related question that I've been meaning to ask:

          If I give Fedify's sendActivity a list of recipients, does it do any deduplication or are activities sent twice if the same remote actor appears twice in my recipient list?

          For example, a common addressing case is followers + tagged, and I've been wondering if I should check these for overlap myself or if I can leave it to Fedify. I couldn't find anything on this in the documentation.

          • fedify@hollo.social, in reply to @julian@fietkau.social

            @julian@fietkau.social They are not deduplicated, but no worries! Even if the same activity is sent to the same recipient more than once, only the first one is received and the rest are ignored.

            • julian@fietkau.social, in reply to @fedify@hollo.social

              @fedify Thank you! I was a bit concerned with network traffic as well, although I guess in the grand scheme it's not too much. Although maybe I'll put in a duplicate filter in my code after all. 🤔
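
              (For reference, such a duplicate filter could be as simple as the following sketch; the Recipient shape here is assumed, so adapt it to whatever you actually pass to sendActivity.)

              // Hypothetical duplicate filter: keep the first occurrence of each actor.
              interface Recipient {
                id: URL | null;
                inboxId: URL | null;
              }
              function dedupeRecipients(recipients: Recipient[]): Recipient[] {
                const seen = new Set<string>();
                return recipients.filter((recipient) => {
                  const key = recipient.id?.href ?? recipient.inboxId?.href ?? "";
                  if (seen.has(key)) return false;
                  seen.add(key);
                  return true;
                });
              }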

              • possiblymax@hachyderm.io

                @fedify How many queues do you use? Is it based on any mathematical rules like number of users vs cpu cores, or memory requirements? Do you always spin up a new queue or cap the number and reuse the resources as they come available?

                • fedify@hollo.social, in reply to @possiblymax@hachyderm.io

                  @PossiblyMax@hachyderm.io Great question about our queue implementation! Fedify doesn't actually create separate physical queues, but rather uses a single logical queue where each message contains its own destination information.

                  For resource management, we generally rely on the underlying queue implementation (Redis, PostgreSQL, etc.) to handle concurrent processing efficiently. Since version 1.0.0, we've introduced ParallelMessageQueue which processes multiple messages concurrently with a configurable worker count—usually set close to your CPU core count for IO-bound operations.

                  We don't spin up new queues dynamically; instead, we focus on making the message processing scalable. You can control the parallelism level when using ParallelMessageQueue, and for high-volume instances, you can horizontally scale by running multiple worker processes that connect to the same shared queue backend.

                  This approach keeps the architecture simpler while still allowing for good throughput and resource utilization that can scale with your instance size.
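
                  A rough configuration sketch of that setup (assuming the @fedify/redis backend package; exact import paths and signatures may differ across Fedify versions):

                  // Sketch: one logical queue, processed by up to 4 concurrent workers.
                  import { createFederation, ParallelMessageQueue } from "@fedify/fedify";
                  import { RedisKvStore, RedisMessageQueue } from "@fedify/redis";
                  import { Redis } from "ioredis";
                  const federation = createFederation<void>({
                    kv: new RedisKvStore(new Redis()),
                    // Wrap the backing queue so several messages are handled in parallel;
                    // run multiple worker processes against the same Redis to scale out.
                    queue: new ParallelMessageQueue(
                      new RedisMessageQueue(() => new Redis()),
                      4,
                    ),
                  });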

                  • fedify@hollo.social

                    Coming soon in #Fedify 1.5.0: Smart fan-out for efficient activity delivery!

                    After getting feedback about our queue design, we're excited to introduce a significant improvement for accounts with large follower counts.

                    As we discussed in our previous post, Fedify currently creates separate queue messages for each recipient. While this approach offers excellent reliability and individual retry capabilities, it causes performance issues when sending activities to thousands of followers.

                    Our solution? A new two-stage “fan-out” approach:

                    1. When you call Context.sendActivity(), we'll now enqueue just one consolidated message containing your activity payload and recipient list
                    2. A background worker then processes this message and re-enqueues individual delivery tasks
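
                    Conceptually, the second stage looks something like this simplified sketch (made-up names, not the actual implementation):

                    // Simplified sketch of the fan-out worker (hypothetical names).
                    interface FanoutMessage {
                      activity: unknown;    // the payload, stored once
                      inboxUrls: string[];  // every recipient inbox
                    }
                    async function handleFanout(
                      message: FanoutMessage,
                      enqueueDelivery: (activity: unknown, inboxUrl: string) => Promise<void>,
                    ): Promise<void> {
                      // Expand the one consolidated message into independent
                      // per-inbox delivery tasks, each retried on its own.
                      for (const inboxUrl of message.inboxUrls) {
                        await enqueueDelivery(message.activity, inboxUrl);
                      }
                    }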

                    The benefits are substantial:

                    • Context.sendActivity() returns almost instantly, even for massive follower counts
                    • Memory usage is dramatically reduced by avoiding payload duplication
                    • UI responsiveness improves since web requests complete quickly
                    • The same reliability for individual deliveries is maintained

                    For developers with specific needs, we're adding a fanout option with three settings:

                    • "auto" (default): Uses fanout for large recipient lists, direct delivery for small ones
                    • "skip": Bypasses fanout when you need different payload per recipient
                    • "force": Always uses fanout even with few recipients
                    // Example with custom fanout setting
                    await ctx.sendActivity(
                      { identifier: "alice" },
                      recipients,
                      activity,
                      { fanout: "skip" }  // Directly enqueues individual messages
                    );
                    

                    This change represents months of performance testing and should make Fedify work beautifully even for extremely popular accounts!

                    For more details, check out our docs.

                    What other #performance optimizations would you like to see in future Fedify releases?

                    #ActivityPub #fedidev

                    • silverpill@mitra.social, in reply to @fedify@hollo.social

                      @fedify Why was there a bottleneck at the "create messages for each recipient" step?

                      My implementation works like your fan-out mechanism, but I am not completely satisfied with it, and I am thinking about switching to single-recipient queues.

                      • julian@community.nodebb.org, in reply to @silverpill@mitra.social

                        @silverpill@mitra.social it sounds like @fedify@hollo.social was generating one activity for each recipient, which, if combined with relatively heavy database calls, would scale quite poorly, even when processed asynchronously.

                        Batching those db calls is always a low hanging fruit in terms of code optimization.
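
                        Purely as an illustration of that point (hypothetical table and client names, nothing Fedify-specific):

                        // One query per recipient scales poorly...
                        type Db = {
                          query(sql: string, params: unknown[]): Promise<{ rows: { inbox_url: string }[] }>;
                        };
                        async function inboxesOneByOne(db: Db, actorIds: string[]): Promise<string[]> {
                          const inboxes: string[] = [];
                          for (const id of actorIds) {
                            const { rows } = await db.query("SELECT inbox_url FROM actors WHERE id = $1", [id]);
                            inboxes.push(rows[0].inbox_url);
                          }
                          return inboxes;
                        }
                        // ...while one batched query fetches the whole list at once.
                        async function inboxesBatched(db: Db, actorIds: string[]): Promise<string[]> {
                          const { rows } = await db.query(
                            "SELECT inbox_url FROM actors WHERE id = ANY($1)",
                            [actorIds],
                          );
                          return rows.map((row) => row.inbox_url);
                        }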
