Compare commits

20 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| mr | ef3d998ead | demo test + Peer | 2026-03-03 16:38:24 +01:00 |
| mr | 79aa3cc2b3 | adjust | 2026-02-26 09:14:34 +01:00 |
| mr | 779e36aaef | Pass + Doc | 2026-02-24 14:31:37 +01:00 |
| mr | 572da29fd4 | check up if peer is sourced. | 2026-02-20 15:01:01 +01:00 |
| mr | 3eae5791a1 | Native Indexer Mode | 2026-02-20 12:42:18 +01:00 |
| mr | 88fd05066c | update lightest peer and nats behaviors | 2026-02-18 14:32:44 +01:00 |
| mr | 0250c3b339 | Peer Discovery -> DHT // no more pubsub state | 2026-02-18 13:29:50 +01:00 |
| mr | 6a5ffb9a92 | Indexer Quality Score TrustLess | 2026-02-17 13:11:22 +01:00 |
| mr | fa914958b6 | Keep Peer Caching + Resource Verification. | 2026-02-09 13:28:00 +01:00 |
| mr | 1c0b2b4312 | better tagging | 2026-02-09 09:45:41 +01:00 |
| mr | 631e2846fe | remove apk | 2026-02-09 08:55:50 +01:00 |
| mr | d985d8339a | Change of state Conn Management | 2026-02-05 16:17:33 +01:00 |
| mr | ea14ad3933 | Closure On change of state | 2026-02-05 16:17:14 +01:00 |
| mr | 2e31df89c2 | oc-discovery + auto create peer | 2026-02-05 15:47:29 +01:00 |
| mr | 425cbdfe7d | stream address | 2026-02-05 15:36:22 +01:00 |
| mr | 8ee5b84e21 | publish-registry | 2026-02-05 12:14:02 +01:00 |
| mr | 552bb17e2b | Connectivity ok | 2026-02-05 11:23:11 +01:00 |
| mr | 88e29073a2 | dockerfile default | 2026-02-05 09:31:51 +01:00 |
| mr | b429ee9816 | base demo files | 2026-02-05 08:57:00 +01:00 |
| mr | c716225283 | base demo | 2026-02-05 08:56:55 +01:00 |
89 changed files with 5549 additions and 1050 deletions

ARCHITECTURE.md (new file, +495 lines)

@@ -0,0 +1,495 @@
# oc-discovery — Architecture and technical analysis
> **Reading convention**
> Items marked ✅ have been fixed in the code. Items marked ⚠️ remain open.
## Table of contents
1. [Overview](#1-overview)
2. [Role hierarchy](#2-role-hierarchy)
3. [Core mechanisms](#3-core-mechanisms)
- 3.1 Long-lived heartbeat (node → indexer)
- 3.2 Trust scoring
- 3.3 Registration with natives (indexer → native)
- 3.4 Indexer pool: fetch + consensus
- 3.5 Self-delegation and offload loop
- 3.6 Native mesh resilience
- 3.7 Shared DHT
- 3.8 PubSub gossip (indexer registry)
- 3.9 Application streams (node ↔ node)
4. [Summary table](#4-summary-table)
5. [Global risks and limitations](#5-global-risks-and-limitations)
6. [Improvement leads](#6-improvement-leads)
---
## 1. Overview
`oc-discovery` is a P2P discovery service for the OpenCloud network. It is built on
**libp2p** (TCP transport + private-network PSK) and a **Kademlia DHT** (prefix `oc`)
to index peers. The architecture is intentionally hierarchical: stable _natives_
act as authoritative hubs that _indexers_ register with, and ordinary _nodes_
discover indexers through those natives.
```
┌──────────────┐      heartbeat       ┌──────────────────┐
│     Node     │ ───────────────────► │     Indexer      │
│   (libp2p)   │ ◄─────────────────── │   (DHT server)   │
└──────────────┘  application stream  └────────┬─────────┘
                                               │ subscribe / heartbeat
                                      ┌────────▼─────────┐
                                      │  Native Indexer  │ ◄──► other natives
                                      │ (authoritative   │        (mesh)
                                      │  hub)            │
                                      └──────────────────┘
```
All participants share a **pre-shared key (PSK)** that isolates the network
from unauthorized external libp2p connections.
---
## 2. Role hierarchy
| Role | Binary | Responsibility |
|---|---|---|
| **Node** | `node_mode=node` | Gets indexed, publishes/queries DHT records |
| **Indexer** | `node_mode=indexer` | Receives heartbeats, writes to the DHT, registers with natives |
| **Native Indexer** | `node_mode=native` | Hub: keeps the registry of live indexers, runs consensus, serves as fallback |
A single process may combine the node+indexer or indexer+native roles.
---
## 3. Core mechanisms
### 3.1 Long-lived heartbeat (node → indexer)
**How it works**
A **persistent** libp2p stream (`/opencloud/heartbeat/1.0`) is opened from the node
to each indexer in its pool (`StaticIndexers`). Every 20 seconds the node sends a
JSON `Heartbeat` on this stream. The indexer responds by recording the peer in
`StreamRecords[ProtocolHeartbeat]` with a 2-minute expiry.
If `sendHeartbeat` fails (stream reset, EOF, timeout), the peer is removed from
`StaticIndexers` and `replenishIndexersFromNative` is triggered.
**Advantages**
- Fast disconnection detection (error on the next encode).
- A single stream per peer reduces pressure on TCP connections.
- The nudge channel (`indexerHeartbeatNudge`) allows an immediate reconnect without
waiting for the 20 s ticker.
**Limitations / risks**
- ⚠️ A single persistent stream: if the TCP layer stays open but "frozen" (middlebox,
silent NAT), the error may not surface for several minutes.
- ⚠️ `StaticIndexers` is a shared global map: if two goroutines call
`replenishIndexersFromNative` simultaneously (multiple-loss case), concurrent
writes can occur outside the protected critical sections.
---
### 3.2 Trust scoring
**How it works**
Before recording a heartbeat in `StreamRecords`, the indexer checks a **minimum
score** computed by `CheckHeartbeat`:
```
Score = (0.4 × uptime_ratio + 0.4 × bpms + 0.2 × diversity) × 100
```
- `uptime_ratio`: how long the peer has been present / time since the indexer started.
- `bpms`: throughput measured over a dedicated stream (`/opencloud/probe/1.0`), normalized by 50 Mbps.
- `diversity`: ratio of distinct /24 IP prefixes among the indexers the peer declares.
Two thresholds apply depending on the peer's state:
- **First heartbeat** (peer absent from `StreamRecords`, uptime = 0): threshold **40**.
- **Subsequent heartbeats** (accumulated uptime): threshold **75**.
**Advantages**
- Discourages ephemeral or slow peers from cluttering the registry.
- Network diversity reduces the risk of concentration on a single subnet.
- The dedicated probe stream avoids polluting the JSON heartbeat stream with binary data.
- The double threshold lets new peers be admitted on their very first connection.
**Limitations / risks**
- ✅ **Startup logical deadlock fixed**: with uptime = 0 the maximum score was 60,
below the threshold of 75. New peers were silently rejected forever.
→ Threshold lowered to **40** for the first heartbeat (`isFirstHeartbeat`), 75 afterwards.
- ⚠️ The thresholds (40 / 75) are still hard-coded, with no configuration option.
- ⚠️ The bandwidth measurement sends between 512 and 2048 bytes per heartbeat: at a 20 s
interval with up to 500 nodes, that is ~50 KB/s of continuous probe traffic.
- ⚠️ `diversity` is computed from the addresses the node *declares* it has; this field is
self-reported and unverified, hence easy to forge.
---
### 3.3 Registration with natives (indexer → native)
**How it works**
Each (non-native) indexer periodically (every 60 s) sends an
`IndexerRegistration` JSON over a one-shot stream (`/opencloud/native/subscribe/1.0`)
to each configured native. The native:
1. Stores the entry in its local cache with a TTL of **90 s** (`IndexerTTL`).
2. Gossips the `PeerID` on the PubSub topic `oc-indexer-registry` to the other natives.
3. Persists the entry to the DHT asynchronously (retrying until success).
**Advantages**
- Disposable stream: no long-lived resource on the native side for registrations.
- The local cache is immediately available to `handleNativeGetIndexers` without
waiting on the DHT.
- PubSub dissemination lets other natives learn about the indexer without it
having to register with them directly.
**Limitations / risks**
- ✅ **Overly tight TTL fixed**: the 66 s TTL was only 10 % above the 60 s renewal
interval; a slight network delay could expire a healthy indexer between two renewals.
`IndexerTTL` raised to **90 s** (+50 %).
- ⚠️ If the DHT `PutValue` fails permanently (partitioned network), the native holds
the entry but natives that missed the PubSub message never learn about it:
a silent inconsistency.
- ⚠️ `RegisterWithNative` skips `127.0.0.1` addresses but does not handle
private (RFC1918) addresses, which would be unroutable from other hosts.
---
### 3.4 Indexer pool: fetch + consensus
**How it works**
During `ConnectToNatives` (startup or replenish), the node/indexer:
1. **Fetch**: sends a `GetIndexersRequest` to the first responding native
(`/opencloud/native/indexers/1.0`) and receives a list of candidates.
2. **Consensus (round 1)**: queries **all** configured natives in parallel
(`/opencloud/native/consensus/1.0`, 3 s timeout, 4 s collection window).
An indexer is confirmed if **strictly more than 50 %** of the responding natives
consider it alive.
3. **Consensus (round 2)**: if the pool is still insufficient, the natives' suggestions
(indexers they know about that were not among the initial candidates)
go through a second round.
**Advantages**
- The absolute-majority rule prevents a compromised or out-of-sync native from injecting
phantom indexers.
- The double round fills out the pool with alternatives known to the natives
without sacrificing verification.
- If the fetch returns a **fallback** (a native acting as indexer), consensus is skipped:
consistent, since there is only one source.
**Limitations / risks**
- ⚠️ With **a single native** configured (very common in dev/test), consensus is trivial
(100 % of a single vote); the majority rule protects nothing in that case.
- ⚠️ `fetchIndexersFromNative` stops at the **first responding native** (sequentially):
if that native has a stale or partial cache, the node gets a sub-optimal pool without
consulting the others.
- ⚠️ The global collection timeout (4 s) is fixed: on a slow or geographically
distributed network, valid natives may be dropped for failing to answer in time.
- ⚠️ `replaceStaticIndexers` **adds** without ever removing expired indexers:
the pool can accumulate dead entries that only the heartbeat purges later.
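The majority rule of round 1 is simple enough to state as code. A hedged sketch, not the real `clientSideConsensus`: the answer shape (one `map[indexer]alive` per responding native) and function names are assumptions. It also demonstrates the single-native degenerate case called out above.

```go
package main

import "fmt"

// confirmedByMajority applies the documented rule: an indexer is kept only if
// strictly more than half of the responding natives consider it alive.
func confirmedByMajority(votes, responders int) bool {
	return votes*2 > responders
}

// filterByConsensus tallies per-indexer votes across each native's answer and
// keeps the confirmed candidates. The data shape is illustrative.
func filterByConsensus(answers []map[string]bool, candidates []string) []string {
	kept := []string{}
	for _, c := range candidates {
		votes := 0
		for _, a := range answers {
			if a[c] {
				votes++
			}
		}
		if confirmedByMajority(votes, len(answers)) {
			kept = append(kept, c)
		}
	}
	return kept
}

func main() {
	// Three natives responded: ix1 has 2 votes (kept), ix2 only 1 (dropped).
	answers := []map[string]bool{
		{"ix1": true, "ix2": true},
		{"ix1": true},
		{},
	}
	fmt.Println(filterByConsensus(answers, []string{"ix1", "ix2"})) // [ix1]
	// Degenerate case: with a single native, 1 vote of 1 is always a "majority".
	fmt.Println(confirmedByMajority(1, 1)) // true
}
```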
---
### 3.5 Self-delegation and offload loop
**How it works**
If a native has no live indexer when handling `handleNativeGetIndexers`,
it appoints itself as a temporary indexer (`selfDelegate`): it returns its own
multiaddr and adds the requester to `responsiblePeers`, up to a limit of
`maxFallbackPeers` (50). Beyond that, delegation is refused and an empty response is
returned so the node tries another native.
Every 30 s, `runOffloadLoop` checks whether real indexers are available again.
If so, for each responsible peer:
- **Stream present**: `Reset()` the heartbeat stream; the peer receives an error,
triggers `replenishIndexersFromNative`, and migrates to real indexers.
- **Stream absent** (peer never admitted by the scoring): `ClosePeer()` on the network
connection; the peer reconnects and asks the native for indexers again.
**Advantages**
- Service continuity: a node is never stuck during a temporary lack of indexers.
- Migration is automatic and transparent for the node.
- `Reset()` (vs `Close()`) interrupts both directions of the stream, guaranteeing that
the peer actually receives an error.
- The limit of 50 keeps the native from being overwhelmed during prolonged shortages.
**Limitations / risks**
- ✅ **Offload without stream fixed**: if the heartbeat had never been recorded in
`StreamRecords` (score below threshold, a case amplified by the scoring bug), the offload
failed silently and the peer stayed in `responsiblePeers` indefinitely.
→ `else` branch: `ClosePeer()` + removal from `responsiblePeers`.
- ✅ **Unbounded `responsiblePeers` fixed**: the native used to accept an arbitrary number
of peers under self-delegation, itself becoming an overloaded indexer.
→ `selfDelegate` checks `len(responsiblePeers) >= maxFallbackPeers` and returns
`false` when saturated.
- ⚠️ Delegation remains uncoordinated between natives: an overloaded native refuses
(returns empty) but does not explicitly redirect to a neighboring native with spare capacity.
---
### 3.6 Native mesh resilience
**How it works**
When the heartbeat to a native fails, `replenishNativesFromPeers` tries to find
a replacement, in this order:
1. `fetchNativeFromNatives`: asks each live native (`/opencloud/native/peers/1.0`)
for the address of a native it does not yet know.
2. `fetchNativeFromIndexers`: asks each known indexer
(`/opencloud/indexer/natives/1.0`) for its configured natives.
3. If no replacement is found and `remaining ≤ 1`: `retryLostNative` starts a 30 s ticker
that keeps retrying a direct connection to the lost native.
`EnsureNativePeers` maintains native-to-native heartbeats via `ProtocolHeartbeat`,
with a **single goroutine** covering the whole `StaticNatives` map.
**Advantages**
- Multi-hop gossip via indexers makes it possible to rediscover a native even when no
direct peer knows it.
- `retryLostNative` handles the single-native case (minimal deployment).
- Automatic reconnection (`retryLostNative`) also triggers `replenishIndexersIfNeeded`
to restore the indexer pool.
**Limitations / risks**
- ✅ **Multiple heartbeat goroutines fixed**: `EnsureNativePeers` used to start one
`SendHeartbeat` goroutine per native address (N natives → N goroutines → N² heartbeats
per tick). Now guarded by `nativeMeshHeartbeatOnce`: a single goroutine iterates over
`StaticNatives`.
- ⚠️ `retryLostNative` runs forever with no stop condition tied to the process lifetime
(no `context.Context`). On a graceful shutdown of the binary, this goroutine
can block.
- ⚠️ Transitive discovery (native → indexer → native) is one-way: an indexer
only knows the natives in its own config, not natives that joined
after it started.
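The goroutine deduplication fix is the standard `sync.Once` pattern. A sketch, not the real code: `nativeMesh`, `ensureHeartbeatLoop`, and the counter are illustrative; the real guard is the `nativeMeshHeartbeatOnce` variable, and the closure launches the heartbeat goroutine over `StaticNatives`.

```go
package main

import (
	"fmt"
	"sync"
)

// nativeMesh sketches the fix: however many native addresses get added,
// sync.Once guarantees a single heartbeat loop covering the whole map,
// instead of one goroutine per address (which produced N² heartbeats per tick).
type nativeMesh struct {
	once    sync.Once
	started int // counts loop launches, for demonstration only
}

func (m *nativeMesh) ensureHeartbeatLoop() {
	m.once.Do(func() {
		m.started++ // real code: go sendHeartbeatsOverStaticNatives()
	})
}

func main() {
	m := &nativeMesh{}
	// e.g. EnsureNativePeers invoked once per configured native address:
	for i := 0; i < 5; i++ {
		m.ensureHeartbeatLoop()
	}
	fmt.Println(m.started) // 1
}
```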
---
### 3.7 Shared DHT
**How it works**
All indexers and natives participate in a Kademlia DHT (prefix `oc`, mode
`ModeServer`). Two namespaces are used:
- `/node/<DID>` → signed `PeerRecord` JSON (published by indexers on each node heartbeat).
- `/indexer/<PeerID>` → `liveIndexerEntry` JSON with a TTL (published by the natives).
Each native runs `refreshIndexersFromDHT` (every 30 s), which re-hydrates its local
cache from the DHT for known PeerIDs (`knownPeerIDs`) whose local entry has expired.
**Advantages**
- Decentralized persistence: a record survives the loss of a single native or indexer.
- Entry validation: `PeerRecordValidator` and `IndexerRecordValidator` reject
malformed or expired records at `PutValue` time.
- The secondary index `/name/<name>` enables resolution by human-readable name.
**Limitations / risks**
- ⚠️ The Kademlia DHT works over the private (PSK) network, but bootstrap nodes are
not configured explicitly: discovery depends on already-established connections,
which can slow convergence at startup.
- ⚠️ `PutValue` is retried in an infinite loop on `"failed to find any peer in table"`:
a prolonged network outage accumulates blocked goroutines.
- ⚠️ If the PSK is compromised, an attacker can write to the DHT; indexer
`liveIndexerEntry` records are not signed, unlike `PeerRecord`s.
- ⚠️ `refreshIndexersFromDHT` prunes `knownPeerIDs` when the DHT has no fresh entry,
but does not prune `liveIndexers`: an expired entry stays in memory until GC
or the next refresh.
---
### 3.8 PubSub gossip (indexer registry)
**How it works**
When an indexer registers with a native, the native publishes the address on the
GossipSub topic `oc-indexer-registry`. The other subscribed natives update their
`knownPeerIDs` without waiting for the DHT.
The `TopicValidator` rejects any message whose content is not a valid, parseable
multiaddr before it reaches the processing loop.
**Advantages**
- Near-instant dissemination between connected natives.
- A useful complement to the DHT for recent registrations that have not yet
been persisted.
- The syntactic filter blocks malformed messages before they propagate through the mesh.
**Limitations / risks**
- ✅ **No-op `TopicValidator` fixed**: the validator used to accept every message
unconditionally (`return true`), letting a compromised native gossip
arbitrary data.
The validator now checks that the message is a parseable multiaddr
(`pp.AddrInfoFromString`).
- ⚠️ Validation remains purely syntactic: the message's origin (is the sender
a legitimate native?) is not verified.
- ⚠️ If a native restarts, it loses its subscription and misses messages published
while it was away. Re-hydration from the DHT compensates, but with a delay
of up to 30 s.
- ⚠️ The gossip carries only the indexer's `Addr`, not its TTL or a signature.
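To illustrate the syntactic filter without pulling in libp2p, here is a stdlib stand-in. The real validator calls `pp.AddrInfoFromString`, which is far more general; this hypothetical `validateGossipedAddr` only accepts the `/ip4/<host>/tcp/<port>/p2p/<peer-id>` shape, enough to show why the old unconditional `return true` was dangerous.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// validateGossipedAddr is a stdlib stand-in for the real validator (which
// uses pp.AddrInfoFromString). It accepts only messages shaped like
// /ip4/<host>/tcp/<port>/p2p/<peer-id>; anything else is dropped before
// reaching the processing loop.
func validateGossipedAddr(msg string) bool {
	parts := strings.Split(msg, "/")
	// "/ip4/h/tcp/p/p2p/id" splits into ["", "ip4", h, "tcp", p, "p2p", id].
	if len(parts) != 7 || parts[0] != "" || parts[1] != "ip4" || parts[3] != "tcp" || parts[5] != "p2p" {
		return false
	}
	if _, err := strconv.Atoi(parts[4]); err != nil {
		return false // port must be numeric
	}
	return parts[2] != "" && parts[6] != ""
}

func main() {
	fmt.Println(validateGossipedAddr("/ip4/10.0.0.1/tcp/4001/p2p/QmDemoPeer")) // true
	fmt.Println(validateGossipedAddr("arbitrary gossip payload"))              // false
}
```

As the limitations note, this remains shape-only validation: a syntactically valid multiaddr from an illegitimate sender still passes.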
---
### 3.9 Application streams (node ↔ node)
**How it works**
`StreamService` manages streams between partner nodes (`PARTNER` relations stored
in the database) via dedicated protocols (`/opencloud/resource/*`). A partner heartbeat
(`ProtocolHeartbeatPartner`) keeps connections alive. Events are routed
through `handleEvent` and, in parallel, the NATS system.
**Advantages**
- Per-protocol TTLs (`PersistantStream`, `WaitResponse`) adapt the behavior to the
type of exchange (long-lived for the planner, short for CRUD calls).
- The GC (`gc()` every 8 s, started exactly once in `InitStream`) quickly frees
expired streams.
**Limitations / risks**
- ✅ **GC goroutine leak fixed**: `HandlePartnerHeartbeat` called
`go s.StartGC(30s)` on every received heartbeat (~every 20 s), spawning a new
infinite ticker goroutine on each call.
→ Call removed; the GC started by `InitStream` is sufficient.
- ✅ **Infinite loop on EOF fixed**: after a decode error, `readLoop` did
`s.Stream.Close(); continue`, retrying forever on a closed stream.
→ Replaced with `return`; the defers (`Close`, `delete`) clean up correctly.
- ⚠️ Recovering partners from `conf.PeerIDS` is marked `TO REMOVE`:
provisional code is present in production.
---
## 4. Summary table
| Mechanism | Protocol | Main advantage | Risk status |
|---|---|---|---|
| Node→indexer heartbeat | `/opencloud/heartbeat/1.0` | Fast loss detection | ⚠️ Frozen TCP stream undetected |
| Trust scoring | (inline in heartbeat) | Filters unstable peers | ✅ Deadlock fixed (40/75 thresholds) |
| Native registration | `/opencloud/native/subscribe/1.0` | Ample TTL, immediate cache | ✅ TTL raised to 90 s |
| Indexer pool fetch | `/opencloud/native/indexers/1.0` | Takes the first responding native | ⚠️ Native with a stale cache possible |
| Consensus | `/opencloud/native/consensus/1.0` | Absolute majority | ⚠️ Trivial with a single native |
| Self-delegation + offload | (in-memory) | Availability without indexers | ✅ 50-peer limit + ClosePeer |
| Native mesh | `/opencloud/native/peers/1.0` | Multi-hop gossip | ✅ Goroutines deduplicated |
| DHT | `/oc/kad/1.0.0` | Decentralized persistence | ⚠️ Infinite retry, no bootstrap |
| PubSub registry | `oc-indexer-registry` | Fast dissemination | ✅ Multiaddr validation |
| Application streams | `/opencloud/resource/*` | Per-protocol TTL | ✅ GC leak + EOF fixed |
---
## 5. Global risks and limitations
### Security
- ⚠️ **Unverified self-reported addresses**: the `IndexersBinded` field in the heartbeat
is self-declared by the node and feeds the diversity computation. A malicious peer can
inflate its score by declaring fake addresses.
- ⚠️ **PSK as the only entry barrier**: if the PSK is compromised (it is static and
file-based), all network isolation is gone. There is no key rotation and no
additional per-peer authentication.
- ⚠️ **DHT without ACLs on indexer entries**: `PeerRecord` signatures are verified
on read, but `liveIndexerEntry` records are not signed. PubSub validation
blocks invalid multiaddrs but not spoofed addresses of legitimate indexers.
### Availability
- ⚠️ **Native single point of failure**: with a single native, losing it stops
all indexer assignment. `retryLostNative` mitigates this, but without indexers,
nodes cannot publish.
- ⚠️ **DHT bootstrap**: without explicit bootstrap nodes, the DHT takes time to converge
when initial connections are scarce.
### Consistency
- ⚠️ **`replaceStaticIndexers` never evicts**: old dead indexers stay in
`StaticIndexers` until their heartbeat fails. A node can carry an overestimated pool
containing unreachable entries.
- ⚠️ **Global `TimeWatcher`**: set once when `ConnectToIndexers` starts.
If the indexer has been running for a long time, new nodes will have a durably low
`uptime_ratio`. Lowering the first-heartbeat threshold to 40 softens the
initial impact, but subsequent heartbeats still have to accumulate enough uptime.
---
## 6. Improvement leads
Leads already implemented are marked ✅. Open leads remain to be addressed.
### ✅ Scoring: double threshold for new peers
~~Replace the binary threshold~~ **Implemented**: threshold of 40 for the first heartbeat
(peer absent from `StreamRecords`), 75 afterwards. A peer can now be admitted
on its first connection without being blocked by a zero uptime.
_File: `common/common_stream.go`, `CheckHeartbeat`_
### ✅ Indexer TTL aligned with the renewal interval
~~66 s TTL too close to 60 s~~ **Implemented**: `IndexerTTL` raised to **90 s**.
_File: `indexer/native.go`_
### ✅ Self-delegation limit
~~Unbounded `responsiblePeers`~~ **Implemented**: `selfDelegate` returns `false` when
`len(responsiblePeers) >= maxFallbackPeers` (50). The call site returns an empty
response and logs a warning.
_File: `indexer/native.go`_
### ✅ PubSub validation of gossiped addresses
~~`TopicValidator` accepts everything~~ **Implemented**: the validator checks that the
message is a parseable multiaddr via `pp.AddrInfoFromString`.
_File: `indexer/native.go`, `subscribeIndexerRegistry`_
### ✅ Deduplicated heartbeat goroutines in `EnsureNativePeers`
~~One goroutine per native address~~ **Implemented**: `nativeMeshHeartbeatOnce`
guarantees that a single `SendHeartbeat` goroutine covers the whole `StaticNatives` map.
_File: `common/native_stream.go`_
### ✅ GC goroutine leak in `HandlePartnerHeartbeat`
~~`go s.StartGC(30s)` on every heartbeat~~ **Implemented**: call removed; the GC
from `InitStream` is sufficient.
_File: `stream/service.go`_
### ✅ Infinite loop on EOF in `readLoop`
~~`continue` after `Stream.Close()`~~ **Implemented**: replaced with `return` to
let the defers clean up properly.
_File: `stream/service.go`_
---
### ⚠️ Pool fetch: query all natives in parallel
`fetchIndexersFromNative` stops at the first responding native. Querying all natives
in parallel and merging the lists (similarly to `clientSideConsensus`) would prevent
a native with a stale cache from supplying a sub-optimal pool.
### ⚠️ Consensus with a configurable quorum
The confirmation threshold (`count*2 > total`) is hard-coded. Making it configurable
(e.g. `consensus_quorum: 0.67`) would allow hardening the rule on deployments
with 3+ natives without code changes.
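A possible shape for this proposal, purely hypothetical since the `consensus_quorum` key does not exist yet. The quorum is taken as the fraction of responding natives that must vouch for the indexer; 0.5 with a strict comparison reproduces today's majority rule.

```go
package main

import "fmt"

// confirmedWithQuorum sketches the proposed configurable rule. quorum is the
// fraction of responding natives that must consider the indexer alive;
// quorum = 0.5 with a strict ">" reproduces the current majority rule.
func confirmedWithQuorum(votes, responders int, quorum float64) bool {
	if responders == 0 {
		return false
	}
	return float64(votes)/float64(responders) > quorum
}

func main() {
	// Today's rule (0.5) admits 2 of 3; a hardened 0.67 quorum does not.
	fmt.Println(confirmedWithQuorum(2, 3, 0.5))  // true
	fmt.Println(confirmedWithQuorum(2, 3, 0.67)) // false
	fmt.Println(confirmedWithQuorum(3, 3, 0.67)) // true
}
```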
### ⚠️ Explicit deregistration
Add a `/opencloud/native/unsubscribe/1.0` protocol: when an indexer shuts down
cleanly, it notifies the natives so its TTL is invalidated immediately instead of
waiting up to 90 s.
### ⚠️ Explicit DHT bootstrap
Configure the natives as DHT bootstrap nodes via `dht.BootstrapPeers` to speed up
Kademlia convergence at startup.
### ⚠️ Context propagated into long-lived goroutines
`retryLostNative`, `refreshIndexersFromDHT` and `runOffloadLoop` receive no
`context.Context`. Passing one down from `InitNative` would allow a clean stop on
process shutdown.
### ⚠️ Explicit redirection when self-delegation is refused
When a native refuses self-delegation (pool saturated), returning an empty response
forces the node to retry blindly. A list of alternative natives in the
response (`AlternativeNatives []string`) would let the node go straight to a
less-loaded native.


@@ -21,11 +21,6 @@ RUN go mod download
FROM golang:alpine AS builder
ARG CONF_NUM
# Fail fast if CONF_NUM missing
RUN test -n "$CONF_NUM"
RUN apk add --no-cache git
WORKDIR /oc-discovery
# Reuse Go cache
@@ -55,13 +50,13 @@ WORKDIR /app
RUN mkdir ./pem
-COPY --from=builder /app/extracted/pem/private${CONF_NUM}.pem ./pem/private.pem
+COPY --from=builder /app/extracted/pem/private${CONF_NUM:-1}.pem ./pem/private.pem
COPY --from=builder /app/extracted/psk ./psk
-COPY --from=builder /app/extracted/pem/public${CONF_NUM}.pem ./pem/public.pem
+COPY --from=builder /app/extracted/pem/public${CONF_NUM:-1}.pem ./pem/public.pem
COPY --from=builder /app/extracted/oc-discovery /usr/bin/oc-discovery
-COPY --from=builder /app/extracted/docker_discovery${CONF_NUM}.json /etc/oc/discovery.json
+COPY --from=builder /app/extracted/docker_discovery${CONF_NUM:-1}.json /etc/oc/discovery.json
-EXPOSE 400${CONF_NUM}
+EXPOSE 400${CONF_NUM:-1}
ENTRYPOINT ["oc-discovery"]


@@ -10,15 +10,17 @@ clean:
	rm -rf oc-discovery
docker:
-	DOCKER_BUILDKIT=1 docker build -t oc/oc-discovery:0.0.1 -f Dockerfile .
-	docker tag oc/oc-discovery:0.0.1 oc/oc-discovery:latest
+	DOCKER_BUILDKIT=1 docker build -t oc-discovery -f Dockerfile .
+	docker tag oc-discovery opencloudregistry/oc-discovery:latest
publish-kind:
-	kind load docker-image oc/oc-discovery:0.0.1 --name opencloud
+	kind load docker-image opencloudregistry/oc-discovery:latest --name opencloud
publish-registry:
-	@echo "TODO"
+	docker push opencloudregistry/oc-discovery:latest
-all: docker publish-kind publish-registry
+all: docker publish-kind
+ci: docker publish-registry
.PHONY: build run clean docker publish-kind publish-registry


@@ -14,3 +14,34 @@ If the default Swagger page is displayed instead of your API, change url in swagger
url: "swagger.json"
sequenceDiagram
autonumber
participant Dev as Developer / Owner
participant IPFS as IPFS network
participant CID as CID (file hash)
participant Argo as Argo orchestrator
participant CU as Compute Unit
participant MinIO as MinIO storage
%% 1. Add the file to IPFS
Dev->>IPFS: Encrypts and adds file (algo/dataset)
IPFS-->>CID: Generates unique CID (file hash)
Dev->>Dev: Stores CID for future reference
%% 2. Orchestration by Argo
Argo->>CID: Requests CID for the job
CID-->>Argo: Provides the file (verified via hash)
%% 3. Execution on the Compute Unit
Argo->>CU: Deploys job with the retrieved file
CU->>CU: Verifies hash (CID) for integrity
CU->>CU: Runs the algo on the dataset
%% 4. Storing the results
CU->>MinIO: Stores output (results) or logs
CU->>IPFS: Optional: adds output to IPFS (new CID)
%% 5. Verification and traceability
Dev->>IPFS: Verifies the output CID if needed
CU->>Dev: Provides result and hash log


@@ -3,17 +3,21 @@ package conf
import "sync"
type Config struct {
Name string
Hostname string
PSKPath string
PublicKeyPath string
PrivateKeyPath string
NodeEndpointPort int64
IndexerAddresses string
Name string
Hostname string
PSKPath string
PublicKeyPath string
PrivateKeyPath string
NodeEndpointPort int64
IndexerAddresses string
NativeIndexerAddresses string // multiaddrs of native indexers, comma-separated; bypasses IndexerAddresses when set
PeerIDS string // TO REMOVE
NodeMode string
MinIndexer int
MaxIndexer int
}
var instance *Config


@@ -28,7 +28,7 @@ type Event struct {
}
func NewEvent(name string, from string, dt *tools.DataType, user string, payload []byte) *Event {
priv, err := LoadKeyFromFilePrivate() // your node private key
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return nil
}
@@ -73,7 +73,11 @@ func (event *Event) Verify(p *peer.Peer) error {
if p.Relation == peer.BLACKLIST { // if peer is blacklisted... quit...
return errors.New("peer is blacklisted")
}
pubKey, err := PubKeyFromString(p.PublicKey) // extract pubkey from pubkey str
return event.VerifySignature(p.PublicKey)
}
func (event *Event) VerifySignature(pk string) error {
pubKey, err := PubKeyFromString(pk) // extract pubkey from pubkey str
if err != nil {
return errors.New("pubkey is malformed")
}
@@ -88,11 +92,11 @@ func (event *Event) Verify(p *peer.Peer) error {
}
type TopicNodeActivityPub struct {
NodeActivity peer.PeerState
Disposer pp.AddrInfo `json:"disposer_address"`
Name string `json:"name"`
DID string `json:"did"` // real PEER ID
PeerID string `json:"peer_id"`
NodeActivity int `json:"node_activity"`
Disposer string `json:"disposer_address"`
Name string `json:"name"`
DID string `json:"did"` // real PEER ID
PeerID string `json:"peer_id"`
}
type LongLivedPubSubService struct {
@@ -159,20 +163,16 @@ func (s *LongLivedPubSubService) SubscribeToSearch(ps *pubsub.PubSub, f *func(co
func SubscribeEvents[T interface{}](s *LongLivedPubSubService,
ctx context.Context, proto string, timeout int, f func(context.Context, T, string),
) error {
s.PubsubMu.Lock()
if s.LongLivedPubSubs[proto] == nil {
s.PubsubMu.Unlock()
return errors.New("no protocol subscribed in pubsub")
}
topic := s.LongLivedPubSubs[proto]
s.PubsubMu.Unlock()
sub, err := topic.Subscribe() // then subscribe to it
if err != nil {
return err
}
// launch loop waiting for results.
go waitResults[T](s, ctx, sub, proto, timeout, f)
go waitResults(s, ctx, sub, proto, timeout, f)
return nil
}
@@ -207,10 +207,5 @@ func waitResults[T interface{}](s *LongLivedPubSubService, ctx context.Context,
continue
}
f(ctx, evt, fmt.Sprintf("%v", proto))
/*if p, err := ps.Node.GetPeerRecord(ctx, evt.From); err == nil && len(p) > 0 {
if err := ps.processEvent(ctx, p[0], &evt, topicName); err != nil {
logger.Err(err)
}
}*/
}
}


@@ -2,16 +2,20 @@ package common
import (
"context"
cr "crypto/rand"
"encoding/json"
"errors"
"fmt"
"io"
"math/rand"
"net"
"oc-discovery/conf"
"slices"
"strings"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
peer "cloud.o-forge.io/core/oc-lib/models/peer"
"github.com/libp2p/go-libp2p/core/host"
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
@@ -20,18 +24,22 @@ import (
type LongLivedStreamRecordedService[T interface{}] struct {
*LongLivedPubSubService
StreamRecords map[protocol.ID]map[pp.ID]*StreamRecord[T]
StreamMU sync.RWMutex
maxNodesConn int
isBidirectionnal bool
StreamRecords map[protocol.ID]map[pp.ID]*StreamRecord[T]
StreamMU sync.RWMutex
maxNodesConn int
// AfterHeartbeat is an optional hook called after each successful heartbeat update.
// The indexer sets it to republish the embedded signed record to the DHT.
AfterHeartbeat func(pid pp.ID)
// AfterDelete is called after gc() evicts an expired peer, outside the lock.
// name and did may be empty if the HeartbeatStream had no metadata.
AfterDelete func(pid pp.ID, name string, did string)
}
func NewStreamRecordedService[T interface{}](h host.Host, maxNodesConn int, isBidirectionnal bool) *LongLivedStreamRecordedService[T] {
func NewStreamRecordedService[T interface{}](h host.Host, maxNodesConn int) *LongLivedStreamRecordedService[T] {
service := &LongLivedStreamRecordedService[T]{
LongLivedPubSubService: NewLongLivedPubSubService(h),
StreamRecords: map[protocol.ID]map[pp.ID]*StreamRecord[T]{},
maxNodesConn: maxNodesConn,
isBidirectionnal: isBidirectionnal,
}
go service.StartGC(30 * time.Second)
// Garbage collection is needed on every map of long-lived streams... it may call for a top-level redesign
@@ -51,37 +59,41 @@ func (ix *LongLivedStreamRecordedService[T]) StartGC(interval time.Duration) {
func (ix *LongLivedStreamRecordedService[T]) gc() {
ix.StreamMU.Lock()
defer ix.StreamMU.Unlock()
now := time.Now().UTC()
if ix.StreamRecords[ProtocolHeartbeat] == nil {
ix.StreamRecords[ProtocolHeartbeat] = map[pp.ID]*StreamRecord[T]{}
ix.StreamMU.Unlock()
return
}
streams := ix.StreamRecords[ProtocolHeartbeat]
fmt.Println(StaticNatives, StaticIndexers, streams)
type gcEntry struct {
pid pp.ID
name string
did string
}
var evicted []gcEntry
for pid, rec := range streams {
if now.After(rec.HeartbeatStream.Expiry) || now.Sub(rec.HeartbeatStream.UptimeTracker.LastSeen) > 2*rec.HeartbeatStream.Expiry.Sub(now) {
name, did := "", ""
if rec.HeartbeatStream != nil {
name = rec.HeartbeatStream.Name
did = rec.HeartbeatStream.DID
}
evicted = append(evicted, gcEntry{pid, name, did})
for _, sstreams := range ix.StreamRecords {
if sstreams[pid] != nil {
delete(sstreams, pid)
}
}
ix.PubsubMu.Lock()
if ix.LongLivedPubSubs[TopicPubSubNodeActivity] != nil {
ad, err := pp.AddrInfoFromString("/ip4/" + conf.GetConfig().Hostname + "/tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + "/p2p/" + ix.Host.ID().String())
if err == nil {
if b, err := json.Marshal(TopicNodeActivityPub{
Disposer: *ad,
Name: rec.HeartbeatStream.Name,
DID: rec.HeartbeatStream.DID,
PeerID: pid.String(),
NodeActivity: peer.OFFLINE,
}); err == nil {
ix.LongLivedPubSubs[TopicPubSubNodeActivity].Publish(context.Background(), b)
}
}
}
ix.PubsubMu.Unlock()
}
}
ix.StreamMU.Unlock()
if ix.AfterDelete != nil {
for _, e := range evicted {
ix.AfterDelete(e.pid, e.name, e.did)
}
}
}
@@ -114,67 +126,237 @@ func (ix *LongLivedStreamRecordedService[T]) snapshot() []*StreamRecord[T] {
return out
}
func (ix *LongLivedStreamRecordedService[T]) HandleHeartbeat(s network.Stream) {
logger := oclib.GetLogger()
defer s.Close()
dec := json.NewDecoder(s)
for {
ix.StreamMU.Lock()
if ix.StreamRecords[ProtocolHeartbeat] == nil {
ix.StreamRecords[ProtocolHeartbeat] = map[pp.ID]*StreamRecord[T]{}
}
streams := ix.StreamRecords[ProtocolHeartbeat]
streamsAnonym := map[pp.ID]HeartBeatStreamed{}
for k, v := range streams {
streamsAnonym[k] = v
}
ix.StreamMU.Unlock()
pid, hb, err := CheckHeartbeat(ix.Host, s, dec, streamsAnonym, &ix.StreamMU, ix.maxNodesConn)
if err != nil {
// Stream-level errors (EOF, reset, closed) mean the connection is gone
// — exit so the goroutine doesn't spin forever on a dead stream.
// Metric/policy errors (score too low, too many connections) are transient
// — those are also stream-terminal since the stream carries one session.
if errors.Is(err, io.EOF) || errors.Is(err, io.ErrUnexpectedEOF) ||
strings.Contains(err.Error(), "reset") ||
strings.Contains(err.Error(), "closed") ||
strings.Contains(err.Error(), "too many connections") {
logger.Info().Err(err).Msg("heartbeat stream terminated, closing handler")
return
}
logger.Warn().Err(err).Msg("heartbeat check failed, retrying on same stream")
continue
}
ix.StreamMU.Lock()
// if record already seen update last seen
if rec, ok := streams[*pid]; ok {
rec.DID = hb.DID
rec.Stream = s
// Preserve the existing uptime tracker across the stream refresh so
// uptime keeps accumulating instead of resetting on every heartbeat.
prevTracker := rec.GetUptimeTracker()
rec.HeartbeatStream = hb.Stream
if prevTracker != nil {
prevTracker.LastSeen = time.Now().UTC()
rec.HeartbeatStream.UptimeTracker = prevTracker
} else {
rec.HeartbeatStream.UptimeTracker = &UptimeTracker{
FirstSeen: time.Now().UTC(),
LastSeen:  time.Now().UTC(),
}
}
rec.LastSeen = time.Now().UTC()
logger.Info().Msg("node heartbeat updated: " + pid.String())
} else {
hb.Stream.UptimeTracker = &UptimeTracker{
FirstSeen: time.Now().UTC(),
LastSeen: time.Now().UTC(),
}
streams[*pid] = &StreamRecord[T]{
DID: hb.DID,
HeartbeatStream: hb.Stream,
Stream: s,
LastSeen: time.Now().UTC(),
}
logger.Info().Msg("new node subscribed: " + pid.String())
}
ix.StreamMU.Unlock()
// Let the indexer republish the embedded signed record to the DHT.
if ix.AfterHeartbeat != nil {
ix.AfterHeartbeat(*pid)
}
}
}
func CheckHeartbeat(h host.Host, s network.Stream, dec *json.Decoder, streams map[pp.ID]HeartBeatStreamed, lock *sync.RWMutex, maxNodes int) (*pp.ID, *Heartbeat, error) {
if len(h.Network().Peers()) >= maxNodes {
return nil, nil, fmt.Errorf("too many connections, try another indexer")
}
var hb Heartbeat
if err := dec.Decode(&hb); err != nil {
return nil, nil, err
}
_, bpms, _ := getBandwidthChallengeRate(h, s.Conn().RemotePeer(), MinPayloadChallenge+int(rand.Float64()*(MaxPayloadChallenge-MinPayloadChallenge)))
{
pid, err := pp.Decode(hb.PeerID)
if err != nil {
return nil, nil, err
}
upTime := float64(0)
isFirstHeartbeat := true
lock.Lock()
if rec, ok := streams[pid]; ok && rec.GetUptimeTracker() != nil {
// Guard against a zero observation window right after startup.
if elapsed := time.Since(TimeWatcher).Hours(); elapsed > 0 {
upTime = rec.GetUptimeTracker().Uptime().Hours() / elapsed
}
isFirstHeartbeat = false
}
lock.Unlock()
diversity := getDiversityRate(h, hb.IndexersBinded)
hb.ComputeIndexerScore(upTime, bpms, diversity)
// First heartbeat: uptime is always 0, so with typical bandwidth the score
// lands below the steady-state threshold of 50. Use a lower admission
// threshold so new peers can enter and start accumulating uptime.
// Subsequent heartbeats must meet the full threshold once uptime is tracked.
minScore := float64(50)
if isFirstHeartbeat {
minScore = 40
}
if hb.Score < minScore {
return nil, nil, errors.New("not enough trusting value")
}
hb.Stream = &Stream{
Name: hb.Name,
DID: hb.DID,
Stream: s,
Expiry: time.Now().UTC().Add(2 * time.Minute),
} // this is the long-lived bidirectional heartbeat.
return &pid, &hb, err
}
}
func getDiversityRate(h host.Host, peers []string) float64 {
peers, _ = checkPeers(h, peers)
diverse := []string{}
for _, p := range peers {
ip, err := ExtractIP(p)
if err != nil {
continue
}
div := ip.Mask(net.CIDRMask(24, 32)).String()
if !slices.Contains(diverse, div) {
diverse = append(diverse, div)
}
}
if len(diverse) == 0 || len(peers) == 0 {
return 1
}
return float64(len(diverse)) / float64(len(peers))
}
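The /24 diversity metric above can be exercised in isolation. A minimal sketch, using assumed names of my own (`diversityRate`, bare IPv4 strings instead of multiaddrs, no liveness check):

```go
package main

import (
	"fmt"
	"net"
)

// diversityRate sketches the /24-prefix diversity metric used in indexer
// scoring: the number of distinct /24 subnets divided by the total peer count.
func diversityRate(ips []string) float64 {
	prefixes := map[string]bool{}
	for _, raw := range ips {
		ip := net.ParseIP(raw).To4()
		if ip == nil {
			continue // skip unparsable or non-IPv4 entries
		}
		prefixes[ip.Mask(net.CIDRMask(24, 32)).String()] = true
	}
	if len(prefixes) == 0 || len(ips) == 0 {
		return 1 // no measurable peers: neutral score
	}
	return float64(len(prefixes)) / float64(len(ips))
}

func main() {
	// Three peers in the same /24: low diversity (1/3).
	fmt.Println(diversityRate([]string{"10.0.0.1", "10.0.0.2", "10.0.0.3"}))
	// Three peers in distinct /24s: full diversity (1.0).
	fmt.Println(diversityRate([]string{"10.0.0.1", "10.0.1.1", "10.0.2.1"}))
}
```

Note the float conversion on both operands: an integer division here would collapse every non-uniform pool to 0.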
func checkPeers(h host.Host, peers []string) ([]string, []string) {
concretePeer := []string{}
ips := []string{}
for _, p := range peers {
ad, err := pp.AddrInfoFromString(p)
if err != nil {
continue
}
if PeerIsAlive(h, *ad) {
concretePeer = append(concretePeer, p)
if ip, err := ExtractIP(p); err == nil {
ips = append(ips, ip.Mask(net.CIDRMask(24, 32)).String())
}
}
}
return concretePeer, ips
}
const MaxExpectedMbps = 100.0
const MinPayloadChallenge = 512
const MaxPayloadChallenge = 2048
const BaseRoundTrip = 400 * time.Millisecond
// getBandwidthChallengeRate opens a dedicated ProtocolBandwidthProbe stream to
// remotePeer, sends a random payload, reads the echo, and computes throughput.
// Using a separate stream avoids mixing binary data on the JSON heartbeat stream
// and ensures the echo handler is actually running on the remote side.
func getBandwidthChallengeRate(h host.Host, remotePeer pp.ID, payloadSize int) (bool, float64, error) {
payload := make([]byte, payloadSize)
if _, err := cr.Read(payload); err != nil {
return false, 0, err
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
s, err := h.NewStream(ctx, remotePeer, ProtocolBandwidthProbe)
if err != nil {
return false, 0, err
}
defer s.Reset()
s.SetDeadline(time.Now().Add(10 * time.Second))
start := time.Now()
if _, err = s.Write(payload); err != nil {
return false, 0, err
}
s.CloseWrite()
// Half-close the write side so the handler's io.Copy sees EOF and stops.
// Read the echo.
response := make([]byte, payloadSize)
if _, err = io.ReadFull(s, response); err != nil {
return false, 0, err
}
duration := time.Since(start)
// Budget the base round-trip plus ~1ms of transfer time per payload byte,
// keeping the ceiling well under the 10s stream deadline.
maxRoundTrip := BaseRoundTrip + time.Duration(payloadSize)*time.Millisecond
mbps := float64(payloadSize*8) / duration.Seconds() / 1e6
rate := mbps / MaxExpectedMbps
if duration > maxRoundTrip || mbps < 5.0 {
return false, rate, nil
}
return true, rate, nil
}
type UptimeTracker struct {
FirstSeen time.Time
LastSeen time.Time
}
func (u *UptimeTracker) Uptime() time.Duration {
return time.Since(u.FirstSeen)
}
func (u *UptimeTracker) IsEligible(min time.Duration) bool {
return u.Uptime() >= min
}
type StreamRecord[T interface{}] struct {
DID string
HeartbeatStream *Stream
Stream network.Stream
Record T
LastSeen time.Time // to check expiry
}
func (s *StreamRecord[T]) GetUptimeTracker() *UptimeTracker {
if s.HeartbeatStream == nil {
return nil
}
return s.HeartbeatStream.UptimeTracker
}
type Stream struct {
Name string `json:"name"`
DID string `json:"did"`
Stream network.Stream
Expiry time.Time `json:"expiry"`
UptimeTracker *UptimeTracker
}
func (s *Stream) GetUptimeTracker() *UptimeTracker {
return s.UptimeTracker
}
func NewStream[T interface{}](s network.Stream, did string, record T) *Stream {
@@ -230,47 +412,71 @@ const (
ProtocolGet = "/opencloud/record/get/1.0"
)
var TimeWatcher time.Time
var StaticIndexers map[string]*pp.AddrInfo = map[string]*pp.AddrInfo{}
var StreamMuIndexes sync.RWMutex
var StreamIndexers ProtocolStream = ProtocolStream{}
// indexerHeartbeatNudge allows replenishIndexersFromNative to trigger an immediate
// heartbeat tick after adding new entries to StaticIndexers, without waiting up
// to 20s for the regular ticker. Buffered(1) so the sender never blocks.
var indexerHeartbeatNudge = make(chan struct{}, 1)
// NudgeIndexerHeartbeat signals the indexer heartbeat goroutine to fire immediately.
func NudgeIndexerHeartbeat() {
select {
case indexerHeartbeatNudge <- struct{}{}:
default: // nudge already pending, skip
}
}
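The buffered(1) nudge channel is a standard coalescing-signal pattern: any number of senders can fire without blocking, and pending nudges collapse into a single wake-up. A self-contained sketch of the same pattern:

```go
package main

import "fmt"

// nudge is a buffered(1) signal channel: senders never block, and multiple
// pending nudges coalesce into one wake-up for the receiver.
var nudge = make(chan struct{}, 1)

func sendNudge() {
	select {
	case nudge <- struct{}{}:
	default: // a nudge is already pending; coalesce
	}
}

func main() {
	sendNudge()
	sendNudge() // would block on an unbuffered channel; here it is a no-op
	sendNudge()
	// Drain: exactly one signal is pending regardless of how many were sent.
	received := 0
drain:
	for {
		select {
		case <-nudge:
			received++
		default:
			break drain
		}
	}
	fmt.Println(received) // prints 1
}
```

This is why `NudgeIndexerHeartbeat` can be called from any goroutine (including while a tick is in flight) without deadlock risk.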
func ConnectToIndexers(h host.Host, minIndexer int, maxIndexer int, myPID pp.ID, recordFn ...func() json.RawMessage) error {
TimeWatcher = time.Now().UTC()
logger := oclib.GetLogger()
ctx := context.Background()
// If native addresses are configured, get the indexer pool from the native mesh,
// then start the long-lived heartbeat goroutine toward those indexers.
if conf.GetConfig().NativeIndexerAddresses != "" {
if err := ConnectToNatives(h, minIndexer, maxIndexer, myPID); err != nil {
return err
}
// Step 2: start the long-lived heartbeat goroutine toward the indexer pool.
// replaceStaticIndexers/replenishIndexersFromNative update the map in-place
// so this single goroutine follows all pool changes automatically.
logger.Info().Msg("[native] step 2 — starting long-lived heartbeat to indexer pool")
SendHeartbeat(context.Background(), ProtocolHeartbeat, conf.GetConfig().Name,
h, StreamIndexers, StaticIndexers, &StreamMuIndexes, 20*time.Second, recordFn...)
return nil
}
addresses := strings.Split(conf.GetConfig().IndexerAddresses, ",")
if len(addresses) > maxIndexer {
addresses = addresses[0:maxIndexer]
}
StreamMuIndexes.Lock()
for _, indexerAddr := range addresses {
ad, err := pp.AddrInfoFromString(indexerAddr)
if err != nil {
logger.Err(err)
continue
}
if h.Network().Connectedness(ad.ID) != network.Connected {
if err := h.Connect(ctx, *ad); err != nil {
logger.Err(err)
continue
}
}
StaticIndexers[indexerAddr] = ad
}
indexerCount := len(StaticIndexers)
StreamMuIndexes.Unlock()
SendHeartbeat(context.Background(), ProtocolHeartbeat, conf.GetConfig().Name, h, StreamIndexers, StaticIndexers, &StreamMuIndexes, 20*time.Second, recordFn...) // your indexer is just like a node for the next indexer.
if indexerCount < minIndexer {
return errors.New("not enough indexers configured: this node will be isolated")
}
return nil
}
func AddStreamProtocol(ctx *context.Context, protoS ProtocolStream, h host.Host, proto protocol.ID, id pp.ID, mypid pp.ID, force bool, onStreamCreated *func(network.Stream)) ProtocolStream {
logger := oclib.GetLogger()
if onStreamCreated == nil {
f := func(s network.Stream) {
protoS[proto][id] = &Stream{
@@ -293,7 +499,7 @@ func AddStreamProtocol(ctx *context.Context, protoS ProtocolStream, h host.Host,
if protoS[proto][id] != nil {
protoS[proto][id].Expiry = time.Now().Add(2 * time.Minute)
} else {
logger.Info().Msg("new stream generated: " + fmt.Sprintf("%v", proto) + " " + id.String())
s, err := h.NewStream(*ctx, id, proto)
if err != nil {
panic(err.Error())
@@ -305,11 +511,23 @@ func AddStreamProtocol(ctx *context.Context, protoS ProtocolStream, h host.Host,
}
type Heartbeat struct {
Name string `json:"name"`
Stream *Stream `json:"stream"`
DID string `json:"did"`
PeerID string `json:"peer_id"`
Timestamp int64 `json:"timestamp"`
IndexersBinded []string `json:"indexers_binded"`
Score float64
// Record carries a fresh signed PeerRecord (JSON) so the receiving indexer
// can republish it to the DHT without an extra round-trip.
// Only set by nodes (not indexers heartbeating other indexers).
Record json.RawMessage `json:"record,omitempty"`
}
func (hb *Heartbeat) ComputeIndexerScore(uptimeHours float64, bpms float64, diversity float64) {
hb.Score = ((0.3 * uptimeHours) +
(0.3 * bpms) +
(0.4 * diversity)) * 100
}
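The weighted formula above can be checked with a worked example; `computeIndexerScore` mirrors the method's arithmetic (30% uptime ratio, 30% bandwidth rate, 40% /24 diversity, scaled to 0–100), and the thresholds come from the admission logic in `CheckHeartbeat`:

```go
package main

import "fmt"

// computeIndexerScore mirrors Heartbeat.ComputeIndexerScore: a weighted sum
// of uptime ratio, bandwidth rate, and subnet diversity, each in [0, 1].
// A brand-new peer always has uptime 0, which caps its score and is why
// first heartbeats are admitted against a lower threshold (40 instead of 50).
func computeIndexerScore(uptime, bpms, diversity float64) float64 {
	return (0.3*uptime + 0.3*bpms + 0.4*diversity) * 100
}

func main() {
	// First heartbeat: uptime 0, modest bandwidth, full diversity: ≈44.8,
	// which clears the 40 admission floor but not the steady-state 50.
	fmt.Printf("%.1f\n", computeIndexerScore(0, 0.16, 1.0))
	// Steady state: 75% uptime with the same bandwidth and diversity: ≈67.3.
	fmt.Printf("%.1f\n", computeIndexerScore(0.75, 0.16, 1.0))
}
```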
type HeartbeatInfo []struct {
@@ -318,25 +536,213 @@ type HeartbeatInfo []struct {
const ProtocolHeartbeat = "/opencloud/heartbeat/1.0"
// ProtocolBandwidthProbe is a dedicated short-lived stream used exclusively
// for bandwidth/latency measurement. The handler echoes any bytes it receives.
// All nodes and indexers register this handler so peers can measure them.
const ProtocolBandwidthProbe = "/opencloud/probe/1.0"
// HandleBandwidthProbe echoes back everything written on the stream, then closes.
// It is registered by all participants so the measuring side (the heartbeat receiver)
// can open a dedicated probe stream and read the round-trip latency + throughput.
func HandleBandwidthProbe(s network.Stream) {
defer s.Close()
s.SetDeadline(time.Now().Add(10 * time.Second))
io.Copy(s, s) // echo every byte back to the sender
}
// SendHeartbeat starts a goroutine that sends periodic heartbeats to peers.
// recordFn, when provided, is called on each tick and its output is embedded in
// the heartbeat as a fresh signed PeerRecord so the receiving indexer can
// republish it to the DHT without an extra round-trip.
// Pass no recordFn (or nil) for indexer→indexer / native heartbeats.
func SendHeartbeat(ctx context.Context, proto protocol.ID, name string, h host.Host, ps ProtocolStream, peers map[string]*pp.AddrInfo, mu *sync.RWMutex, interval time.Duration, recordFn ...func() json.RawMessage) {
logger := oclib.GetLogger()
// isIndexerHB is true when this goroutine drives the indexer heartbeat.
// isNativeHB is true when it drives the native heartbeat.
isIndexerHB := mu == &StreamMuIndexes
isNativeHB := mu == &StreamNativeMu
var recFn func() json.RawMessage
if len(recordFn) > 0 {
recFn = recordFn[0]
}
go func() {
logger.Info().Str("proto", string(proto)).Int("peers", len(peers)).Msg("heartbeat started")
t := time.NewTicker(interval)
defer t.Stop()
// doTick sends one round of heartbeats to the current peer snapshot.
doTick := func() {
// Build the heartbeat payload — snapshot current indexer addresses.
StreamMuIndexes.RLock()
addrs := make([]string, 0, len(StaticIndexers))
for addr := range StaticIndexers {
addrs = append(addrs, addr)
}
StreamMuIndexes.RUnlock()
hb := Heartbeat{
Name: name,
PeerID: h.ID().String(),
Timestamp: time.Now().UTC().Unix(),
IndexersBinded: addrs,
}
if recFn != nil {
hb.Record = recFn()
}
// Snapshot the peer list under a read lock so we don't hold the
// write lock during network I/O.
if mu != nil {
mu.RLock()
}
snapshot := make([]*pp.AddrInfo, 0, len(peers))
for _, ix := range peers {
snapshot = append(snapshot, ix)
}
if mu != nil {
mu.RUnlock()
}
for _, ix := range snapshot {
wasConnected := h.Network().Connectedness(ix.ID) == network.Connected
if err := sendHeartbeat(ctx, h, proto, ix, hb, ps, interval); err != nil {
// Step 3: heartbeat failed — remove from pool and trigger replenish.
logger.Info().Str("peer", ix.ID.String()).Str("proto", string(proto)).Msg("[native] step 3 — heartbeat failed, removing peer from pool")
// Remove the dead peer and clean up its stream.
// mu already covers ps when isIndexerHB (same mutex), so one
// lock acquisition is sufficient — no re-entrant double-lock.
if mu != nil {
mu.Lock()
}
if ps[proto] != nil {
if s, ok := ps[proto][ix.ID]; ok {
if s.Stream != nil {
s.Stream.Close()
}
delete(ps[proto], ix.ID)
}
}
lostAddr := ""
for addr, ad := range peers {
if ad.ID == ix.ID {
lostAddr = addr
delete(peers, addr)
break
}
}
need := conf.GetConfig().MinIndexer - len(peers)
remaining := len(peers)
if mu != nil {
mu.Unlock()
}
logger.Info().Int("remaining", remaining).Int("min", conf.GetConfig().MinIndexer).Int("need", need).Msg("[native] step 3 — pool state after removal")
// Step 4: ask the native for the missing indexer count.
if isIndexerHB && conf.GetConfig().NativeIndexerAddresses != "" {
if need < 1 {
need = 1
}
logger.Info().Int("need", need).Msg("[native] step 3→4 — triggering replenish")
go replenishIndexersFromNative(h, need)
}
// Native heartbeat failed — find a replacement native.
// Case 1: if the dead native was also serving as an indexer, evict it
// from StaticIndexers immediately without waiting for the indexer HB tick.
if isNativeHB {
logger.Info().Str("addr", lostAddr).Msg("[native] step 3 — native heartbeat failed, triggering native replenish")
if lostAddr != "" && conf.GetConfig().NativeIndexerAddresses != "" {
StreamMuIndexes.Lock()
if _, wasIndexer := StaticIndexers[lostAddr]; wasIndexer {
delete(StaticIndexers, lostAddr)
if s := StreamIndexers[ProtocolHeartbeat]; s != nil {
if stream, ok := s[ix.ID]; ok {
if stream.Stream != nil {
stream.Stream.Close()
}
delete(s, ix.ID)
}
}
idxNeed := conf.GetConfig().MinIndexer - len(StaticIndexers)
StreamMuIndexes.Unlock()
if idxNeed < 1 {
idxNeed = 1
}
logger.Info().Str("addr", lostAddr).Msg("[native] dead native evicted from indexer pool, triggering replenish")
go replenishIndexersFromNative(h, idxNeed)
} else {
StreamMuIndexes.Unlock()
}
}
go replenishNativesFromPeers(h, lostAddr, proto)
}
} else {
// Case 2: native-as-indexer reconnected after a restart.
// If the peer was disconnected before this tick and the heartbeat just
// succeeded (transparent reconnect), the native may have restarted with
// blank state (responsiblePeers empty). Evict it from StaticIndexers and
// re-request an assignment so the native re-tracks us properly and
// runOffloadLoop can eventually migrate us to real indexers.
if !wasConnected && isIndexerHB && conf.GetConfig().NativeIndexerAddresses != "" {
StreamNativeMu.RLock()
isNativeIndexer := false
for _, ad := range StaticNatives {
if ad.ID == ix.ID {
isNativeIndexer = true
break
}
}
StreamNativeMu.RUnlock()
if isNativeIndexer {
if mu != nil {
mu.Lock()
}
if ps[proto] != nil {
if s, ok := ps[proto][ix.ID]; ok {
if s.Stream != nil {
s.Stream.Close()
}
delete(ps[proto], ix.ID)
}
}
reconnectedAddr := ""
for addr, ad := range peers {
if ad.ID == ix.ID {
reconnectedAddr = addr
delete(peers, addr)
break
}
}
idxNeed := conf.GetConfig().MinIndexer - len(peers)
if mu != nil {
mu.Unlock()
}
if idxNeed < 1 {
idxNeed = 1
}
logger.Info().Str("addr", reconnectedAddr).Str("peer", ix.ID.String()).Msg(
"[native] native-as-indexer reconnected after restart — evicting and re-requesting assignment")
go replenishIndexersFromNative(h, idxNeed)
}
}
logger.Debug().Str("peer", ix.ID.String()).Str("proto", string(proto)).Msg("[native] step 2 — heartbeat sent ok")
}
}
}
for {
select {
case <-t.C:
doTick()
case <-indexerHeartbeatNudge:
if isIndexerHB {
logger.Info().Msg("[native] step 2 — nudge received, heartbeating new indexers immediately")
doTick()
}
case <-nativeHeartbeatNudge:
if isNativeHB {
logger.Info().Msg("[native] native nudge received, heartbeating replacement native immediately")
doTick()
}
case <-ctx.Done():
return
@@ -345,25 +751,73 @@ func SendHeartbeat(ctx context.Context, proto protocol.ID, name string, h host.H
}()
}
type ProtocolInfo struct {
PersistantStream bool
WaitResponse bool
TTL time.Duration
}
func TempStream(h host.Host, ad pp.AddrInfo, proto protocol.ID, did string, streams ProtocolStream, pts map[protocol.ID]*ProtocolInfo, mu *sync.RWMutex) (ProtocolStream, error) {
expiry := 2 * time.Second
if pts[proto] != nil {
expiry = pts[proto].TTL
}
ctxTTL, cancel := context.WithTimeout(context.Background(), expiry)
defer cancel()
if h.Network().Connectedness(ad.ID) != network.Connected {
if err := h.Connect(ctxTTL, ad); err != nil {
return streams, err
}
}
mu.RLock()
if streams[proto] != nil && streams[proto][ad.ID] != nil {
mu.RUnlock()
return streams, nil
}
mu.RUnlock()
s, err := h.NewStream(ctxTTL, ad.ID, proto)
if err != nil {
return streams, err
}
mu.Lock()
if streams[proto] == nil {
streams[proto] = map[pp.ID]*Stream{}
}
streams[proto][ad.ID] = &Stream{
DID: did,
Stream: s,
Expiry: time.Now().UTC().Add(expiry),
}
mu.Unlock()
// Evict the cached stream once its TTL elapses.
time.AfterFunc(expiry, func() {
mu.Lock()
delete(streams[proto], ad.ID)
mu.Unlock()
})
return streams, nil
}
func sendHeartbeat(ctx context.Context, h host.Host, proto protocol.ID, p *pp.AddrInfo,
hb Heartbeat, ps ProtocolStream, interval time.Duration) error {
logger := oclib.GetLogger()
if ps[proto] == nil {
ps[proto] = map[pp.ID]*Stream{}
}
streams := ps[proto]
pss, exists := streams[p.ID]
ctxTTL, cancel := context.WithTimeout(ctx, 3*interval)
defer cancel()
// Connect if necessary
if h.Network().Connectedness(p.ID) != network.Connected {
_ = h.Connect(ctxTTL, *p)
if err := h.Connect(ctxTTL, *p); err != nil {
logger.Err(err)
return err
}
exists = false // the stream must be recreated
}
// Create the stream if missing or closed
if !exists || pss.Stream == nil {
logger.Info().Msg("new heartbeat stream opened: " + fmt.Sprintf("%v", proto) + " " + p.ID.String())
s, err := h.NewStream(ctx, p.ID, proto)
if err != nil {
logger.Err(err)
return err
}
pss = &Stream{
@@ -384,18 +838,3 @@ func sendHeartbeat(ctx context.Context, h host.Host, proto protocol.ID, p *pp.Ad
pss.Expiry = time.Now().UTC().Add(2 * time.Minute)
return nil
}


@@ -2,12 +2,8 @@ package common
import (
"bytes"
"encoding/base64"
"errors"
"oc-discovery/conf"
"oc-discovery/models"
"os"
@@ -47,45 +43,6 @@ func Verify(pub crypto.PubKey, data, sig []byte) (bool, error) {
return pub.Verify(data, sig)
}
func LoadPSKFromFile() (pnet.PSK, error) {
path := conf.GetConfig().PSKPath
data, err := os.ReadFile(path)


@@ -6,6 +6,10 @@ import (
"cloud.o-forge.io/core/oc-lib/models/peer"
)
type HeartBeatStreamed interface {
GetUptimeTracker() *UptimeTracker
}
type DiscoveryPeer interface {
GetPeerRecord(ctx context.Context, key string) ([]*peer.Peer, error)
}


@@ -0,0 +1,777 @@
package common
import (
"context"
"encoding/json"
"errors"
"math/rand"
"oc-discovery/conf"
"strings"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
const (
ProtocolNativeSubscription = "/opencloud/native/subscribe/1.0"
ProtocolNativeGetIndexers = "/opencloud/native/indexers/1.0"
// ProtocolNativeConsensus is used by nodes/indexers to cross-validate an indexer
// pool against all configured native peers.
ProtocolNativeConsensus = "/opencloud/native/consensus/1.0"
RecommendedHeartbeatInterval = 60 * time.Second
// TopicIndexerRegistry is the PubSub topic used by native indexers to gossip
// newly registered indexer PeerIDs to neighbouring natives.
TopicIndexerRegistry = "oc-indexer-registry"
// consensusQueryTimeout is the per-native timeout for a consensus query.
consensusQueryTimeout = 3 * time.Second
// consensusCollectTimeout is the total wait for all native responses.
consensusCollectTimeout = 4 * time.Second
)
// ConsensusRequest is sent by a node/indexer to a native to validate a candidate
// indexer list. The native replies with what it trusts and what it suggests instead.
type ConsensusRequest struct {
Candidates []string `json:"candidates"`
}
// ConsensusResponse is returned by a native during a consensus challenge.
// Trusted = candidates the native considers alive.
// Suggestions = extras the native knows and trusts but that were not in the candidate list.
type ConsensusResponse struct {
Trusted []string `json:"trusted"`
Suggestions []string `json:"suggestions,omitempty"`
}
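The consensus exchange above collects one `ConsensusResponse` per native and keeps the candidates that enough natives trust. The exact tally rule lives in `resolvePool` (not shown in this hunk), so the following is a hypothetical illustration of a strict-majority tally; `majorityConfirmed` and the lowercase struct are my own names, not the repo's:

```go
package main

import "fmt"

// consensusResponse mirrors the native reply: candidates it trusts, plus
// extra suggestions it knows about.
type consensusResponse struct {
	Trusted     []string
	Suggestions []string
}

// majorityConfirmed keeps every candidate trusted by a strict majority of
// the natives that answered. Hypothetical helper: an illustration of how
// responses could be tallied, not the repo's resolvePool implementation.
func majorityConfirmed(candidates []string, responses []consensusResponse) []string {
	votes := map[string]int{}
	for _, r := range responses {
		for _, c := range r.Trusted {
			votes[c]++
		}
	}
	confirmed := []string{}
	for _, c := range candidates {
		if votes[c]*2 > len(responses) { // strict majority
			confirmed = append(confirmed, c)
		}
	}
	return confirmed
}

func main() {
	cands := []string{"idxA", "idxB"}
	resps := []consensusResponse{
		{Trusted: []string{"idxA", "idxB"}},
		{Trusted: []string{"idxA"}},
		{Trusted: []string{"idxA"}},
	}
	fmt.Println(majorityConfirmed(cands, resps)) // [idxA]: idxB got only 1 of 3 votes
}
```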
// IndexerRegistration is sent by an indexer to a native to signal its alive state.
// Only Addr is required; PeerID is derived from it if omitted.
type IndexerRegistration struct {
PeerID string `json:"peer_id,omitempty"`
Addr string `json:"addr"`
}
// GetIndexersRequest asks a native for a pool of live indexers.
type GetIndexersRequest struct {
Count int `json:"count"`
From string `json:"from"`
}
// GetIndexersResponse is returned by the native with live indexer multiaddrs.
type GetIndexersResponse struct {
Indexers []string `json:"indexers"`
IsSelfFallback bool `json:"is_self_fallback,omitempty"`
}
var StaticNatives = map[string]*pp.AddrInfo{}
var StreamNativeMu sync.RWMutex
var StreamNatives ProtocolStream = ProtocolStream{}
// nativeHeartbeatOnce ensures we start exactly one long-lived heartbeat goroutine
// toward the native mesh, even when ConnectToNatives is called from recovery paths.
var nativeHeartbeatOnce sync.Once
// nativeMeshHeartbeatOnce guards the native-to-native heartbeat goroutine started
// by EnsureNativePeers so only one goroutine covers the whole StaticNatives map.
var nativeMeshHeartbeatOnce sync.Once
// ConnectToNatives is the initial setup for nodes/indexers in native mode:
// 1. Parses native addresses → StaticNatives.
// 2. Starts a single long-lived heartbeat goroutine toward the native mesh.
// 3. Fetches an initial indexer pool from the first responsive native.
// 4. Runs consensus when real (non-fallback) indexers are returned.
// 5. Replaces StaticIndexers with the confirmed pool.
func ConnectToNatives(h host.Host, minIndexer int, maxIndexer int, myPID pp.ID) error {
logger := oclib.GetLogger()
logger.Info().Msg("[native] step 1 — parsing native addresses")
// Parse native addresses — safe to call multiple times.
StreamNativeMu.Lock()
orderedAddrs := []string{}
for _, addr := range strings.Split(conf.GetConfig().NativeIndexerAddresses, ",") {
addr = strings.TrimSpace(addr)
if addr == "" {
continue
}
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
logger.Err(err).Msg("[native] step 1 — invalid native addr")
continue
}
StaticNatives[addr] = ad
orderedAddrs = append(orderedAddrs, addr)
logger.Info().Str("addr", addr).Msg("[native] step 1 — native registered")
}
if len(StaticNatives) == 0 {
StreamNativeMu.Unlock()
return errors.New("no valid native addresses configured")
}
StreamNativeMu.Unlock()
logger.Info().Int("count", len(orderedAddrs)).Msg("[native] step 1 — natives parsed")
// Step 1: one long-lived heartbeat to each native.
nativeHeartbeatOnce.Do(func() {
logger.Info().Msg("[native] step 1 — starting long-lived heartbeat to native mesh")
SendHeartbeat(context.Background(), ProtocolHeartbeat,
conf.GetConfig().Name, h, StreamNatives, StaticNatives, &StreamNativeMu, 20*time.Second)
})
// Fetch initial pool from the first responsive native.
logger.Info().Int("want", maxIndexer).Msg("[native] step 1 — fetching indexer pool from native")
candidates, isFallback := fetchIndexersFromNative(h, orderedAddrs, maxIndexer)
if len(candidates) == 0 {
logger.Warn().Msg("[native] step 1 — no candidates returned by any native")
if minIndexer > 0 {
return errors.New("ConnectToNatives: no indexers available from any native")
}
return nil
}
logger.Info().Int("candidates", len(candidates)).Bool("fallback", isFallback).Msg("[native] step 1 — pool received")
// Step 2: populate StaticIndexers — consensus for real indexers, direct for fallback.
pool := resolvePool(h, candidates, isFallback, maxIndexer)
replaceStaticIndexers(pool)
StreamMuIndexes.RLock()
indexerCount := len(StaticIndexers)
StreamMuIndexes.RUnlock()
logger.Info().Int("pool_size", indexerCount).Msg("[native] step 2 — StaticIndexers replaced")
if minIndexer > 0 && indexerCount < minIndexer {
return errors.New("not enough majority-confirmed indexers available")
}
return nil
}
// replenishIndexersFromNative is called when an indexer heartbeat fails (step 3→4).
// It asks the native for exactly `need` replacement indexers, runs consensus when
// real indexers are returned, and adds the results to StaticIndexers without
// clearing the existing pool.
func replenishIndexersFromNative(h host.Host, need int) {
if need <= 0 {
return
}
logger := oclib.GetLogger()
logger.Info().Int("need", need).Msg("[native] step 4 — replenishing indexer pool from native")
StreamNativeMu.RLock()
addrs := make([]string, 0, len(StaticNatives))
for addr := range StaticNatives {
addrs = append(addrs, addr)
}
StreamNativeMu.RUnlock()
candidates, isFallback := fetchIndexersFromNative(h, addrs, need)
if len(candidates) == 0 {
logger.Warn().Msg("[native] step 4 — no candidates returned by any native")
return
}
logger.Info().Int("candidates", len(candidates)).Bool("fallback", isFallback).Msg("[native] step 4 — candidates received")
pool := resolvePool(h, candidates, isFallback, need)
if len(pool) == 0 {
logger.Warn().Msg("[native] step 4 — consensus yielded no confirmed indexers")
return
}
// Add new indexers to the pool — do NOT clear existing ones.
StreamMuIndexes.Lock()
for addr, ad := range pool {
StaticIndexers[addr] = ad
}
total := len(StaticIndexers)
StreamMuIndexes.Unlock()
logger.Info().Int("added", len(pool)).Int("total", total).Msg("[native] step 4 — pool replenished")
// Nudge the heartbeat goroutine to connect immediately instead of waiting
// for the next 20s tick.
NudgeIndexerHeartbeat()
logger.Info().Msg("[native] step 4 — heartbeat goroutine nudged")
}
// fetchIndexersFromNative opens a ProtocolNativeGetIndexers stream to the first
// responsive native and returns the candidate list and fallback flag.
func fetchIndexersFromNative(h host.Host, nativeAddrs []string, count int) (candidates []string, isFallback bool) {
logger := oclib.GetLogger()
for _, addr := range nativeAddrs {
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
logger.Warn().Str("addr", addr).Msg("[native] fetch — skipping invalid addr")
continue
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
if err := h.Connect(ctx, *ad); err != nil {
cancel()
logger.Warn().Str("addr", addr).Err(err).Msg("[native] fetch — connect failed")
continue
}
s, err := h.NewStream(ctx, ad.ID, ProtocolNativeGetIndexers)
cancel()
if err != nil {
logger.Warn().Str("addr", addr).Err(err).Msg("[native] fetch — stream open failed")
continue
}
req := GetIndexersRequest{Count: count, From: h.ID().String()}
if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
s.Close()
logger.Warn().Str("addr", addr).Err(encErr).Msg("[native] fetch — encode request failed")
continue
}
var resp GetIndexersResponse
if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
s.Close()
logger.Warn().Str("addr", addr).Err(decErr).Msg("[native] fetch — decode response failed")
continue
}
s.Close()
logger.Info().Str("native", addr).Int("indexers", len(resp.Indexers)).Bool("fallback", resp.IsSelfFallback).Msg("[native] fetch — response received")
return resp.Indexers, resp.IsSelfFallback
}
logger.Warn().Msg("[native] fetch — no native responded")
return nil, false
}
// resolvePool converts a candidate list to a validated addr→AddrInfo map.
// When isFallback is true the native itself is the indexer — no consensus needed.
// When isFallback is false, consensus is run before accepting the candidates.
func resolvePool(h host.Host, candidates []string, isFallback bool, maxIndexer int) map[string]*pp.AddrInfo {
logger := oclib.GetLogger()
if isFallback {
logger.Info().Strs("addrs", candidates).Msg("[native] resolve — fallback mode, skipping consensus")
pool := make(map[string]*pp.AddrInfo, len(candidates))
for _, addr := range candidates {
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
continue
}
pool[addr] = ad
}
return pool
}
// Round 1.
logger.Info().Int("candidates", len(candidates)).Msg("[native] resolve — consensus round 1")
confirmed, suggestions := clientSideConsensus(h, candidates)
logger.Info().Int("confirmed", len(confirmed)).Int("suggestions", len(suggestions)).Msg("[native] resolve — consensus round 1 done")
// Round 2: fill gaps from suggestions if below target.
if len(confirmed) < maxIndexer && len(suggestions) > 0 {
rand.Shuffle(len(suggestions), func(i, j int) { suggestions[i], suggestions[j] = suggestions[j], suggestions[i] })
gap := maxIndexer - len(confirmed)
if gap > len(suggestions) {
gap = len(suggestions)
}
logger.Info().Int("gap", gap).Msg("[native] resolve — consensus round 2 (filling gaps)")
confirmed2, _ := clientSideConsensus(h, append(confirmed, suggestions[:gap]...))
if len(confirmed2) > 0 {
confirmed = confirmed2
}
logger.Info().Int("confirmed", len(confirmed)).Msg("[native] resolve — consensus round 2 done")
}
pool := make(map[string]*pp.AddrInfo, len(confirmed))
for _, addr := range confirmed {
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
continue
}
pool[addr] = ad
}
logger.Info().Int("pool_size", len(pool)).Msg("[native] resolve — pool ready")
return pool
}
// replaceStaticIndexers atomically replaces the active indexer pool.
// Peers no longer in next are dropped from StaticIndexers, so the SendHeartbeat
// goroutine stops sending to them on the next tick.
func replaceStaticIndexers(next map[string]*pp.AddrInfo) {
StreamMuIndexes.Lock()
defer StreamMuIndexes.Unlock()
for addr := range StaticIndexers {
if _, keep := next[addr]; !keep {
delete(StaticIndexers, addr)
}
}
for addr, ad := range next {
StaticIndexers[addr] = ad
}
}
// clientSideConsensus challenges a candidate list to ALL configured native peers
// in parallel. Each native replies with the candidates it trusts plus extras it
// recommends. An indexer is confirmed when strictly more than 50% of responding
// natives trust it.
func clientSideConsensus(h host.Host, candidates []string) (confirmed []string, suggestions []string) {
if len(candidates) == 0 {
return nil, nil
}
StreamNativeMu.RLock()
peers := make([]*pp.AddrInfo, 0, len(StaticNatives))
for _, ad := range StaticNatives {
peers = append(peers, ad)
}
StreamNativeMu.RUnlock()
if len(peers) == 0 {
return candidates, nil
}
type nativeResult struct {
trusted []string
suggestions []string
responded bool
}
ch := make(chan nativeResult, len(peers))
for _, ad := range peers {
go func(ad *pp.AddrInfo) {
ctx, cancel := context.WithTimeout(context.Background(), consensusQueryTimeout)
defer cancel()
if err := h.Connect(ctx, *ad); err != nil {
ch <- nativeResult{}
return
}
s, err := h.NewStream(ctx, ad.ID, ProtocolNativeConsensus)
if err != nil {
ch <- nativeResult{}
return
}
defer s.Close()
if err := json.NewEncoder(s).Encode(ConsensusRequest{Candidates: candidates}); err != nil {
ch <- nativeResult{}
return
}
var resp ConsensusResponse
if err := json.NewDecoder(s).Decode(&resp); err != nil {
ch <- nativeResult{}
return
}
ch <- nativeResult{trusted: resp.Trusted, suggestions: resp.Suggestions, responded: true}
}(ad)
}
timer := time.NewTimer(consensusCollectTimeout)
defer timer.Stop()
trustedCounts := map[string]int{}
suggestionPool := map[string]struct{}{}
total := 0
collected := 0
collect:
for collected < len(peers) {
select {
case r := <-ch:
collected++
if !r.responded {
continue
}
total++
seen := map[string]struct{}{}
for _, addr := range r.trusted {
if _, already := seen[addr]; !already {
trustedCounts[addr]++
seen[addr] = struct{}{}
}
}
for _, addr := range r.suggestions {
suggestionPool[addr] = struct{}{}
}
case <-timer.C:
break collect
}
}
if total == 0 {
return candidates, nil
}
confirmedSet := map[string]struct{}{}
for addr, count := range trustedCounts {
if count*2 > total {
confirmed = append(confirmed, addr)
confirmedSet[addr] = struct{}{}
}
}
for addr := range suggestionPool {
if _, ok := confirmedSet[addr]; !ok {
suggestions = append(suggestions, addr)
}
}
return
}
// RegisterWithNative sends a one-shot registration to each configured native indexer.
// Should be called periodically every RecommendedHeartbeatInterval.
func RegisterWithNative(h host.Host, nativeAddressesStr string) {
logger := oclib.GetLogger()
myAddr := ""
addrs := h.Addrs()
if len(addrs) > 0 && !strings.Contains(addrs[len(addrs)-1].String(), "127.0.0.1") {
myAddr = addrs[len(addrs)-1].String() + "/p2p/" + h.ID().String()
}
if myAddr == "" {
logger.Warn().Msg("RegisterWithNative: no routable address yet, skipping")
return
}
reg := IndexerRegistration{
PeerID: h.ID().String(),
Addr: myAddr,
}
for _, addr := range strings.Split(nativeAddressesStr, ",") {
addr = strings.TrimSpace(addr)
if addr == "" {
continue
}
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
logger.Err(err).Msg("RegisterWithNative: invalid addr")
continue
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
if err := h.Connect(ctx, *ad); err != nil {
cancel()
continue
}
s, err := h.NewStream(ctx, ad.ID, ProtocolNativeSubscription)
cancel()
if err != nil {
logger.Err(err).Msg("RegisterWithNative: stream open failed")
continue
}
if err := json.NewEncoder(s).Encode(reg); err != nil {
logger.Err(err).Msg("RegisterWithNative: encode failed")
}
s.Close()
}
}
// EnsureNativePeers populates StaticNatives from config and starts a single
// heartbeat goroutine toward the native mesh. Safe to call multiple times;
// the heartbeat goroutine is started at most once (nativeMeshHeartbeatOnce).
func EnsureNativePeers(h host.Host) {
logger := oclib.GetLogger()
nativeAddrs := conf.GetConfig().NativeIndexerAddresses
if nativeAddrs == "" {
return
}
StreamNativeMu.Lock()
for _, addr := range strings.Split(nativeAddrs, ",") {
addr = strings.TrimSpace(addr)
if addr == "" {
continue
}
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
continue
}
StaticNatives[addr] = ad
logger.Info().Str("addr", addr).Msg("native: registered peer in native mesh")
}
StreamNativeMu.Unlock()
// One heartbeat goroutine iterates over all of StaticNatives on each tick;
// starting one per address would multiply heartbeats by the native count.
nativeMeshHeartbeatOnce.Do(func() {
logger.Info().Msg("native: starting mesh heartbeat goroutine")
SendHeartbeat(context.Background(), ProtocolHeartbeat,
conf.GetConfig().Name, h, StreamNatives, StaticNatives, &StreamNativeMu, 20*time.Second)
})
}
func StartNativeRegistration(h host.Host, nativeAddressesStr string) {
go func() {
// Poll until a routable (non-loopback) address is available before the first
// registration attempt. libp2p may not have discovered external addresses yet
// at startup. Cap at 12 retries (~1 minute) so we don't spin indefinitely.
for i := 0; i < 12; i++ {
addrs := h.Addrs()
if len(addrs) > 0 && !strings.Contains(addrs[len(addrs)-1].String(), "127.0.0.1") {
break // routable address found
}
time.Sleep(5 * time.Second)
}
RegisterWithNative(h, nativeAddressesStr)
t := time.NewTicker(RecommendedHeartbeatInterval)
defer t.Stop()
for range t.C {
RegisterWithNative(h, nativeAddressesStr)
}
}()
}
// ── Lost-native replacement ───────────────────────────────────────────────────
const (
// ProtocolNativeGetPeers lets a node/indexer ask a native for a random
// selection of that native's own native contacts (to replace a dead native).
ProtocolNativeGetPeers = "/opencloud/native/peers/1.0"
// ProtocolIndexerGetNatives lets nodes/indexers ask a connected indexer for
// its configured native addresses (fallback when no alive native responds).
ProtocolIndexerGetNatives = "/opencloud/indexer/natives/1.0"
// retryNativeInterval is how often retryLostNative polls a dead native.
retryNativeInterval = 30 * time.Second
)
// GetNativePeersRequest is sent to a native to ask for its known native contacts.
type GetNativePeersRequest struct {
Exclude []string `json:"exclude"`
Count int `json:"count"`
}
// GetNativePeersResponse carries native addresses returned by a native's peer list.
type GetNativePeersResponse struct {
Peers []string `json:"peers"`
}
// GetIndexerNativesRequest is sent to an indexer to ask for its configured native addresses.
type GetIndexerNativesRequest struct {
Exclude []string `json:"exclude"`
}
// GetIndexerNativesResponse carries native addresses returned by an indexer.
type GetIndexerNativesResponse struct {
Natives []string `json:"natives"`
}
// nativeHeartbeatNudge allows replenishNativesFromPeers to trigger an immediate
// native heartbeat tick after adding a replacement native to the pool.
var nativeHeartbeatNudge = make(chan struct{}, 1)
// NudgeNativeHeartbeat signals the native heartbeat goroutine to fire immediately.
func NudgeNativeHeartbeat() {
select {
case nativeHeartbeatNudge <- struct{}{}:
default: // nudge already pending, skip
}
}
// replenishIndexersIfNeeded checks if the indexer pool is below the configured
// minimum (or empty) and, if so, asks the native mesh for replacements.
// Called whenever a native is recovered so the indexer pool is restored.
func replenishIndexersIfNeeded(h host.Host) {
logger := oclib.GetLogger()
minIdx := conf.GetConfig().MinIndexer
if minIdx < 1 {
minIdx = 1
}
StreamMuIndexes.RLock()
indexerCount := len(StaticIndexers)
StreamMuIndexes.RUnlock()
if indexerCount < minIdx {
need := minIdx - indexerCount
logger.Info().Int("need", need).Int("current", indexerCount).Msg("[native] native recovered — replenishing indexer pool")
go replenishIndexersFromNative(h, need)
}
}
// replenishNativesFromPeers is called when the heartbeat to a native fails.
// Flow:
// 1. Ask other alive natives for one of their native contacts (ProtocolNativeGetPeers).
// 2. If none respond or return a new address, ask connected indexers (ProtocolIndexerGetNatives).
// 3. If no replacement found:
// - remaining > 1 → ignore (enough natives remain).
// - remaining ≤ 1 → start periodic retry (retryLostNative).
func replenishNativesFromPeers(h host.Host, lostAddr string, proto protocol.ID) {
if lostAddr == "" {
return
}
logger := oclib.GetLogger()
logger.Info().Str("lost", lostAddr).Msg("[native] replenish natives — start")
// Build exclude list: the lost addr + all currently alive natives.
// lostAddr has already been removed from StaticNatives by doTick.
StreamNativeMu.RLock()
remaining := len(StaticNatives)
exclude := make([]string, 0, remaining+1)
exclude = append(exclude, lostAddr)
for addr := range StaticNatives {
exclude = append(exclude, addr)
}
StreamNativeMu.RUnlock()
logger.Info().Int("remaining", remaining).Msg("[native] replenish natives — step 1: ask alive natives for a peer")
// Step 1: ask other alive natives for a replacement.
newAddr := fetchNativeFromNatives(h, exclude)
// Step 2: fallback — ask connected indexers for their native addresses.
if newAddr == "" {
logger.Info().Msg("[native] replenish natives — step 2: ask indexers for their native addresses")
newAddr = fetchNativeFromIndexers(h, exclude)
}
if newAddr != "" {
ad, err := pp.AddrInfoFromString(newAddr)
if err == nil {
StreamNativeMu.Lock()
StaticNatives[newAddr] = ad
StreamNativeMu.Unlock()
logger.Info().Str("new", newAddr).Msg("[native] replenish natives — replacement added, nudging heartbeat")
NudgeNativeHeartbeat()
replenishIndexersIfNeeded(h)
return
}
}
// Step 3: no replacement found.
logger.Warn().Int("remaining", remaining).Msg("[native] replenish natives — no replacement found")
if remaining > 1 {
logger.Info().Msg("[native] replenish natives — enough natives remain, ignoring loss")
return
}
// Last (or only) native — retry periodically.
logger.Info().Str("addr", lostAddr).Msg("[native] replenish natives — last native lost, starting periodic retry")
go retryLostNative(h, lostAddr, proto)
}
// fetchNativeFromNatives asks each alive native for one of its own native contacts
// not in exclude. Returns the first new address found or "" if none.
func fetchNativeFromNatives(h host.Host, exclude []string) string {
logger := oclib.GetLogger()
excludeSet := make(map[string]struct{}, len(exclude))
for _, e := range exclude {
excludeSet[e] = struct{}{}
}
StreamNativeMu.RLock()
natives := make([]*pp.AddrInfo, 0, len(StaticNatives))
for _, ad := range StaticNatives {
natives = append(natives, ad)
}
StreamNativeMu.RUnlock()
rand.Shuffle(len(natives), func(i, j int) { natives[i], natives[j] = natives[j], natives[i] })
for _, ad := range natives {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
if err := h.Connect(ctx, *ad); err != nil {
cancel()
logger.Warn().Str("native", ad.ID.String()).Err(err).Msg("[native] fetch native peers — connect failed")
continue
}
s, err := h.NewStream(ctx, ad.ID, ProtocolNativeGetPeers)
cancel()
if err != nil {
logger.Warn().Str("native", ad.ID.String()).Err(err).Msg("[native] fetch native peers — stream failed")
continue
}
req := GetNativePeersRequest{Exclude: exclude, Count: 1}
if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
s.Close()
continue
}
var resp GetNativePeersResponse
if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
s.Close()
continue
}
s.Close()
for _, peer := range resp.Peers {
if _, excluded := excludeSet[peer]; !excluded && peer != "" {
logger.Info().Str("from", ad.ID.String()).Str("new", peer).Msg("[native] fetch native peers — got replacement")
return peer
}
}
logger.Debug().Str("native", ad.ID.String()).Msg("[native] fetch native peers — no new native from this peer")
}
return ""
}
// fetchNativeFromIndexers asks connected indexers for their configured native addresses,
// returning the first one not in exclude.
func fetchNativeFromIndexers(h host.Host, exclude []string) string {
logger := oclib.GetLogger()
excludeSet := make(map[string]struct{}, len(exclude))
for _, e := range exclude {
excludeSet[e] = struct{}{}
}
StreamMuIndexes.RLock()
indexers := make([]*pp.AddrInfo, 0, len(StaticIndexers))
for _, ad := range StaticIndexers {
indexers = append(indexers, ad)
}
StreamMuIndexes.RUnlock()
rand.Shuffle(len(indexers), func(i, j int) { indexers[i], indexers[j] = indexers[j], indexers[i] })
for _, ad := range indexers {
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
if err := h.Connect(ctx, *ad); err != nil {
cancel()
continue
}
s, err := h.NewStream(ctx, ad.ID, ProtocolIndexerGetNatives)
cancel()
if err != nil {
logger.Warn().Str("indexer", ad.ID.String()).Err(err).Msg("[native] fetch indexer natives — stream failed")
continue
}
req := GetIndexerNativesRequest{Exclude: exclude}
if encErr := json.NewEncoder(s).Encode(req); encErr != nil {
s.Close()
continue
}
var resp GetIndexerNativesResponse
if decErr := json.NewDecoder(s).Decode(&resp); decErr != nil {
s.Close()
continue
}
s.Close()
for _, nativeAddr := range resp.Natives {
if _, excluded := excludeSet[nativeAddr]; !excluded && nativeAddr != "" {
logger.Info().Str("indexer", ad.ID.String()).Str("native", nativeAddr).Msg("[native] fetch indexer natives — got native")
return nativeAddr
}
}
}
logger.Warn().Msg("[native] fetch indexer natives — no native found from indexers")
return ""
}
// retryLostNative periodically retries connecting to a lost native address until
// it becomes reachable again or was already restored by another path.
func retryLostNative(h host.Host, addr string, nativeProto protocol.ID) {
logger := oclib.GetLogger()
logger.Info().Str("addr", addr).Msg("[native] retry — periodic retry for lost native started")
t := time.NewTicker(retryNativeInterval)
defer t.Stop()
for range t.C {
StreamNativeMu.RLock()
_, alreadyRestored := StaticNatives[addr]
StreamNativeMu.RUnlock()
if alreadyRestored {
logger.Info().Str("addr", addr).Msg("[native] retry — native already restored, stopping retry")
return
}
ad, err := pp.AddrInfoFromString(addr)
if err != nil {
logger.Warn().Str("addr", addr).Msg("[native] retry — invalid addr, stopping retry")
return
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
err = h.Connect(ctx, *ad)
cancel()
if err != nil {
logger.Warn().Str("addr", addr).Msg("[native] retry — still unreachable")
continue
}
// Reachable again — add back to pool.
StreamNativeMu.Lock()
StaticNatives[addr] = ad
StreamNativeMu.Unlock()
logger.Info().Str("addr", addr).Msg("[native] retry — native reconnected and added back to pool")
NudgeNativeHeartbeat()
replenishIndexersIfNeeded(h)
if nativeProto == ProtocolNativeGetIndexers {
StartNativeRegistration(h, addr) // register back
}
return
}
}


@@ -0,0 +1,39 @@
package common
import (
"context"
"fmt"
"net"
"time"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/multiformats/go-multiaddr"
)
func PeerIsAlive(h host.Host, ad pp.AddrInfo) bool {
ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
defer cancel()
err := h.Connect(ctx, ad)
return err == nil
}
func ExtractIP(addr string) (net.IP, error) {
ma, err := multiaddr.NewMultiaddr(addr)
if err != nil {
return nil, err
}
ipStr, err := ma.ValueForProtocol(multiaddr.P_IP4)
if err != nil {
ipStr, err = ma.ValueForProtocol(multiaddr.P_IP6)
if err != nil {
return nil, err
}
}
ip := net.ParseIP(ipStr)
if ip == nil {
return nil, fmt.Errorf("invalid IP: %s", ipStr)
}
return ip, nil
}


@@ -5,9 +5,9 @@ import (
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"strings"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
@@ -19,56 +19,42 @@ import (
"github.com/libp2p/go-libp2p/core/peer"
)
type PeerRecord struct {
Name string `json:"name"`
DID string `json:"did"` // real PEER ID
PeerID string `json:"peer_id"`
PubKey []byte `json:"pub_key"`
APIUrl string `json:"api_url"`
StreamAddress string `json:"stream_address"`
NATSAddress string `json:"nats_address"`
WalletAddress string `json:"wallet_address"`
Signature []byte `json:"signature"`
ExpiryDate time.Time `json:"expiry_date"`
type PeerRecordPayload struct {
Name string `json:"name"`
DID string `json:"did"`
PubKey []byte `json:"pub_key"`
ExpiryDate time.Time `json:"expiry_date"`
}
TTL int `json:"ttl"` // max of hop diffusion
NoPub bool `json:"no_pub"`
type PeerRecord struct {
PeerRecordPayload
PeerID string `json:"peer_id"`
APIUrl string `json:"api_url"`
StreamAddress string `json:"stream_address"`
NATSAddress string `json:"nats_address"`
WalletAddress string `json:"wallet_address"`
Signature []byte `json:"signature"`
}
func (p *PeerRecord) Sign() error {
priv, err := common.LoadKeyFromFilePrivate()
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return err
}
dht := PeerRecord{
Name: p.Name,
DID: p.DID,
PubKey: p.PubKey,
ExpiryDate: p.ExpiryDate,
}
payload, _ := json.Marshal(dht)
payload, _ := json.Marshal(p.PeerRecordPayload)
b, err := common.Sign(priv, payload)
p.Signature = b
return err
}
func (p *PeerRecord) Verify() (crypto.PubKey, error) {
fmt.Println(p.PubKey)
pubKey, err := crypto.UnmarshalPublicKey(p.PubKey) // retrieve pub key in message
if err != nil {
fmt.Println("UnmarshalPublicKey")
return pubKey, err
}
dht := PeerRecord{
Name: p.Name,
DID: p.DID,
PubKey: p.PubKey,
ExpiryDate: p.ExpiryDate,
}
payload, _ := json.Marshal(dht)
payload, _ := json.Marshal(p.PeerRecordPayload)
if ok, _ := common.Verify(pubKey, payload, p.Signature); !ok { // verify the minimal payload was signed with pubKey
fmt.Println("Verify")
if ok, _ := pubKey.Verify(payload, p.Signature); !ok { // verify the minimal payload was signed with pubKey
return pubKey, errors.New("invalid signature")
}
return pubKey, nil
@@ -79,7 +65,6 @@ func (pr *PeerRecord) ExtractPeer(ourkey string, key string, pubKey crypto.PubKe
if err != nil {
return false, nil, err
}
fmt.Println("ExtractPeer MarshalPublicKey")
rel := pp.NONE
if ourkey == key { // the key matches ours, so this record describes our own peer
rel = pp.SELF
@@ -90,7 +75,6 @@ func (pr *PeerRecord) ExtractPeer(ourkey string, key string, pubKey crypto.PubKe
UUID: pr.DID,
Name: pr.Name,
},
State: pp.ONLINE,
Relation: rel, // VERIFY: overwrites nothing
PeerID: pr.PeerID,
PublicKey: base64.StdEncoding.EncodeToString(pubBytes),
@@ -99,27 +83,30 @@ func (pr *PeerRecord) ExtractPeer(ourkey string, key string, pubKey crypto.PubKe
NATSAddress: pr.NATSAddress,
WalletAddress: pr.WalletAddress,
}
if time.Now().UTC().After(pr.ExpiryDate) { // is expired
p.State = pp.OFFLINE // then is considers OFFLINE
}
b, err := json.Marshal(p)
if err != nil {
return pp.SELF == p.Relation, nil, err
}
go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.PEER,
Method: int(tools.CREATE_PEER),
Payload: b,
})
if p.State == pp.OFFLINE {
if time.Now().UTC().After(pr.ExpiryDate) {
return pp.SELF == p.Relation, nil, errors.New("peer " + key + " is offline")
}
go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.PEER,
Method: int(tools.CREATE_RESOURCE),
SearchAttr: "peer_id",
Payload: b,
})
return pp.SELF == p.Relation, p, nil
}
type GetValue struct {
Key string `json:"key"`
Key string `json:"key"`
PeerID peer.ID `json:"peer_id"`
Name string `json:"name,omitempty"`
Search bool `json:"search,omitempty"`
}
type GetResponse struct {
@@ -127,162 +114,237 @@ type GetResponse struct {
Records map[string]PeerRecord `json:"records,omitempty"`
}
func (ix *IndexerService) genKey(did string) string {
return "/node/" + did
}
func (ix *IndexerService) genNameKey(name string) string {
return "/name/" + name
}
func (ix *IndexerService) genPIDKey(peerID string) string {
return "/pid/" + peerID
}
func (ix *IndexerService) initNodeHandler() {
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleNodeHeartbeat)
logger := oclib.GetLogger()
logger.Info().Msg("Init Node Handler")
// Each heartbeat from a node carries a freshly signed PeerRecord.
// Republish it to the DHT so the record never expires as long as the node
// is alive — no separate publish stream needed from the node side.
ix.AfterHeartbeat = func(pid peer.ID) {
ctx1, cancel1 := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel1()
res, err := ix.DHT.GetValue(ctx1, ix.genPIDKey(pid.String()))
if err != nil {
logger.Warn().Err(err).Str("peer", pid.String()).Msg("indexer: pid index lookup failed")
return
}
did := string(res)
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel2()
res, err = ix.DHT.GetValue(ctx2, ix.genKey(did))
if err != nil {
logger.Warn().Err(err).Str("did", did).Msg("indexer: DID record lookup failed")
return
}
var rec PeerRecord
if err := json.Unmarshal(res, &rec); err != nil {
logger.Warn().Err(err).Str("peer", pid.String()).Msg("indexer: heartbeat record unmarshal failed")
return
}
if _, err := rec.Verify(); err != nil {
logger.Warn().Err(err).Str("peer", pid.String()).Msg("indexer: heartbeat record signature invalid")
return
}
data, err := json.Marshal(rec)
if err != nil {
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()
logger.Info().Msg("REFRESH PutValue " + ix.genKey(rec.DID))
if err := ix.DHT.PutValue(ctx, ix.genKey(rec.DID), data); err != nil {
logger.Warn().Err(err).Str("did", rec.DID).Msg("indexer: DHT refresh failed")
return
}
if rec.Name != "" {
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
ix.DHT.PutValue(ctx2, ix.genNameKey(rec.Name), []byte(rec.DID))
cancel2()
}
if rec.PeerID != "" {
ctx3, cancel3 := context.WithTimeout(context.Background(), 10*time.Second)
ix.DHT.PutValue(ctx3, ix.genPIDKey(rec.PeerID), []byte(rec.DID))
cancel3()
}
}
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat)
ix.Host.SetStreamHandler(common.ProtocolPublish, ix.handleNodePublish)
ix.Host.SetStreamHandler(common.ProtocolGet, ix.handleNodeGet)
ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
}
func (ix *IndexerService) handleNodePublish(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var rec PeerRecord
if err := json.NewDecoder(s).Decode(&rec); err != nil {
continue
}
rec2 := PeerRecord{
Name: rec.Name,
DID: rec.DID, // REAL PEER ID
PubKey: rec.PubKey,
PeerID: rec.PeerID,
}
if _, err := rec2.Verify(); err != nil {
logger.Err(err)
continue
}
if rec.PeerID == "" || rec.ExpiryDate.Before(time.Now().UTC()) { // already expired
logger.Warn().Msg(rec.PeerID + " is expired.")
continue
}
pid, err := peer.Decode(rec.PeerID)
if err != nil {
continue
}
ix.StreamMU.Lock()
var rec PeerRecord
if err := json.NewDecoder(s).Decode(&rec); err != nil {
logger.Err(err)
return
}
if _, err := rec.Verify(); err != nil {
logger.Err(err)
return
}
if rec.PeerID == "" || rec.ExpiryDate.Before(time.Now().UTC()) {
logger.Err(errors.New(rec.PeerID + " is expired."))
return
}
pid, err := peer.Decode(rec.PeerID)
if err != nil {
return
}
if ix.StreamRecords[common.ProtocolPublish] == nil {
ix.StreamRecords[common.ProtocolPublish] = map[peer.ID]*common.StreamRecord[PeerRecord]{}
}
streams := ix.StreamRecords[common.ProtocolPublish]
ix.StreamMU.Lock()
defer ix.StreamMU.Unlock()
if ix.StreamRecords[common.ProtocolHeartbeat] == nil {
ix.StreamRecords[common.ProtocolHeartbeat] = map[peer.ID]*common.StreamRecord[PeerRecord]{}
}
streams := ix.StreamRecords[common.ProtocolHeartbeat]
if srec, ok := streams[pid]; ok {
srec.DID = rec.DID
srec.Record = rec
srec.HeartbeatStream.UptimeTracker.LastSeen = time.Now().UTC()
}
if srec, ok := streams[pid]; ok {
fmt.Println("UPDATE PUBLISH", pid)
srec.DID = rec.DID
srec.Record = rec
srec.LastSeen = time.Now().UTC()
} else {
fmt.Println("CREATE PUBLISH", pid)
streams[pid] = &common.StreamRecord[PeerRecord]{ // heartbeat will keep this record fresh
DID: rec.DID,
Record: rec,
LastSeen: time.Now().UTC(),
}
}
key := ix.genKey(rec.DID)
data, err := json.Marshal(rec)
if err != nil {
logger.Err(err)
return
}
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx, key, data); err != nil {
logger.Err(err)
cancel()
return
}
cancel()
if ix.LongLivedPubSubs[common.TopicPubSubNodeActivity] != nil && !rec.NoPub {
ad, err := peer.AddrInfoFromString("/ip4/" + conf.GetConfig().Hostname + "/tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + "/p2p/" + ix.Host.ID().String())
if err == nil {
if b, err := json.Marshal(common.TopicNodeActivityPub{
Disposer: *ad,
DID: rec.DID,
Name: rec.Name,
PeerID: pid.String(),
NodeActivity: pp.ONLINE,
}); err == nil {
ix.LongLivedPubSubs[common.TopicPubSubNodeActivity].Publish(context.Background(), b)
}
}
// Secondary index: /name/<name> → DID, so peers can resolve by human-readable name.
if rec.Name != "" {
ctx2, cancel2 := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx2, ix.genNameKey(rec.Name), []byte(rec.DID)); err != nil {
logger.Err(err).Str("name", rec.Name).Msg("indexer: failed to write name index")
}
if rec.TTL > 0 {
rec.NoPub = true
for _, ad := range common.StaticIndexers {
if ad.ID == s.Conn().RemotePeer() {
continue
}
if common.StreamIndexers[common.ProtocolPublish][ad.ID] == nil {
continue
}
stream := common.StreamIndexers[common.ProtocolPublish][ad.ID]
rec.TTL -= 1
if err := json.NewEncoder(stream.Stream).Encode(&rec); err != nil { // then publish on stream
continue
}
}
cancel2()
}
// Secondary index: /pid/<peerID> → DID, so peers can resolve by libp2p PeerID.
if rec.PeerID != "" {
ctx3, cancel3 := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx3, ix.genPIDKey(rec.PeerID), []byte(rec.DID)); err != nil {
logger.Err(err).Str("pid", rec.PeerID).Msg("indexer: failed to write pid index")
}
ix.StreamMU.Unlock()
cancel3()
}
}
func (ix *IndexerService) handleNodeGet(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
for {
var req GetValue
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err)
var req GetValue
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err)
return
}
resp := GetResponse{Found: false, Records: map[string]PeerRecord{}}
keys := []string{}
// Name substring search — scan in-memory connected nodes first, then DHT exact match.
if req.Name != "" {
if req.Search {
for _, did := range ix.LookupNameIndex(strings.ToLower(req.Name)) {
keys = append(keys, did)
}
} else {
// 2. DHT exact-name lookup: covers nodes that published but aren't currently connected.
nameCtx, nameCancel := context.WithTimeout(context.Background(), 5*time.Second)
if ch, err := ix.DHT.SearchValue(nameCtx, ix.genNameKey(req.Name)); err == nil {
for did := range ch {
keys = append(keys, string(did))
break
}
}
nameCancel()
}
} else if req.PeerID != "" {
pidCtx, pidCancel := context.WithTimeout(context.Background(), 5*time.Second)
if did, err := ix.DHT.GetValue(pidCtx, ix.genPIDKey(req.PeerID.String())); err == nil {
keys = append(keys, string(did))
}
pidCancel()
} else {
keys = append(keys, req.Key)
}
// DHT record fetch by DID key (covers exact-name and PeerID paths).
if len(keys) > 0 {
for _, k := range keys {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
c, err := ix.DHT.GetValue(ctx, ix.genKey(k))
cancel()
if err == nil {
var rec PeerRecord
if json.Unmarshal(c, &rec) == nil {
// Filter by PeerID only when one was explicitly specified.
if req.PeerID == "" || rec.PeerID == req.PeerID.String() {
resp.Records[rec.PeerID] = rec
}
}
} else if req.Name == "" && req.PeerID == "" {
logger.Err(err).Msg("Failed to fetch PeerRecord from DHT " + req.Key)
}
}
}
resp.Found = len(resp.Records) > 0
_ = json.NewEncoder(s).Encode(resp)
}
// handleGetNatives returns this indexer's configured native addresses,
// excluding any in the request's Exclude list.
func (ix *IndexerService) handleGetNatives(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
var req common.GetIndexerNativesRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("indexer get natives: decode")
return
}
excludeSet := make(map[string]struct{}, len(req.Exclude))
for _, e := range req.Exclude {
excludeSet[e] = struct{}{}
}
resp := common.GetIndexerNativesResponse{}
for _, addr := range strings.Split(conf.GetConfig().NativeIndexerAddresses, ",") {
addr = strings.TrimSpace(addr)
if addr == "" {
continue
}
if _, excluded := excludeSet[addr]; !excluded {
resp.Natives = append(resp.Natives, addr)
}
}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("indexer get natives: encode response")
}
}


@@ -0,0 +1,168 @@
package indexer
import (
"context"
"encoding/json"
"strings"
"sync"
"time"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
pubsub "github.com/libp2p/go-libp2p-pubsub"
pp "github.com/libp2p/go-libp2p/core/peer"
)
// TopicNameIndex is the GossipSub topic shared by regular indexers to exchange
// add/delete events for the distributed name→peerID mapping.
const TopicNameIndex = "oc-name-index"
// nameIndexDedupWindow suppresses re-emission of the same (action, name, peerID)
// tuple within this window, reducing duplicate events when a node is registered
// with multiple indexers simultaneously.
const nameIndexDedupWindow = 30 * time.Second
// NameIndexAction indicates whether a name mapping is being added or removed.
type NameIndexAction string
const (
NameIndexAdd NameIndexAction = "add"
NameIndexDelete NameIndexAction = "delete"
)
// NameIndexEvent is published on TopicNameIndex by each indexer when a node
// registers (add) or is evicted by the GC (delete).
type NameIndexEvent struct {
Action NameIndexAction `json:"action"`
Name string `json:"name"`
PeerID string `json:"peer_id"`
DID string `json:"did"`
}
// nameIndexState holds the local in-memory name index and the sender-side
// deduplication tracker.
type nameIndexState struct {
// index: name → peerID → DID, built from events received from all indexers.
index map[string]map[string]string
indexMu sync.RWMutex
// emitted tracks the last emission time for each (action, name, peerID) key
// to suppress duplicates within nameIndexDedupWindow.
emitted map[string]time.Time
emittedMu sync.Mutex
}
// shouldEmit returns true if the (action, name, peerID) tuple has not been
// emitted within nameIndexDedupWindow, updating the tracker if so.
func (s *nameIndexState) shouldEmit(action NameIndexAction, name, peerID string) bool {
key := string(action) + ":" + name + ":" + peerID
s.emittedMu.Lock()
defer s.emittedMu.Unlock()
if t, ok := s.emitted[key]; ok && time.Since(t) < nameIndexDedupWindow {
return false
}
s.emitted[key] = time.Now()
return true
}
// onEvent applies a received NameIndexEvent to the local index.
// "add" inserts/updates the mapping; "delete" removes it.
// Operations are idempotent — duplicate events from multiple indexers are harmless.
func (s *nameIndexState) onEvent(evt NameIndexEvent) {
if evt.Name == "" || evt.PeerID == "" {
return
}
s.indexMu.Lock()
defer s.indexMu.Unlock()
switch evt.Action {
case NameIndexAdd:
if s.index[evt.Name] == nil {
s.index[evt.Name] = map[string]string{}
}
s.index[evt.Name][evt.PeerID] = evt.DID
case NameIndexDelete:
if s.index[evt.Name] != nil {
delete(s.index[evt.Name], evt.PeerID)
if len(s.index[evt.Name]) == 0 {
delete(s.index, evt.Name)
}
}
}
}
// initNameIndex joins TopicNameIndex and starts consuming events.
// Must be called after ix.PS is ready.
func (ix *IndexerService) initNameIndex(ps *pubsub.PubSub) {
logger := oclib.GetLogger()
ix.nameIndex = &nameIndexState{
index: map[string]map[string]string{},
emitted: map[string]time.Time{},
}
ps.RegisterTopicValidator(TopicNameIndex, func(_ context.Context, _ pp.ID, _ *pubsub.Message) bool {
return true
})
topic, err := ps.Join(TopicNameIndex)
if err != nil {
logger.Err(err).Msg("name index: failed to join topic")
return
}
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Lock()
ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex] = topic
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.Unlock()
common.SubscribeEvents(
ix.LongLivedStreamRecordedService.LongLivedPubSubService,
context.Background(),
TopicNameIndex,
-1,
func(_ context.Context, evt NameIndexEvent, _ string) {
ix.nameIndex.onEvent(evt)
},
)
}
// publishNameEvent emits a NameIndexEvent on TopicNameIndex, subject to the
// sender-side deduplication window.
func (ix *IndexerService) publishNameEvent(action NameIndexAction, name, peerID, did string) {
if ix.nameIndex == nil || name == "" || peerID == "" {
return
}
if !ix.nameIndex.shouldEmit(action, name, peerID) {
return
}
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RLock()
topic := ix.LongLivedStreamRecordedService.LongLivedPubSubService.LongLivedPubSubs[TopicNameIndex]
ix.LongLivedStreamRecordedService.LongLivedPubSubService.PubsubMu.RUnlock()
if topic == nil {
return
}
evt := NameIndexEvent{Action: action, Name: name, PeerID: peerID, DID: did}
b, err := json.Marshal(evt)
if err != nil {
return
}
_ = topic.Publish(context.Background(), b)
}
// LookupNameIndex searches the distributed name index for peers whose name
// contains needle (case-insensitive). Returns peerID → DID for matched peers.
// Returns nil if the name index is not initialised (e.g. native indexers).
func (ix *IndexerService) LookupNameIndex(needle string) map[string]string {
if ix.nameIndex == nil {
return nil
}
result := map[string]string{}
needleLow := strings.ToLower(needle)
ix.nameIndex.indexMu.RLock()
defer ix.nameIndex.indexMu.RUnlock()
for name, peers := range ix.nameIndex.index {
if strings.Contains(strings.ToLower(name), needleLow) {
for peerID, did := range peers {
result[peerID] = did
}
}
}
return result
}
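The lookup above is a case-insensitive substring scan over the gossiped name → (peerID → DID) index. A self-contained sketch of just that matching logic (the sample names and DIDs are made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// lookup mirrors the distributed name index search: case-insensitive
// substring match over name → (peerID → DID), merging all matched peers.
func lookup(index map[string]map[string]string, needle string) map[string]string {
	result := map[string]string{}
	needleLow := strings.ToLower(needle)
	for name, peers := range index {
		if strings.Contains(strings.ToLower(name), needleLow) {
			for peerID, did := range peers {
				result[peerID] = did
			}
		}
	}
	return result
}

func main() {
	index := map[string]map[string]string{
		"Compute-Lyon":  {"peer1": "did:oc:1"},
		"compute-paris": {"peer2": "did:oc:2"},
		"storage-lyon":  {"peer3": "did:oc:3"},
	}
	fmt.Println(len(lookup(index, "COMPUTE"))) // matches both compute nodes → 2
	fmt.Println(lookup(index, "lyon")["peer3"]) // → did:oc:3
}
```

Note the cost is linear in the number of distinct names; that is acceptable here because the index only holds currently registered nodes.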


@@ -0,0 +1,579 @@
package indexer
import (
"context"
"encoding/json"
"errors"
"fmt"
"math/rand"
"slices"
"strings"
"sync"
"time"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
pubsub "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
)
const (
// IndexerTTL is the lifetime of a live-indexer cache entry. Set to 50% above
// the recommended 60s heartbeat interval so a single delayed renewal does not
// evict a healthy indexer from the native's cache.
IndexerTTL = 90 * time.Second
// offloadInterval is how often the native checks if it can release responsible peers.
offloadInterval = 30 * time.Second
// dhtRefreshInterval is how often the background goroutine queries the DHT for
// known-but-expired indexer entries (written by neighbouring natives).
dhtRefreshInterval = 30 * time.Second
// maxFallbackPeers caps how many peers the native will accept in self-delegation
// mode. Beyond this limit the native refuses to act as a fallback indexer so it
// is not overwhelmed during prolonged indexer outages.
maxFallbackPeers = 50
)
// liveIndexerEntry tracks a registered indexer in the native's in-memory cache and DHT.
type liveIndexerEntry struct {
PeerID string `json:"peer_id"`
Addr string `json:"addr"`
ExpiresAt time.Time `json:"expires_at"`
}
// NativeState holds runtime state specific to native indexer operation.
type NativeState struct {
liveIndexers map[string]*liveIndexerEntry // keyed by PeerID, local cache with TTL
liveIndexersMu sync.RWMutex
responsiblePeers map[pp.ID]struct{} // peers for which the native is fallback indexer
responsibleMu sync.RWMutex
// knownPeerIDs accumulates all indexer PeerIDs ever seen (local stream or gossip).
// Used by refreshIndexersFromDHT to re-hydrate expired entries from the shared DHT,
// including entries written by other natives.
knownPeerIDs map[string]string
knownMu sync.RWMutex
}
func newNativeState() *NativeState {
return &NativeState{
liveIndexers: map[string]*liveIndexerEntry{},
responsiblePeers: map[pp.ID]struct{}{},
knownPeerIDs: map[string]string{},
}
}
// IndexerRecordValidator validates indexer DHT entries under the "indexer" namespace.
type IndexerRecordValidator struct{}
func (v IndexerRecordValidator) Validate(_ string, value []byte) error {
var e liveIndexerEntry
if err := json.Unmarshal(value, &e); err != nil {
return err
}
if e.Addr == "" {
return errors.New("missing addr")
}
if e.ExpiresAt.Before(time.Now().UTC()) {
return errors.New("expired indexer record")
}
return nil
}
func (v IndexerRecordValidator) Select(_ string, values [][]byte) (int, error) {
var newest time.Time
index := 0
for i, val := range values {
var e liveIndexerEntry
if err := json.Unmarshal(val, &e); err != nil {
continue
}
if e.ExpiresAt.After(newest) {
newest = e.ExpiresAt
index = i
}
}
return index, nil
}
// InitNative registers native-specific stream handlers and starts background loops.
// Must be called after DHT is initialized.
func (ix *IndexerService) InitNative() {
ix.Native = newNativeState()
ix.Host.SetStreamHandler(common.ProtocolHeartbeat, ix.HandleHeartbeat) // specific heartbeat for Indexer.
ix.Host.SetStreamHandler(common.ProtocolNativeSubscription, ix.handleNativeSubscription)
ix.Host.SetStreamHandler(common.ProtocolNativeGetIndexers, ix.handleNativeGetIndexers)
ix.Host.SetStreamHandler(common.ProtocolNativeConsensus, ix.handleNativeConsensus)
ix.Host.SetStreamHandler(common.ProtocolNativeGetPeers, ix.handleNativeGetPeers)
ix.Host.SetStreamHandler(common.ProtocolIndexerGetNatives, ix.handleGetNatives)
ix.subscribeIndexerRegistry()
// Ensure long connections to other configured natives (native-to-native mesh).
common.EnsureNativePeers(ix.Host)
go ix.runOffloadLoop()
go ix.refreshIndexersFromDHT()
}
// subscribeIndexerRegistry joins the PubSub topic used by natives to gossip newly
// registered indexer PeerIDs to one another, enabling cross-native DHT discovery.
func (ix *IndexerService) subscribeIndexerRegistry() {
logger := oclib.GetLogger()
ix.PS.RegisterTopicValidator(common.TopicIndexerRegistry, func(_ context.Context, _ pp.ID, msg *pubsub.Message) bool {
// Reject empty or syntactically invalid multiaddrs before they reach the
// message loop. A compromised native could otherwise gossip arbitrary data.
addr := string(msg.Data)
if addr == "" {
return false
}
_, err := pp.AddrInfoFromString(addr)
return err == nil
})
topic, err := ix.PS.Join(common.TopicIndexerRegistry)
if err != nil {
logger.Err(err).Msg("native: failed to join indexer registry topic")
return
}
sub, err := topic.Subscribe()
if err != nil {
logger.Err(err).Msg("native: failed to subscribe to indexer registry topic")
return
}
ix.PubsubMu.Lock()
ix.LongLivedPubSubs[common.TopicIndexerRegistry] = topic
ix.PubsubMu.Unlock()
go func() {
for {
msg, err := sub.Next(context.Background())
if err != nil {
return
}
addr := string(msg.Data)
if addr == "" {
continue
}
if peer, err := pp.AddrInfoFromString(addr); err == nil {
ix.Native.knownMu.Lock()
ix.Native.knownPeerIDs[peer.ID.String()] = addr
ix.Native.knownMu.Unlock()
}
// A neighbouring native registered this PeerID; add to known set for DHT refresh.
}
}()
}
// handleNativeSubscription stores an indexer's alive registration in the local cache
// immediately, then persists it to the DHT asynchronously.
// The stream is temporary: indexer sends one IndexerRegistration and closes.
func (ix *IndexerService) handleNativeSubscription(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
logger.Info().Msg("Subscription")
var reg common.IndexerRegistration
if err := json.NewDecoder(s).Decode(&reg); err != nil {
logger.Err(err).Msg("native subscription: decode")
return
}
logger.Info().Msg("Subscription " + reg.Addr)
if reg.Addr == "" {
logger.Error().Msg("native subscription: missing addr")
return
}
if reg.PeerID == "" {
ad, err := pp.AddrInfoFromString(reg.Addr)
if err != nil {
logger.Err(err).Msg("native subscription: invalid addr")
return
}
reg.PeerID = ad.ID.String()
}
// Build entry with a fresh TTL — must happen before the cache write so the 66s
// window is not consumed by DHT retries.
entry := &liveIndexerEntry{
PeerID: reg.PeerID,
Addr: reg.Addr,
ExpiresAt: time.Now().UTC().Add(IndexerTTL),
}
// Update local cache and known set immediately so concurrent GetIndexers calls
// can already see this indexer without waiting for the DHT write to complete.
ix.Native.liveIndexersMu.Lock()
_, isRenewal := ix.Native.liveIndexers[reg.PeerID]
ix.Native.liveIndexers[reg.PeerID] = entry
count := len(ix.Native.liveIndexers)
ix.Native.liveIndexersMu.Unlock()
ix.Native.knownMu.Lock()
ix.Native.knownPeerIDs[reg.PeerID] = reg.Addr
ix.Native.knownMu.Unlock()
// Gossip PeerID to neighbouring natives so they discover it via DHT.
ix.PubsubMu.RLock()
topic := ix.LongLivedPubSubs[common.TopicIndexerRegistry]
ix.PubsubMu.RUnlock()
if topic != nil {
if err := topic.Publish(context.Background(), []byte(reg.Addr)); err != nil {
logger.Err(err).Msg("native subscription: registry gossip publish")
}
}
if isRenewal {
logger.Debug().Str("peer", reg.PeerID).Int("indexers", count).Msg("native: indexer TTL renewed")
} else {
logger.Info().Str("peer", reg.PeerID).Int("indexers", count).Msg("native: indexer registered")
}
// Persist in DHT asynchronously — retries must not block the handler or consume
// the local cache TTL.
key := ix.genIndexerKey(reg.PeerID)
data, err := json.Marshal(entry)
if err != nil {
logger.Err(err).Msg("native subscription: marshal entry")
return
}
go func() {
for {
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
if err := ix.DHT.PutValue(ctx, key, data); err != nil {
cancel()
logger.Err(err).Msg("native subscription: DHT put " + key)
if strings.Contains(err.Error(), "failed to find any peer in table") {
time.Sleep(10 * time.Second)
continue
}
return
}
cancel()
return
}
}()
}
// handleNativeGetIndexers returns this native's own list of reachable indexers.
// Self-delegation (native acting as temporary fallback indexer) is only permitted
// for nodes — never for peers that are themselves registered indexers in knownPeerIDs.
// The consensus across natives is the responsibility of the requesting node/indexer.
func (ix *IndexerService) handleNativeGetIndexers(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
var req common.GetIndexersRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native get indexers: decode")
return
}
if req.Count <= 0 {
req.Count = 3
}
callerPeerID := s.Conn().RemotePeer().String()
reachable := ix.reachableLiveIndexers(req.Count, callerPeerID)
var resp common.GetIndexersResponse
if len(reachable) == 0 {
// No live indexers reachable — try to self-delegate.
if ix.selfDelegate(s.Conn().RemotePeer(), &resp) {
logger.Info().Str("peer", callerPeerID).Msg("native: no indexers, acting as fallback for node")
} else {
// Fallback pool saturated: return empty so the caller retries another
// native instead of piling more load onto this one.
logger.Warn().Str("peer", callerPeerID).Int("pool", maxFallbackPeers).Msg(
"native: fallback pool saturated, refusing self-delegation")
}
} else {
rand.Shuffle(len(reachable), func(i, j int) { reachable[i], reachable[j] = reachable[j], reachable[i] })
if req.Count > len(reachable) {
req.Count = len(reachable)
}
resp.Indexers = reachable[:req.Count]
}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native get indexers: encode response")
}
}
// handleNativeConsensus answers a consensus challenge from a node/indexer.
// It returns:
// - Trusted: which of the candidates it considers alive.
// - Suggestions: extras it knows and trusts that were not in the candidate list.
func (ix *IndexerService) handleNativeConsensus(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
var req common.ConsensusRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native consensus: decode")
return
}
myList := ix.reachableLiveIndexers(-1, s.Conn().RemotePeer().String())
mySet := make(map[string]struct{}, len(myList))
for _, addr := range myList {
mySet[addr] = struct{}{}
}
trusted := []string{}
candidateSet := make(map[string]struct{}, len(req.Candidates))
for _, addr := range req.Candidates {
candidateSet[addr] = struct{}{}
if _, ok := mySet[addr]; ok {
trusted = append(trusted, addr) // candidate we also confirm as reachable
}
}
// Extras we trust but that the requester didn't include → suggestions.
suggestions := []string{}
for _, addr := range myList {
if _, inCandidates := candidateSet[addr]; !inCandidates {
suggestions = append(suggestions, addr)
}
}
resp := common.ConsensusResponse{Trusted: trusted, Suggestions: suggestions}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native consensus: encode response")
}
}
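The trusted/suggestions split computed above is plain set arithmetic: trusted is the intersection of the requester's candidates with this native's reachable list, and suggestions are the reachable extras the requester did not mention. A minimal sketch of just that partition (the multiaddrs are placeholders):

```go
package main

import "fmt"

// consensusSplit sketches the native's answer to a consensus challenge:
// trusted = candidates ∩ myList, suggestions = myList \ candidates.
func consensusSplit(candidates, myList []string) (trusted, suggestions []string) {
	mySet := map[string]struct{}{}
	for _, a := range myList {
		mySet[a] = struct{}{}
	}
	candidateSet := map[string]struct{}{}
	for _, a := range candidates {
		candidateSet[a] = struct{}{}
		if _, ok := mySet[a]; ok {
			trusted = append(trusted, a) // candidate we also confirm reachable
		}
	}
	for _, a := range myList {
		if _, ok := candidateSet[a]; !ok {
			suggestions = append(suggestions, a) // extra the requester missed
		}
	}
	return trusted, suggestions
}

func main() {
	trusted, suggestions := consensusSplit(
		[]string{"/ip4/a", "/ip4/b"}, // requester's candidates
		[]string{"/ip4/b", "/ip4/c"}, // this native's reachable list
	)
	fmt.Println(trusted)     // [/ip4/b]
	fmt.Println(suggestions) // [/ip4/c]
}
```

Candidates the native cannot confirm (here "/ip4/a") are simply absent from both lists — the requester decides what to do with unconfirmed entries.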
// selfDelegate marks the caller as a responsible peer and exposes this native's own
// address as its temporary indexer. Returns false when the fallback pool is saturated
// (maxFallbackPeers reached) — the caller must return an empty response so the node
// retries later instead of pinning indefinitely to an overloaded native.
func (ix *IndexerService) selfDelegate(remotePeer pp.ID, resp *common.GetIndexersResponse) bool {
ix.Native.responsibleMu.Lock()
defer ix.Native.responsibleMu.Unlock()
if len(ix.Native.responsiblePeers) >= maxFallbackPeers {
return false
}
ix.Native.responsiblePeers[remotePeer] = struct{}{}
resp.IsSelfFallback = true
resp.Indexers = []string{ix.Host.Addrs()[len(ix.Host.Addrs())-1].String() + "/p2p/" + ix.Host.ID().String()}
return true
}
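Self-delegation is thus a bounded-admission check: the native accepts fallback responsibility only while the pool is under `maxFallbackPeers`. A standalone sketch of that admission rule, using a hypothetical `fallbackPool` in place of the real `NativeState` (which also guards the map with a mutex):

```go
package main

import "fmt"

// fallbackPool sketches the native's bounded self-delegation: admit a caller
// only while the responsible-peer pool is under the cap.
type fallbackPool struct {
	max   int
	peers map[string]struct{}
}

func (p *fallbackPool) admit(peerID string) bool {
	if len(p.peers) >= p.max {
		return false // saturated — caller must retry another native
	}
	p.peers[peerID] = struct{}{}
	return true
}

func main() {
	pool := &fallbackPool{max: 2, peers: map[string]struct{}{}}
	fmt.Println(pool.admit("peerA")) // true
	fmt.Println(pool.admit("peerB")) // true
	fmt.Println(pool.admit("peerC")) // false — pool full
}
```

Admission is idempotent for an already-admitted peer (re-inserting the same key), so a renewing fallback node never gets evicted by its own retry.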
// reachableLiveIndexers returns the multiaddrs of non-expired, pingable indexers
// from the local cache (kept fresh by refreshIndexersFromDHT in background).
func (ix *IndexerService) reachableLiveIndexers(count int, from ...string) []string {
ix.Native.liveIndexersMu.RLock()
now := time.Now().UTC()
candidates := []*liveIndexerEntry{}
for _, e := range ix.Native.liveIndexers {
if e.ExpiresAt.After(now) && !slices.Contains(from, e.PeerID) {
candidates = append(candidates, e)
}
}
ix.Native.liveIndexersMu.RUnlock()
if (count > 0 && len(candidates) < count) || count < 0 {
ix.Native.knownMu.RLock()
for k, v := range ix.Native.knownPeerIDs {
// Include peers whose liveIndexers entry is absent OR expired.
// A non-nil but expired entry means the peer was once known but
// has since timed out — PeerIsAlive below will decide if it's back.
if !slices.Contains(from, k) {
candidates = append(candidates, &liveIndexerEntry{
PeerID: k,
Addr: v,
})
}
}
ix.Native.knownMu.RUnlock()
}
reachable := []string{}
for _, e := range candidates {
ad, err := pp.AddrInfoFromString(e.Addr)
if err != nil {
continue
}
if common.PeerIsAlive(ix.Host, *ad) {
reachable = append(reachable, e.Addr)
}
}
return reachable
}
// refreshIndexersFromDHT runs in background and queries the shared DHT for every known
// indexer PeerID whose local cache entry is missing or expired. This supplements the
// local cache with entries written by neighbouring natives.
func (ix *IndexerService) refreshIndexersFromDHT() {
t := time.NewTicker(dhtRefreshInterval)
defer t.Stop()
logger := oclib.GetLogger()
for range t.C {
ix.Native.knownMu.RLock()
peerIDs := make([]string, 0, len(ix.Native.knownPeerIDs))
for pid := range ix.Native.knownPeerIDs {
peerIDs = append(peerIDs, pid)
}
ix.Native.knownMu.RUnlock()
now := time.Now().UTC()
for _, pid := range peerIDs {
ix.Native.liveIndexersMu.RLock()
existing := ix.Native.liveIndexers[pid]
ix.Native.liveIndexersMu.RUnlock()
if existing != nil && existing.ExpiresAt.After(now) {
continue // still fresh in local cache
}
key := ix.genIndexerKey(pid)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
ch, err := ix.DHT.SearchValue(ctx, key)
if err != nil {
cancel()
continue
}
var best *liveIndexerEntry
for b := range ch {
var e liveIndexerEntry
if err := json.Unmarshal(b, &e); err != nil {
continue
}
if e.ExpiresAt.After(time.Now().UTC()) {
if best == nil || e.ExpiresAt.After(best.ExpiresAt) {
best = &e
}
}
}
cancel()
if best != nil {
ix.Native.liveIndexersMu.Lock()
ix.Native.liveIndexers[best.PeerID] = best
ix.Native.liveIndexersMu.Unlock()
logger.Info().Str("peer", best.PeerID).Msg("native: refreshed indexer from DHT")
} else {
// DHT has no fresh entry — peer is gone, prune from known set.
ix.Native.knownMu.Lock()
delete(ix.Native.knownPeerIDs, pid)
ix.Native.knownMu.Unlock()
logger.Info().Str("peer", pid).Msg("native: pruned stale peer from knownPeerIDs")
}
}
}
}
func (ix *IndexerService) genIndexerKey(peerID string) string {
return "/indexer/" + peerID
}
// runOffloadLoop periodically checks if real indexers are available and releases
// responsible peers so they can reconnect to actual indexers on their next attempt.
func (ix *IndexerService) runOffloadLoop() {
t := time.NewTicker(offloadInterval)
defer t.Stop()
logger := oclib.GetLogger()
for range t.C {
ix.Native.responsibleMu.RLock()
count := len(ix.Native.responsiblePeers)
ix.Native.responsibleMu.RUnlock()
if count == 0 {
continue
}
ix.Native.responsibleMu.RLock()
peerIDS := []string{}
for p := range ix.Native.responsiblePeers {
peerIDS = append(peerIDS, p.String())
}
ix.Native.responsibleMu.RUnlock()
if len(ix.reachableLiveIndexers(-1, peerIDS...)) > 0 {
ix.Native.responsibleMu.RLock()
released := ix.Native.responsiblePeers
ix.Native.responsibleMu.RUnlock()
// Reset (not Close) heartbeat streams of released peers.
// Close() only half-closes the native's write direction — the peer's write
// direction stays open and sendHeartbeat never sees an error.
// Reset() abruptly terminates both directions, making the peer's next
// json.Encode return an error which triggers replenishIndexersFromNative.
ix.StreamMU.Lock()
if streams := ix.StreamRecords[common.ProtocolHeartbeat]; streams != nil {
for pid := range released {
if rec, ok := streams[pid]; ok {
if rec.HeartbeatStream != nil && rec.HeartbeatStream.Stream != nil {
rec.HeartbeatStream.Stream.Reset()
}
ix.Native.responsibleMu.Lock()
delete(ix.Native.responsiblePeers, pid)
ix.Native.responsibleMu.Unlock()
delete(streams, pid)
logger.Info().Str("peer", pid.String()).Str("proto", string(common.ProtocolHeartbeat)).Msg(
"native: offload — stream reset, peer will reconnect to real indexer")
} else {
// No recorded heartbeat stream for this peer: either it never
// passed the score check (new peer, uptime=0 → score<75) or the
// stream was GC'd. We cannot send a Reset signal, so close the
// whole connection instead — this makes the peer's sendHeartbeat
// return an error, which triggers replenishIndexersFromNative and
// migrates it to a real indexer.
ix.Native.responsibleMu.Lock()
delete(ix.Native.responsiblePeers, pid)
ix.Native.responsibleMu.Unlock()
go ix.Host.Network().ClosePeer(pid)
logger.Info().Str("peer", pid.String()).Msg(
"native: offload — no heartbeat stream, closing connection so peer re-requests real indexers")
}
}
}
ix.StreamMU.Unlock()
logger.Info().Int("released", count).Msg("native: offloaded responsible peers to real indexers")
}
}
}
// handleNativeGetPeers returns a random selection of this native's known native
// contacts, excluding any in the request's Exclude list.
func (ix *IndexerService) handleNativeGetPeers(s network.Stream) {
defer s.Close()
logger := oclib.GetLogger()
var req common.GetNativePeersRequest
if err := json.NewDecoder(s).Decode(&req); err != nil {
logger.Err(err).Msg("native get peers: decode")
return
}
if req.Count <= 0 {
req.Count = 1
}
excludeSet := make(map[string]struct{}, len(req.Exclude))
for _, e := range req.Exclude {
excludeSet[e] = struct{}{}
}
common.StreamNativeMu.RLock()
candidates := make([]string, 0, len(common.StaticNatives))
for addr := range common.StaticNatives {
if _, excluded := excludeSet[addr]; !excluded {
candidates = append(candidates, addr)
}
}
common.StreamNativeMu.RUnlock()
rand.Shuffle(len(candidates), func(i, j int) { candidates[i], candidates[j] = candidates[j], candidates[i] })
if req.Count > len(candidates) {
req.Count = len(candidates)
}
resp := common.GetNativePeersResponse{Peers: candidates[:req.Count]}
if err := json.NewEncoder(s).Encode(resp); err != nil {
logger.Err(err).Msg("native get peers: encode response")
}
}
// StartNativeRegistration starts a goroutine that periodically registers this
// indexer with all configured native indexers (every RecommendedHeartbeatInterval).


@@ -2,69 +2,101 @@ package indexer
import (
"context"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
oclib "cloud.o-forge.io/core/oc-lib"
dht "github.com/libp2p/go-libp2p-kad-dht"
pubsub "github.com/libp2p/go-libp2p-pubsub"
record "github.com/libp2p/go-libp2p-record"
"github.com/libp2p/go-libp2p/core/host"
pp "github.com/libp2p/go-libp2p/core/peer"
)
// IndexerService manages the indexer node's state: stream records, DHT, pubsub.
type IndexerService struct {
*common.LongLivedStreamRecordedService[PeerRecord]
PS *pubsub.PubSub
DHT *dht.IpfsDHT
isStrictIndexer bool
IsNative bool
Native *NativeState // non-nil when IsNative == true
nameIndex *nameIndexState
}
// NewIndexerService creates an IndexerService.
// If ps is nil, this is a strict indexer (no pre-existing gossip sub from a node).
func NewIndexerService(h host.Host, ps *pubsub.PubSub, maxNode int, isNative bool) *IndexerService {
logger := oclib.GetLogger()
logger.Info().Msg("open indexer mode...")
var err error
ix := &IndexerService{
LongLivedStreamRecordedService: common.NewStreamRecordedService[PeerRecord](h, maxNode),
isStrictIndexer: ps == nil,
IsNative: isNative,
}
if ps == nil {
ps, err = pubsub.NewGossipSub(context.Background(), ix.Host)
if err != nil {
panic(err) // can't run an indexer without a propagation pubsub
}
}
ix.PS = ps
// later TODO: every indexer launches a private replica of itself. DevOps.
if ix.isStrictIndexer && !isNative {
logger.Info().Msg("connect to indexers as strict indexer...")
common.ConnectToIndexers(h, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, ix.Host.ID())
logger.Info().Msg("subscribe to decentralized search flow as strict indexer...")
go ix.SubscribeToSearch(ix.PS, nil)
}
if !isNative {
logger.Info().Msg("init distributed name index...")
ix.initNameIndex(ps)
ix.LongLivedStreamRecordedService.AfterDelete = func(pid pp.ID, name, did string) {
ix.publishNameEvent(NameIndexDelete, name, pid.String(), did)
}
}
if ix.DHT, err = dht.New(
context.Background(),
ix.Host,
dht.Mode(dht.ModeServer),
dht.ProtocolPrefix("oc"), // private network
dht.Validator(record.NamespacedValidator{
"node": PeerRecordValidator{},
"indexer": IndexerRecordValidator{}, // for native indexer registry
"name": DefaultValidator{},
"pid": DefaultValidator{},
}),
); err != nil {
logger.Info().Msg(err.Error())
return nil
}
// InitNative must happen after DHT is ready.
if isNative {
ix.InitNative()
} else {
ix.initNodeHandler()
// Register with configured natives so this indexer appears in their cache.
if nativeAddrs := conf.GetConfig().NativeIndexerAddresses; nativeAddrs != "" {
common.StartNativeRegistration(ix.Host, nativeAddrs)
}
}
return ix
}
func (ix *IndexerService) Close() {
ix.DHT.Close()
ix.PS.UnregisterTopicValidator(common.TopicPubSubSearch)
if ix.nameIndex != nil {
ix.PS.UnregisterTopicValidator(TopicNameIndex)
}
for _, s := range ix.StreamRecords {
for _, ss := range s {
ss.Stream.Close()
ss.HeartbeatStream.Stream.Close()
}
}


@@ -0,0 +1,64 @@
package indexer
import (
"encoding/json"
"errors"
"time"
)
type DefaultValidator struct{}
func (v DefaultValidator) Validate(key string, value []byte) error {
return nil
}
func (v DefaultValidator) Select(key string, values [][]byte) (int, error) {
return 0, nil
}
type PeerRecordValidator struct{}
func (v PeerRecordValidator) Validate(key string, value []byte) error {
var rec PeerRecord
if err := json.Unmarshal(value, &rec); err != nil {
return errors.New("invalid json")
}
// PeerID must exist
if rec.PeerID == "" {
return errors.New("missing peerID")
}
// Expiry check
if rec.ExpiryDate.Before(time.Now().UTC()) {
return errors.New("record expired")
}
// Signature verification
if _, err := rec.Verify(); err != nil {
return errors.New("invalid signature")
}
return nil
}
func (v PeerRecordValidator) Select(key string, values [][]byte) (int, error) {
var newest time.Time
index := 0
for i, val := range values {
var rec PeerRecord
if err := json.Unmarshal(val, &rec); err != nil {
continue
}
if rec.ExpiryDate.After(newest) {
newest = rec.ExpiryDate
index = i
}
}
return index, nil
}


@@ -4,13 +4,102 @@ import (
"context"
"encoding/json"
"fmt"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/stream"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/config"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
func ListenNATS(n Node) {
type configPayload struct {
PeerID string `json:"source_peer_id"`
}
type executionConsidersPayload struct {
PeerIDs []string `json:"peer_ids"`
}
func ListenNATS(n *Node) {
tools.NewNATSCaller().ListenNats(map[tools.NATSMethod]func(tools.NATSResponse){
/*tools.VERIFY_RESOURCE: func(resp tools.NATSResponse) {
if resp.FromApp == config.GetAppName() {
return
}
if res, err := resources.ToResource(resp.Datatype.EnumIndex(), resp.Payload); err == nil {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
p := access.LoadOne(res.GetCreatorID())
realP := p.ToPeer()
if realP == nil {
return
} else if realP.Relation == peer.SELF {
pubKey, err := common.PubKeyFromString(realP.PublicKey) // extract pubkey from pubkey str
if err != nil {
return
}
ok, _ := pubKey.Verify(resp.Payload, res.GetSignature())
if b, err := json.Marshal(stream.Verify{
IsVerified: ok,
}); err == nil {
tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Method: int(tools.VERIFY_RESOURCE),
Payload: b,
})
}
} else if realP.Relation != peer.BLACKLIST {
n.StreamService.PublishVerifyResources(&resp.Datatype, resp.User, realP.PeerID, resp.Payload)
}
}
},*/
tools.CREATE_RESOURCE: func(resp tools.NATSResponse) {
if resp.FromApp == config.GetAppName() && resp.Datatype != tools.PEER && resp.Datatype != tools.WORKFLOW {
return
}
logger := oclib.GetLogger()
m := map[string]interface{}{}
err := json.Unmarshal(resp.Payload, &m)
if err != nil {
logger.Err(err)
return
}
p := &peer.Peer{}
p = p.Deserialize(m, p).(*peer.Peer)
ad, err := pp.AddrInfoFromString(p.StreamAddress)
if err != nil {
return
}
n.StreamService.Mu.Lock()
defer n.StreamService.Mu.Unlock()
if p.Relation == peer.PARTNER {
n.StreamService.ConnectToPartner(p.StreamAddress)
} else {
ps := common.ProtocolStream{}
for p, s := range n.StreamService.Streams {
m := map[pp.ID]*common.Stream{}
for k := range s {
if ad.ID != k {
m[k] = s[k]
} else {
s[k].Stream.Close()
}
}
ps[p] = m
}
n.StreamService.Streams = ps
}
},
tools.PROPALGATION_EVENT: func(resp tools.NATSResponse) {
fmt.Println("PROPALGATION")
if resp.FromApp == config.GetAppName() {
return
}
var propalgation tools.PropalgationMessage
err := json.Unmarshal(resp.Payload, &propalgation)
var dt *tools.DataType
@@ -18,27 +107,117 @@ func ListenNATS(n Node) {
dtt := tools.DataType(propalgation.DataType)
dt = &dtt
}
fmt.Println("PROPALGATION ACT", propalgation.Action, propalgation.Action == tools.PB_CREATE, err)
if err == nil {
switch propalgation.Action {
case tools.PB_CREATE:
case tools.PB_UPDATE:
case tools.PB_DELETE:
n.StreamService.ToPartnerPublishEvent(
case tools.PB_ADMIRALTY_CONFIG, tools.PB_MINIO_CONFIG:
var m configPayload
var proto protocol.ID = stream.ProtocolAdmiraltyConfigResource
if propalgation.Action == tools.PB_MINIO_CONFIG {
proto = stream.ProtocolMinioConfigResource
}
if err := json.Unmarshal(resp.Payload, &m); err == nil {
peers, _ := n.GetPeerRecord(context.Background(), m.PeerID)
for _, p := range peers {
n.StreamService.PublishCommon(&resp.Datatype, resp.User,
p.PeerID, proto, resp.Payload)
}
}
case tools.PB_CREATE, tools.PB_UPDATE, tools.PB_DELETE:
fmt.Println(propalgation.Action, dt, resp.User, propalgation.Payload)
fmt.Println(n.StreamService.ToPartnerPublishEvent(
context.Background(),
propalgation.Action,
dt, resp.User,
propalgation.Payload,
)
case tools.PB_SEARCH:
))
case tools.PB_CONSIDERS:
switch resp.Datatype {
case tools.BOOKING, tools.PURCHASE_RESOURCE, tools.WORKFLOW_EXECUTION:
var m executionConsidersPayload
if err := json.Unmarshal(resp.Payload, &m); err == nil {
for _, p := range m.PeerIDs {
peers, _ := n.GetPeerRecord(context.Background(), p)
for _, pp := range peers {
n.StreamService.PublishCommon(&resp.Datatype, resp.User,
pp.PeerID, stream.ProtocolConsidersResource, resp.Payload)
}
}
}
default:
// minio / admiralty config considers — route back to OriginID.
var m struct {
OriginID string `json:"origin_id"`
}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil && m.OriginID != "" {
peers, _ := n.GetPeerRecord(context.Background(), m.OriginID)
for _, p := range peers {
n.StreamService.PublishCommon(nil, resp.User,
p.PeerID, stream.ProtocolConsidersResource, propalgation.Payload)
}
}
}
case tools.PB_PLANNER:
m := map[string]interface{}{}
json.Unmarshal(propalgation.Payload, &m)
n.PubSubService.SearchPublishEvent(
context.Background(),
dt,
fmt.Sprintf("%v", m["type"]),
resp.User,
fmt.Sprintf("%v", m["search"]),
)
if err := json.Unmarshal(resp.Payload, &m); err == nil {
b := []byte{}
if len(m) > 1 {
b = resp.Payload
}
if m["peer_id"] == nil { // send to every active stream
n.StreamService.Mu.Lock()
if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil {
for pid := range n.StreamService.Streams[stream.ProtocolSendPlanner] {
n.StreamService.PublishCommon(nil, resp.User, pid.String(), stream.ProtocolSendPlanner, b)
}
}
} else {
n.StreamService.PublishCommon(nil, resp.User, fmt.Sprintf("%v", m["peer_id"]), stream.ProtocolSendPlanner, b)
}
n.StreamService.Mu.Unlock()
}
case tools.PB_CLOSE_PLANNER:
m := map[string]interface{}{}
if err := json.Unmarshal(resp.Payload, &m); err == nil {
n.StreamService.Mu.Lock()
if pid, err := pp.Decode(fmt.Sprintf("%v", m["peer_id"])); err == nil {
if n.StreamService.Streams[stream.ProtocolSendPlanner] != nil && n.StreamService.Streams[stream.ProtocolSendPlanner][pid] != nil {
n.StreamService.Streams[stream.ProtocolSendPlanner][pid].Stream.Close()
delete(n.StreamService.Streams[stream.ProtocolSendPlanner], pid)
}
}
n.StreamService.Mu.Unlock()
}
case tools.PB_SEARCH:
if propalgation.DataType == int(tools.PEER) {
m := map[string]interface{}{}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
if peers, err := n.GetPeerRecord(context.Background(), fmt.Sprintf("%v", m["search"])); err == nil {
for _, p := range peers {
if b, err := json.Marshal(p); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(tools.PEER),
Method: int(tools.SEARCH_EVENT),
Payload: b,
})
}
}
}
}
} else {
m := map[string]interface{}{}
if err := json.Unmarshal(propalgation.Payload, &m); err == nil {
n.PubSubService.SearchPublishEvent(
context.Background(),
dt,
fmt.Sprintf("%v", m["type"]),
resp.User,
fmt.Sprintf("%v", m["search"]),
)
}
}
}
}
},

View File

@@ -2,24 +2,28 @@ package node
import (
"context"
"crypto/sha256"
"encoding/json"
"errors"
"fmt"
"maps"
"oc-discovery/conf"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/indexer"
"oc-discovery/daemons/node/pubsub"
"oc-discovery/daemons/node/stream"
"sync"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
"github.com/google/uuid"
"github.com/libp2p/go-libp2p"
pubsubs "github.com/libp2p/go-libp2p-pubsub"
"github.com/libp2p/go-libp2p/core/crypto"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
type Node struct {
@@ -30,15 +34,18 @@ type Node struct {
StreamService *stream.StreamService
PeerID pp.ID
isIndexer bool
peerRecord *indexer.PeerRecord
Mu sync.RWMutex
}
func InitNode(isNode bool, isIndexer bool) (*Node, error) {
func InitNode(isNode bool, isIndexer bool, isNativeIndexer bool) (*Node, error) {
if !isNode && !isIndexer {
return nil, errors.New("wait... what? your node needs to be at least something; retry, we can't be friends in that case")
}
logger := oclib.GetLogger()
logger.Info().Msg("retrieving private key...")
priv, err := common.LoadKeyFromFilePrivate() // your node private key
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return nil, err
}
@@ -62,8 +69,11 @@ func InitNode(isNode bool, isIndexer bool) (*Node, error) {
node := &Node{
PeerID: h.ID(),
isIndexer: isIndexer,
LongLivedStreamRecordedService: common.NewStreamRecordedService[interface{}](h, 1000, false),
LongLivedStreamRecordedService: common.NewStreamRecordedService[interface{}](h, 1000),
}
// Register the bandwidth probe handler so any peer measuring this node's
// throughput can open a dedicated probe stream and read the echo.
h.SetStreamHandler(common.ProtocolBandwidthProbe, common.HandleBandwidthProbe)
var ps *pubsubs.PubSub
if isNode {
logger.Info().Msg("generate opencloud node...")
@@ -72,8 +82,30 @@ func InitNode(isNode bool, isIndexer bool) (*Node, error) {
panic(err) // can't run your node without a propagation pubsub for node state.
}
node.PS = ps
// buildRecord returns a fresh signed PeerRecord as JSON, embedded in each
// heartbeat so the receiving indexer can republish it to the DHT directly.
// peerRecord is nil until claimInfo runs, so the first ~20s heartbeats carry
// no record — that's fine, claimInfo publishes once synchronously at startup.
buildRecord := func() json.RawMessage {
if node.peerRecord == nil {
return nil
}
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return nil
}
fresh := *node.peerRecord
fresh.PeerRecordPayload.ExpiryDate = time.Now().UTC().Add(2 * time.Minute)
payload, _ := json.Marshal(fresh.PeerRecordPayload)
fresh.Signature, err = priv.Sign(payload)
if err != nil {
return nil
}
b, _ := json.Marshal(fresh)
return json.RawMessage(b)
}
logger.Info().Msg("connect to indexers...")
common.ConnectToIndexers(node.Host, 0, 5, node.PeerID) // TODO : make var to change how many indexers are allowed.
common.ConnectToIndexers(node.Host, conf.GetConfig().MinIndexer, conf.GetConfig().MaxIndexer, node.PeerID, buildRecord)
logger.Info().Msg("claims my node...")
if _, err := node.claimInfo(conf.GetConfig().Name, conf.GetConfig().Hostname); err != nil {
panic(err)
@@ -95,14 +127,14 @@ func InitNode(isNode bool, isIndexer bool) (*Node, error) {
}
}
node.SubscribeToSearch(node.PS, &f)
logger.Info().Msg("connect to NATS")
go ListenNATS(node)
logger.Info().Msg("Node is actually running.")
}
if isIndexer {
logger.Info().Msg("generate opencloud indexer...")
node.IndexerService = indexer.NewIndexerService(node.Host, ps, 5)
node.IndexerService = indexer.NewIndexerService(node.Host, ps, 500, isNativeIndexer)
}
logger.Info().Msg("connect to NATS")
ListenNATS(*node)
logger.Info().Msg("Node is actually running.")
return node, nil
}
@@ -118,30 +150,33 @@ func (d *Node) Close() {
func (d *Node) publishPeerRecord(
rec *indexer.PeerRecord,
) error {
priv, err := common.LoadKeyFromFilePrivate() // your node private key
priv, err := tools.LoadKeyFromFilePrivate() // your node private key
if err != nil {
return err
}
if common.StreamIndexers[common.ProtocolPublish] == nil {
return errors.New("no protocol Publish is set up on the node")
}
common.StreamMuIndexes.RLock()
indexerSnapshot := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
for _, ad := range common.StaticIndexers {
if common.StreamIndexers[common.ProtocolPublish][ad.ID] == nil {
return errors.New("no protocol Publish for peer " + ad.ID.String() + " is set up on the node")
indexerSnapshot = append(indexerSnapshot, ad)
}
common.StreamMuIndexes.RUnlock()
for _, ad := range indexerSnapshot {
var err error
if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolPublish, "", common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{},
&common.StreamMuIndexes); err != nil {
continue
}
stream := common.StreamIndexers[common.ProtocolPublish][ad.ID]
base := indexer.PeerRecord{
base := indexer.PeerRecordPayload{
Name: rec.Name,
DID: rec.DID,
PubKey: rec.PubKey,
ExpiryDate: time.Now().UTC().Add(2 * time.Minute),
}
payload, _ := json.Marshal(base)
hash := sha256.Sum256(payload)
rec.ExpiryDate = base.ExpiryDate
rec.Signature, err = priv.Sign(hash[:])
rec.TTL = 2
rec.PeerRecordPayload = base
rec.Signature, err = priv.Sign(payload)
if err := json.NewEncoder(stream.Stream).Encode(&rec); err != nil { // then publish on stream
return err
}
@@ -151,29 +186,52 @@ func (d *Node) publishPeerRecord(
func (d *Node) GetPeerRecord(
ctx context.Context,
key string,
pidOrdid string,
) ([]*peer.Peer, error) {
var err error
var info map[string]indexer.PeerRecord
if common.StreamIndexers[common.ProtocolPublish] == nil {
return nil, errors.New("no protocol Publish is set up on the node")
}
common.StreamMuIndexes.RLock()
indexerSnapshot2 := make([]*pp.AddrInfo, 0, len(common.StaticIndexers))
for _, ad := range common.StaticIndexers {
if common.StreamIndexers[common.ProtocolPublish][ad.ID] == nil {
return nil, errors.New("no protocol Publish for peer " + ad.ID.String() + " is set up on the node")
}
stream := common.StreamIndexers[common.ProtocolPublish][ad.ID]
if err := json.NewEncoder(stream.Stream).Encode(indexer.GetValue{Key: key}); err != nil {
return nil, err
}
indexerSnapshot2 = append(indexerSnapshot2, ad)
}
common.StreamMuIndexes.RUnlock()
for {
var resp indexer.GetResponse
if err := json.NewDecoder(stream.Stream).Decode(&resp); err != nil {
return nil, err
}
if resp.Found {
// Build the GetValue request: if pidOrdid is neither a UUID DID nor a libp2p
// PeerID, treat it as a human-readable name and let the indexer resolve it.
getReq := indexer.GetValue{Key: pidOrdid}
isNameSearch := false
if pidR, pidErr := pp.Decode(pidOrdid); pidErr == nil {
getReq.PeerID = pidR
} else if _, uuidErr := uuid.Parse(pidOrdid); uuidErr != nil {
// Not a UUID DID → treat pidOrdid as a name substring search.
getReq.Name = pidOrdid
getReq.Key = ""
isNameSearch = true
}
for _, ad := range indexerSnapshot2 {
if common.StreamIndexers, err = common.TempStream(d.Host, *ad, common.ProtocolGet, "",
common.StreamIndexers, map[protocol.ID]*common.ProtocolInfo{}, &common.StreamMuIndexes); err != nil {
continue
}
stream := common.StreamIndexers[common.ProtocolGet][ad.ID]
if err := json.NewEncoder(stream.Stream).Encode(getReq); err != nil {
continue
}
var resp indexer.GetResponse
if err := json.NewDecoder(stream.Stream).Decode(&resp); err != nil {
continue
}
if resp.Found {
if info == nil {
info = resp.Records
} else {
// Aggregate results from all indexers for name searches.
maps.Copy(info, resp.Records)
}
// For exact lookups (PeerID / DID) stop at the first hit.
if !isNameSearch {
break
}
}
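The fan-out above queries each indexer in turn, merging results with `maps.Copy` for name searches and stopping at the first hit for exact PeerID/DID lookups. A reduced sketch of that aggregation policy, with indexer responses modeled as plain maps:

```go
package main

import (
	"fmt"
	"maps"
)

// queryIndexers mimics the fan-out above (nil means "no match from that
// indexer"): name searches merge every response, exact lookups stop at
// the first hit.
func queryIndexers(responses []map[string]string, isNameSearch bool) map[string]string {
	var info map[string]string
	for _, records := range responses {
		if records == nil {
			continue
		}
		if info == nil {
			info = map[string]string{}
		}
		maps.Copy(info, records)
		if !isNameSearch {
			break
		}
	}
	return info
}

func main() {
	a := map[string]string{"peer-a": "/ip4/10.0.0.1/tcp/4001"}
	b := map[string]string{"peer-b": "/ip4/10.0.0.2/tcp/4001"}
	fmt.Println(len(queryIndexers([]map[string]string{a, b}, true)))  // name search: both merged
	fmt.Println(len(queryIndexers([]map[string]string{a, b}, false))) // exact lookup: first hit wins
}
```

Merging by map key also deduplicates: two indexers returning the same record for the same key collapse into one entry, which is why name searches can safely query every indexer.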
@@ -182,7 +240,7 @@ func (d *Node) GetPeerRecord(
for _, pr := range info {
if pk, err := pr.Verify(); err != nil {
return nil, err
} else if ok, p, err := pr.ExtractPeer(d.PeerID.String(), key, pk); err != nil {
} else if ok, p, err := pr.ExtractPeer(d.PeerID.String(), pr.PeerID, pk); err != nil {
return nil, err
} else {
if ok {
@@ -202,12 +260,21 @@ func (d *Node) claimInfo(
if endPoint == "" {
return nil, errors.New("no endpoint found for peer")
}
peerID := uuid.New().String()
priv, err := common.LoadKeyFromFilePrivate()
did := uuid.New().String()
peers := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil).Search(&dbs.Filters{
And: map[string][]dbs.Filter{ // exact match on this host's peer_id
"peer_id": {{Operator: dbs.EQUAL.String(), Value: d.Host.ID().String()}},
},
}, "", false)
if len(peers.Data) > 0 {
did = peers.Data[0].GetID() // reuse the existing DID if this peer is already registered
}
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return nil, err
}
pub, err := common.LoadKeyFromFilePublic()
pub, err := tools.LoadKeyFromFilePublic()
if err != nil {
return nil, err
}
@@ -219,37 +286,52 @@ func (d *Node) claimInfo(
now := time.Now().UTC()
expiry := now.Add(150 * time.Second)
rec := &indexer.PeerRecord{
Name: name,
DID: peerID, // REAL PEER ID
PubKey: pubBytes,
pRec := indexer.PeerRecordPayload{
Name: name,
DID: did, // REAL PEER ID
PubKey: pubBytes,
ExpiryDate: expiry,
}
rec.PeerID = d.Host.ID().String()
d.PeerID = d.Host.ID()
payload, _ := json.Marshal(pRec)
payload, _ := json.Marshal(rec)
hash := sha256.Sum256(payload)
rec.Signature, err = priv.Sign(hash[:])
rec := &indexer.PeerRecord{
PeerRecordPayload: pRec,
}
rec.Signature, err = priv.Sign(payload)
if err != nil {
return nil, err
}
rec.PeerID = d.Host.ID().String()
rec.APIUrl = endPoint
rec.StreamAddress = "/ip4/" + conf.GetConfig().Hostname + " /tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + " /p2p/" + rec.PeerID
rec.StreamAddress = "/ip4/" + conf.GetConfig().Hostname + "/tcp/" + fmt.Sprintf("%v", conf.GetConfig().NodeEndpointPort) + "/p2p/" + rec.PeerID
rec.NATSAddress = oclib.GetConfig().NATSUrl
rec.WalletAddress = "my-wallet"
rec.ExpiryDate = expiry
if err := d.publishPeerRecord(rec); err != nil {
return nil, err
}
/*if pk, err := rec.Verify(); err != nil {
fmt.Println("Verify")
d.peerRecord = rec
if _, err := rec.Verify(); err != nil {
return nil, err
} else {*/
_, p, err := rec.ExtractPeer(peerID, peerID, pub)
return p, err
//}
} else {
_, p, err := rec.ExtractPeer(did, did, pub)
return p, err
}
}
/*
TODO:
- Booking is a new decentralized flow:
we check, wait for a response, validate; it goes through discovery, and we relay.
- The shared workspace is a decentralization concern:
we communicate the movements to the shared peers.
- A share replaces the notion of partnership at the partnershipping scale
-> when we share a workspace we become temporary partners,
whether we originally were or not.
-> we then have the same privileges.
- Admiralty orchestrations work the same way.
An event then triggers the creation of a service key.
We must be able to CRUD a DBObject with signature verification.
*/

View File

@@ -20,7 +20,7 @@ func (ps *PubSubService) handleEventSearch( // only : on partner followings. 3 c
evt *common.Event,
action tools.PubSubAction,
) error {
if !(action == tools.PB_SEARCH_RESPONSE || action == tools.PB_SEARCH) {
if !(action == tools.PB_SEARCH) {
return nil
}
if p, err := ps.Node.GetPeerRecord(ctx, evt.From); err == nil && len(p) > 0 { // peerFrom is Unique
@@ -32,7 +32,6 @@ func (ps *PubSubService) handleEventSearch( // only : on partner followings. 3 c
if err := ps.StreamService.SendResponse(p[0], evt); err != nil {
return err
}
default:
return nil
}

View File

@@ -4,62 +4,57 @@ import (
"context"
"encoding/json"
"errors"
"oc-discovery/daemons/node/common"
"oc-discovery/daemons/node/stream"
"oc-discovery/models"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
)
func (ps *PubSubService) SearchPublishEvent(
ctx context.Context, dt *tools.DataType, typ string, user string, search string) error {
b, err := json.Marshal(map[string]string{"search": search})
if err != nil {
return err
}
switch typ {
case "known": // define Search Strategy
return ps.StreamService.SearchKnownPublishEvent(dt, user, search) //if partners focus only them*/
return ps.StreamService.PublishesCommon(dt, user, &dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"": {{Operator: dbs.NOT.String(), Value: dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.BLACKLIST}},
},
}}},
},
}, b, stream.ProtocolSearchResource) //if partners focus only them*/
case "partner": // define Search Strategy
return ps.StreamService.SearchPartnersPublishEvent(dt, user, search) //if partners focus only them*/
return ps.StreamService.PublishesCommon(dt, user, &dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
},
}, b, stream.ProtocolSearchResource)
case "all": // Gossip PubSub
b, err := json.Marshal(map[string]string{"search": search})
if err != nil {
return err
}
return ps.searchPublishEvent(ctx, dt, user, b)
return ps.publishEvent(ctx, dt, tools.PB_SEARCH, user, b)
default:
return errors.New("no type of research found")
}
}
func (ps *PubSubService) searchPublishEvent(
ctx context.Context, dt *tools.DataType, user string, payload []byte) error {
id, err := oclib.GenerateNodeID()
if err != nil {
return err
}
if err := ps.subscribeEvents(ctx, dt, tools.PB_SEARCH_RESPONSE, id, 60); err != nil { // TODO Catpure Event !
return err
}
return ps.publishEvent(ctx, dt, tools.PB_SEARCH, user, "", payload, false)
}
func (ps *PubSubService) publishEvent(
ctx context.Context, dt *tools.DataType, action tools.PubSubAction, user string,
peerID string, payload []byte, chanNamedByDt bool,
ctx context.Context, dt *tools.DataType, action tools.PubSubAction, user string, payload []byte,
) error {
name := action.String() + "#" + peerID
if chanNamedByDt && dt != nil { // if a datatype is precised then : app.action.datatype#peerID
name = action.String() + "." + (*dt).String() + "#" + peerID
}
from, err := oclib.GenerateNodeID()
priv, err := tools.LoadKeyFromFilePrivate()
if err != nil {
return err
}
priv, err := common.LoadKeyFromFilePrivate()
if err != nil {
return err
}
msg, _ := json.Marshal(models.NewEvent(name, from, dt, user, payload, priv))
topic, err := ps.PS.Join(name)
msg, _ := json.Marshal(models.NewEvent(action.String(), ps.Host.ID().String(), dt, user, payload, priv))
topic, err := ps.PS.Join(action.String())
if err != nil {
return err
}

View File

@@ -10,7 +10,7 @@ import (
)
func (ps *PubSubService) initSubscribeEvents(ctx context.Context) error {
if err := ps.subscribeEvents(ctx, nil, tools.PB_SEARCH, "", -1); err != nil {
if err := ps.subscribeEvents(ctx, nil, tools.PB_SEARCH, ""); err != nil {
return err
}
return nil
@@ -18,7 +18,7 @@ func (ps *PubSubService) initSubscribeEvents(ctx context.Context) error {
// generic function to subscribe to DHT flow of event
func (ps *PubSubService) subscribeEvents(
ctx context.Context, dt *tools.DataType, action tools.PubSubAction, peerID string, timeout int,
ctx context.Context, dt *tools.DataType, action tools.PubSubAction, peerID string,
) error {
logger := oclib.GetLogger()
// define a name app.action#peerID

View File

@@ -2,34 +2,118 @@ package stream
import (
"context"
"crypto/subtle"
"encoding/json"
"errors"
"fmt"
"oc-discovery/daemons/node/common"
"strings"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/models/booking/planner"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/models/resources"
"cloud.o-forge.io/core/oc-lib/tools"
)
func (ps *StreamService) getTopicName(topicName string) tools.PubSubAction {
ns := strings.Split(topicName, ".")
if len(ns) > 0 {
return tools.GetActionString(ns[0])
}
return tools.NONE
type Verify struct {
IsVerified bool `json:"is_verified"`
}
func (ps *StreamService) handleEvent(topicName string, evt *common.Event) error {
action := ps.getTopicName(topicName)
if err := ps.handleEventFromPartner(evt, action); err != nil {
return err
func (ps *StreamService) handleEvent(protocol string, evt *common.Event) error {
fmt.Println("handleEvent")
ps.handleEventFromPartner(evt, protocol)
/*if protocol == ProtocolVerifyResource {
if evt.DataType == -1 {
tools.NewNATSCaller().SetNATSPub(tools.VERIFY_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Method: int(tools.VERIFY_RESOURCE),
Payload: evt.Payload,
})
} else if err := ps.verifyResponse(evt); err != nil {
return err
}
}*/
if protocol == ProtocolSendPlanner {
return ps.sendPlanner(evt)
}
if action == tools.PB_SEARCH_RESPONSE {
if protocol == ProtocolSearchResource && evt.DataType > -1 {
return ps.retrieveResponse(evt)
}
if protocol == ProtocolConsidersResource {
return ps.pass(evt, tools.PB_CONSIDERS)
}
if protocol == ProtocolAdmiraltyConfigResource {
return ps.pass(evt, tools.PB_ADMIRALTY_CONFIG)
}
if protocol == ProtocolMinioConfigResource {
return ps.pass(evt, tools.PB_MINIO_CONFIG)
}
return errors.New("no action authorized available: " + protocol)
}
func (abs *StreamService) verifyResponse(event *common.Event) error { //
res, err := resources.ToResource(int(event.DataType), event.Payload)
if err != nil || res == nil {
return nil
}
verify := Verify{
IsVerified: false,
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(event.DataType), nil)
data := access.LoadOne(res.GetID())
if data.Err == "" && data.Data != nil {
if b, err := json.Marshal(data.Data); err == nil {
if res2, err := resources.ToResource(int(event.DataType), b); err == nil {
verify.IsVerified = subtle.ConstantTimeCompare(res.GetSignature(), res2.GetSignature()) == 1
}
}
}
if b, err := json.Marshal(verify); err == nil {
abs.PublishCommon(nil, "", event.From, ProtocolVerifyResource, b)
}
return nil
}
func (abs *StreamService) sendPlanner(event *common.Event) error { //
if len(event.Payload) == 0 {
if plan, err := planner.GenerateShallow(&tools.APIRequest{Admin: true}); err == nil {
if b, err := json.Marshal(plan); err == nil {
abs.PublishCommon(nil, event.User, event.From, ProtocolSendPlanner, b)
} else {
return err
}
} else {
m := map[string]interface{}{}
if err := json.Unmarshal(event.Payload, &m); err == nil {
m["peer_id"] = event.From
if pl, err := json.Marshal(m); err == nil {
if b, err := json.Marshal(tools.PropalgationMessage{
DataType: -1,
Action: tools.PB_PLANNER,
Payload: pl,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(oclib.BOOKING),
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
}
}
}
}
return nil
}
@@ -40,55 +124,62 @@ func (abs *StreamService) retrieveResponse(event *common.Event) error { //
return nil
}
b, err := json.Marshal(res.Serialize(res))
go tools.NewNATSCaller().SetNATSPub(tools.CATALOG_SEARCH_EVENT, tools.NATSResponse{
go tools.NewNATSCaller().SetNATSPub(tools.SEARCH_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(event.DataType),
Method: int(tools.CATALOG_SEARCH_EVENT),
Method: int(tools.SEARCH_EVENT),
Payload: b,
})
return nil
}
func (ps *StreamService) handleEventFromPartner(evt *common.Event, action tools.PubSubAction) error {
if !(action == tools.PB_CREATE || action == tools.PB_UPDATE || action == tools.PB_DELETE) {
return nil
func (abs *StreamService) pass(event *common.Event, action tools.PubSubAction) error { //
if b, err := json.Marshal(&tools.PropalgationMessage{
Action: action,
DataType: int(event.DataType),
Payload: event.Payload,
}); err == nil {
go tools.NewNATSCaller().SetNATSPub(tools.PROPALGATION_EVENT, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(event.DataType),
Method: int(tools.PROPALGATION_EVENT),
Payload: b,
})
}
resource, err := resources.ToResource(int(evt.DataType), evt.Payload)
if err != nil {
return err
}
b, err := json.Marshal(resource)
if err != nil {
return err
}
switch action {
case tools.PB_SEARCH:
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(nil, evt.From, false)
if len(peers.Data) > 0 {
p := peers.Data[0].(*peer.Peer)
// TODO : something if peer is missing in our side !
ps.SendResponse(p, evt)
} else if p, err := ps.Node.GetPeerRecord(context.Background(), evt.From); err == nil && len(p) > 0 { // peer from is peerID
ps.SendResponse(p[0], evt)
return nil
}
func (ps *StreamService) handleEventFromPartner(evt *common.Event, protocol string) error {
switch protocol {
case ProtocolSearchResource:
if evt.DataType < 0 {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(nil, evt.From, false)
if len(peers.Data) > 0 {
p := peers.Data[0].(*peer.Peer)
// TODO: something if the peer is missing on our side!
ps.SendResponse(p, evt)
} else if p, err := ps.Node.GetPeerRecord(context.Background(), evt.From); err == nil && len(p) > 0 { // peer from is peerID
ps.SendResponse(p[0], evt)
}
}
case tools.PB_CREATE:
case tools.PB_UPDATE:
case ProtocolCreateResource, ProtocolUpdateResource:
fmt.Println("RECEIVED Protocol.Update")
go tools.NewNATSCaller().SetNATSPub(tools.CREATE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(evt.DataType),
Method: int(tools.CREATE_RESOURCE),
Payload: b,
Payload: evt.Payload,
})
case tools.PB_DELETE:
case ProtocolDeleteResource:
go tools.NewNATSCaller().SetNATSPub(tools.REMOVE_RESOURCE, tools.NATSResponse{
FromApp: "oc-discovery",
Datatype: tools.DataType(evt.DataType),
Method: int(tools.REMOVE_RESOURCE),
Payload: b,
Payload: evt.Payload,
})
default:
return errors.New("no action authorized available : " + action.String())
return errors.New("no action authorized available: " + protocol)
}
return nil
}
@@ -96,8 +187,12 @@ func (ps *StreamService) handleEventFromPartner(evt *common.Event, action tools.
func (abs *StreamService) SendResponse(p *peer.Peer, event *common.Event) error {
dts := []oclib.LibDataEnum{oclib.LibDataEnum(event.DataType)}
if event.DataType == -1 { // expect all resources
dts = []oclib.LibDataEnum{oclib.LibDataEnum(oclib.COMPUTE_RESOURCE), oclib.LibDataEnum(oclib.STORAGE_RESOURCE),
oclib.LibDataEnum(oclib.PROCESSING_RESOURCE), oclib.LibDataEnum(oclib.DATA_RESOURCE), oclib.LibDataEnum(oclib.WORKFLOW_RESOURCE)}
dts = []oclib.LibDataEnum{
oclib.LibDataEnum(oclib.COMPUTE_RESOURCE),
oclib.LibDataEnum(oclib.STORAGE_RESOURCE),
oclib.LibDataEnum(oclib.PROCESSING_RESOURCE),
oclib.LibDataEnum(oclib.DATA_RESOURCE),
oclib.LibDataEnum(oclib.WORKFLOW_RESOURCE)}
}
var m map[string]string
err := json.Unmarshal(event.Payload, &m)
@@ -112,9 +207,9 @@ func (abs *StreamService) SendResponse(p *peer.Peer, event *common.Event) error
if j, err := json.Marshal(ss); err == nil {
if event.DataType != -1 {
ndt := tools.DataType(dt.EnumIndex())
abs.PublishResources(&ndt, event.User, peerID, j)
abs.PublishCommon(&ndt, event.User, peerID, ProtocolSearchResource, j)
} else {
abs.PublishResources(nil, event.User, peerID, j)
abs.PublishCommon(nil, event.User, peerID, ProtocolSearchResource, j)
}
}
}

View File

@@ -6,78 +6,54 @@ import (
"errors"
"fmt"
"oc-discovery/daemons/node/common"
"time"
oclib "cloud.o-forge.io/core/oc-lib"
"cloud.o-forge.io/core/oc-lib/dbs"
"cloud.o-forge.io/core/oc-lib/models/peer"
"cloud.o-forge.io/core/oc-lib/tools"
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
)
func (ps *StreamService) PublishResources(dt *tools.DataType, user string, toPeerID string, resource []byte) error {
func (ps *StreamService) PublishesCommon(dt *tools.DataType, user string, filter *dbs.Filters, resource []byte, protos ...protocol.ID) error {
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
p := access.LoadOne(toPeerID)
if p.Err != "" {
return errors.New(p.Err)
} else {
ad, err := pp.AddrInfoFromString(p.Data.(*peer.Peer).StreamAddress)
if err != nil {
return err
p := access.Search(filter, "", false)
for _, pes := range p.Data {
for _, proto := range protos {
if _, err := ps.PublishCommon(dt, user, pes.(*peer.Peer).PeerID, proto, resource); err != nil {
return err
}
}
ps.write(tools.PB_SEARCH, toPeerID, ad, dt, user, resource, ProtocolSearchResource, p.Data.(*peer.Peer).Relation == peer.PARTNER)
}
return nil
}
func (ps *StreamService) SearchKnownPublishEvent(dt *tools.DataType, user string, search string) error {
func (ps *StreamService) PublishCommon(dt *tools.DataType, user string, toPeerID string, proto protocol.ID, resource []byte) (*common.Stream, error) {
fmt.Println("PublishCommon")
if toPeerID == ps.Key.String() {
return nil, errors.New("can't send to ourselves")
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(&dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"": {{Operator: dbs.NOT.String(), Value: dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.BLACKLIST}},
},
}}},
p := access.Search(&dbs.Filters{
And: map[string][]dbs.Filter{ // search by name if no filters are provided
"peer_id": {{Operator: dbs.EQUAL.String(), Value: toPeerID}},
},
}, search, false)
if peers.Err != "" {
return errors.New(peers.Err)
} else {
b, err := json.Marshal(map[string]string{"search": search})
if err != nil {
return err
}
for _, p := range peers.Data {
ad, err := pp.AddrInfoFromString(p.(*peer.Peer).StreamAddress)
if err != nil {
continue
}
ps.write(tools.PB_SEARCH, p.GetID(), ad, dt, user, b, ProtocolSearchResource, p.(*peer.Peer).Relation == peer.PARTNER)
}
}, toPeerID, false)
var pe *peer.Peer
if len(p.Data) > 0 && p.Data[0].(*peer.Peer).Relation != peer.BLACKLIST {
pe = p.Data[0].(*peer.Peer)
} else if pps, err := ps.Node.GetPeerRecord(context.Background(), toPeerID); err == nil && len(pps) > 0 {
pe = pps[0]
}
return nil
}
func (ps *StreamService) SearchPartnersPublishEvent(dt *tools.DataType, user string, search string) error {
if peers, err := ps.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex())); err != nil {
return err
} else {
b, err := json.Marshal(map[string]string{"search": search})
if pe != nil {
ad, err := pp.AddrInfoFromString(p.Data[0].(*peer.Peer).StreamAddress)
if err != nil {
return err
}
for _, p := range peers {
ad, err := pp.AddrInfoFromString(p.StreamAddress)
if err != nil {
continue
}
ps.write(tools.PB_SEARCH, p.GetID(), ad, dt, user, b, ProtocolSearchResource, true)
return nil, err
}
return ps.write(toPeerID, ad, dt, user, resource, proto)
}
return nil
return nil, errors.New("invalid peer " + toPeerID)
}
func (ps *StreamService) ToPartnerPublishEvent(
@@ -87,102 +63,79 @@ func (ps *StreamService) ToPartnerPublishEvent(
if err := json.Unmarshal(payload, &p); err != nil {
return err
}
ad, err := pp.AddrInfoFromString(p.StreamAddress)
pid, err := pp.Decode(p.PeerID)
if err != nil {
return err
}
ps.mu.Lock()
defer ps.mu.Unlock()
if p.Relation == peer.PARTNER {
if ps.Streams[ProtocolHeartbeatPartner] == nil {
ps.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
ps.ConnectToPartner(ad.ID, ad)
} else if ps.Streams[ProtocolHeartbeatPartner] != nil && ps.Streams[ProtocolHeartbeatPartner][ad.ID] != nil {
for _, pids := range ps.Streams {
if pids[ad.ID] != nil {
delete(pids, ad.ID)
if pe, err := oclib.GetMySelf(); err != nil {
return err
} else if pe.GetID() == p.GetID() {
return fmt.Errorf("can't send to ourselves")
} else {
pe.Relation = p.Relation
pe.Verify = false
if b2, err := json.Marshal(pe); err == nil {
if _, err := ps.PublishCommon(dt, user, p.PeerID, ProtocolUpdateResource, b2); err != nil {
return err
}
if p.Relation == peer.PARTNER {
if ps.Streams[ProtocolHeartbeatPartner] == nil {
ps.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
fmt.Println("SHOULD CONNECT")
ps.ConnectToPartner(p.StreamAddress)
} else if ps.Streams[ProtocolHeartbeatPartner] != nil && ps.Streams[ProtocolHeartbeatPartner][pid] != nil {
for _, pids := range ps.Streams {
if pids[pid] != nil {
delete(pids, pid)
}
}
}
}
}
return nil
}
if peers, err := ps.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex())); err != nil {
return err
} else {
for _, p := range peers {
for _, protocol := range protocols {
ad, err := pp.AddrInfoFromString(p.StreamAddress)
if err != nil {
continue
}
ps.write(action, p.GetID(), ad, dt, user, payload, protocol, true)
}
}
ks := []protocol.ID{}
for k := range protocolsPartners {
ks = append(ks, k)
}
ps.PublishesCommon(dt, user, &dbs.Filters{ // filter by like name, short_description, description, owner, url if no filters are provided
And: map[string][]dbs.Filter{
"relation": {{Operator: dbs.EQUAL.String(), Value: peer.PARTNER}},
},
}, payload, ks...)
return nil
}
func (s *StreamService) write(
action tools.PubSubAction,
did string,
peerID *pp.AddrInfo,
dt *tools.DataType,
user string,
payload []byte,
proto protocol.ID,
isAPartner bool) error {
proto protocol.ID) (*common.Stream, error) {
logger := oclib.GetLogger()
name := action.String() + "#" + peerID.ID.String()
if dt != nil {
name = action.String() + "." + (*dt).String() + "#" + peerID.ID.String()
var err error
pts := map[protocol.ID]*common.ProtocolInfo{}
for k, v := range protocols {
pts[k] = v
}
s.mu.Lock()
defer s.mu.Unlock()
if s.Streams[proto] == nil {
s.Streams[proto] = map[pp.ID]*common.Stream{}
for k, v := range protocolsPartners {
pts[k] = v
}
// should create a very temp stream
if s.Streams, err = common.TempStream(s.Host, *peerID, proto, did, s.Streams, pts, &s.Mu); err != nil {
return nil, errors.New("no stream available for protocol " + fmt.Sprintf("%v", proto) + " from PID " + peerID.ID.String())
if s.Streams[proto][peerID.ID] == nil {
// should create a very temp stream
ctxTTL, err := context.WithTimeout(context.Background(), 60*time.Second)
if err == nil {
if isAPartner {
ctxTTL = context.Background()
}
if s.Host.Network().Connectedness(peerID.ID) != network.Connected {
_ = s.Host.Connect(ctxTTL, *peerID)
str, err := s.Host.NewStream(ctxTTL, peerID.ID, ProtocolHeartbeatPartner)
if err == nil {
s.Streams[ProtocolHeartbeatPartner][peerID.ID] = &common.Stream{
DID: did,
Stream: str,
Expiry: time.Now().UTC().Add(5 * time.Second),
}
str2, err := s.Host.NewStream(ctxTTL, peerID.ID, proto)
if err == nil {
s.Streams[proto][peerID.ID] = &common.Stream{
DID: did,
Stream: str2,
Expiry: time.Now().UTC().Add(5 * time.Second),
}
}
}
}
}
return errors.New("no stream available for protocol " + fmt.Sprintf("%v", proto) + " from PID " + peerID.ID.String())
}
stream := s.Streams[proto][peerID.ID]
enc := json.NewEncoder(stream.Stream)
evt := common.NewEvent(name, peerID.ID.String(), dt, user, payload)
if err := enc.Encode(evt); err != nil {
evt := common.NewEvent(string(proto), peerID.ID.String(), dt, user, payload)
fmt.Println("SEND EVENT ", evt.From, evt.DataType, evt.Timestamp)
if err := json.NewEncoder(stream.Stream).Encode(evt); err != nil {
stream.Stream.Close()
logger.Err(err)
return nil
return stream, nil
}
return nil
return stream, nil
}


@@ -19,20 +19,35 @@ import (
"github.com/libp2p/go-libp2p/core/network"
pp "github.com/libp2p/go-libp2p/core/peer"
"github.com/libp2p/go-libp2p/core/protocol"
ma "github.com/multiformats/go-multiaddr"
)
const ProtocolConsidersResource = "/opencloud/resource/considers/1.0"
const ProtocolMinioConfigResource = "/opencloud/minio/config/1.0"
const ProtocolAdmiraltyConfigResource = "/opencloud/admiralty/config/1.0"
const ProtocolSearchResource = "/opencloud/resource/search/1.0"
const ProtocolCreateResource = "/opencloud/resource/create/1.0"
const ProtocolUpdateResource = "/opencloud/resource/update/1.0"
const ProtocolDeleteResource = "/opencloud/resource/delete/1.0"
const ProtocolSendPlanner = "/opencloud/resource/planner/1.0"
const ProtocolVerifyResource = "/opencloud/resource/verify/1.0"
const ProtocolHeartbeatPartner = "/opencloud/resource/heartbeat/partner/1.0"
var protocols = []protocol.ID{
ProtocolSearchResource,
ProtocolCreateResource,
ProtocolUpdateResource,
ProtocolDeleteResource,
var protocols = map[protocol.ID]*common.ProtocolInfo{
ProtocolConsidersResource: {WaitResponse: false, TTL: 3 * time.Second},
ProtocolSendPlanner: {WaitResponse: true, TTL: 24 * time.Hour},
ProtocolSearchResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolVerifyResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolMinioConfigResource: {WaitResponse: true, TTL: 1 * time.Minute},
ProtocolAdmiraltyConfigResource: {WaitResponse: true, TTL: 1 * time.Minute},
}
var protocolsPartners = map[protocol.ID]*common.ProtocolInfo{
ProtocolCreateResource: {TTL: 3 * time.Second},
ProtocolUpdateResource: {TTL: 3 * time.Second},
ProtocolDeleteResource: {TTL: 3 * time.Second},
}
type StreamService struct {
@@ -41,7 +56,7 @@ type StreamService struct {
Node common.DiscoveryPeer
Streams common.ProtocolStream
maxNodesConn int
mu sync.Mutex
Mu sync.RWMutex
// Stream map[protocol.ID]map[pp.ID]*daemons.Stream
}
@@ -56,117 +71,123 @@ func InitStream(ctx context.Context, h host.Host, key pp.ID, maxNode int, node c
}
logger.Info().Msg("handle to partner heartbeat protocol...")
service.Host.SetStreamHandler(ProtocolHeartbeatPartner, service.HandlePartnerHeartbeat)
for proto := range protocols {
service.Host.SetStreamHandler(proto, service.HandleResponse)
}
logger.Info().Msg("connect to partners...")
service.connectToPartners() // we set up a stream
go service.StartGC(30 * time.Second)
go service.StartGC(8 * time.Second)
return service, nil
}
func (s *StreamService) HandlePartnerHeartbeat(stream network.Stream) {
pid, hb, err := common.CheckHeartbeat(s.Host, stream, s.maxNodesConn)
if err != nil {
return
func (s *StreamService) HandleResponse(stream network.Stream) {
s.Mu.Lock()
stream.Protocol()
if s.Streams[stream.Protocol()] == nil {
s.Streams[stream.Protocol()] = map[pp.ID]*common.Stream{}
}
s.mu.Lock()
defer s.mu.Unlock()
expiry := 1 * time.Minute
if protocols[stream.Protocol()] != nil {
expiry = protocols[stream.Protocol()].TTL
} else if protocolsPartners[stream.Protocol()] != nil {
expiry = protocolsPartners[stream.Protocol()].TTL
}
s.Streams[stream.Protocol()][stream.Conn().RemotePeer()] = &common.Stream{
Stream: stream,
Expiry: time.Now().UTC().Add(expiry + 1*time.Minute),
}
s.Mu.Unlock()
go s.readLoop(s.Streams[stream.Protocol()][stream.Conn().RemotePeer()],
stream.Conn().RemotePeer(),
stream.Protocol(), protocols[stream.Protocol()])
}
func (s *StreamService) HandlePartnerHeartbeat(stream network.Stream) {
s.Mu.Lock()
if s.Streams[ProtocolHeartbeatPartner] == nil {
s.Streams[ProtocolHeartbeatPartner] = map[pp.ID]*common.Stream{}
}
streams := s.Streams[ProtocolHeartbeatPartner]
streamsAnonym := map[pp.ID]common.HeartBeatStreamed{}
for k, v := range streams {
streamsAnonym[k] = v
}
s.Mu.Unlock()
pid, hb, err := common.CheckHeartbeat(s.Host, stream, json.NewDecoder(stream), streamsAnonym, &s.Mu, s.maxNodesConn)
if err != nil {
return
}
s.Mu.Lock()
defer s.Mu.Unlock()
// if record already seen update last seen
if rec, ok := streams[*pid]; ok {
rec.DID = hb.DID
rec.Expiry = time.Now().UTC().Add(2 * time.Minute)
rec.Expiry = time.Now().UTC().Add(10 * time.Second)
} else { // if not in stream ?
pid := stream.Conn().RemotePeer()
ai, err := pp.AddrInfoFromP2pAddr(stream.Conn().RemoteMultiaddr())
val, err := stream.Conn().RemoteMultiaddr().ValueForProtocol(ma.P_IP4)
if err == nil {
s.ConnectToPartner(pid, ai)
s.ConnectToPartner(val)
}
}
go s.StartGC(30 * time.Second)
// GC is already running via InitStream — starting a new ticker goroutine on
// every heartbeat would leak an unbounded number of goroutines.
}
func (s *StreamService) connectToPartners() error {
peers, err := s.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex()))
if err != nil {
return err
}
for _, p := range peers {
ad, err := pp.AddrInfoFromString(p.StreamAddress)
if err != nil {
continue
}
pid, err := pp.Decode(p.PeerID)
if err != nil {
continue
}
s.ConnectToPartner(pid, ad)
// heartbeat your partner.
}
for _, proto := range protocols {
logger := oclib.GetLogger()
for proto, info := range protocolsPartners {
f := func(ss network.Stream) {
if s.Streams[proto] == nil {
s.Streams[proto] = map[pp.ID]*common.Stream{}
}
s.Streams[proto][ss.Conn().RemotePeer()] = &common.Stream{
Stream: ss,
Expiry: time.Now().UTC().Add(2 * time.Minute),
Expiry: time.Now().UTC().Add(10 * time.Second),
}
s.readLoop(s.Streams[proto][ss.Conn().RemotePeer()])
go s.readLoop(s.Streams[proto][ss.Conn().RemotePeer()], ss.Conn().RemotePeer(), proto, info)
}
fmt.Println("SetStreamHandler", proto)
logger.Info().Msg("SetStreamHandler " + string(proto))
s.Host.SetStreamHandler(proto, f)
}
// TODO: if handled from a partner, heartbeat back
peers, err := s.searchPeer(fmt.Sprintf("%v", peer.PARTNER.EnumIndex()))
if err != nil {
logger.Err(err)
return err
}
for _, p := range peers {
s.ConnectToPartner(p.StreamAddress)
}
return nil
}
func (s *StreamService) ConnectToPartner(pid pp.ID, ad *pp.AddrInfo) {
func (s *StreamService) ConnectToPartner(address string) {
logger := oclib.GetLogger()
for _, proto := range protocols {
f := func(ss network.Stream) {
if s.Streams[proto] == nil {
s.Streams[proto] = map[pp.ID]*common.Stream{}
}
s.Streams[proto][pid] = &common.Stream{
Stream: ss,
Expiry: time.Now().UTC().Add(2 * time.Minute),
}
s.readLoop(s.Streams[proto][pid])
}
if s.Host.Network().Connectedness(ad.ID) != network.Connected {
if err := s.Host.Connect(context.Background(), *ad); err != nil {
logger.Err(err)
continue
}
}
s.Streams = common.AddStreamProtocol(nil, s.Streams, s.Host, proto, pid, s.Key, false, &f)
if ad, err := pp.AddrInfoFromString(address); err == nil {
logger.Info().Msg("Connect to Partner " + ProtocolHeartbeatPartner + " " + address)
common.SendHeartbeat(context.Background(), ProtocolHeartbeatPartner, conf.GetConfig().Name,
s.Host, s.Streams, map[string]*pp.AddrInfo{address: ad}, nil, 20*time.Second)
}
common.SendHeartbeat(context.Background(), ProtocolHeartbeatPartner, conf.GetConfig().Name,
s.Host, s.Streams, []*pp.AddrInfo{ad}, 20*time.Second)
}
func (s *StreamService) searchPeer(search string) ([]*peer.Peer, error) {
/* TODO: for tests only — an env var defines partner addresses; deserialize it */
ps := []*peer.Peer{}
if conf.GetConfig().PeerIDS != "" {
for _, peerID := range strings.Split(conf.GetConfig().PeerIDS, ",") {
ppID := strings.Split(peerID, ":")
ppID := strings.Split(peerID, "/")
ps = append(ps, &peer.Peer{
AbstractObject: utils.AbstractObject{
UUID: uuid.New().String(),
Name: ppID[1],
},
PeerID: ppID[1],
StreamAddress: "/ip4/127.0.0.1/tcp/" + ppID[0] + "/p2p/" + ppID[1],
State: peer.ONLINE,
PeerID: ppID[len(ppID)-1],
StreamAddress: peerID,
Relation: peer.PARTNER,
})
}
}
access := oclib.NewRequestAdmin(oclib.LibDataEnum(oclib.PEER), nil)
peers := access.Search(nil, search, false)
for _, p := range peers.Data {
@@ -194,8 +215,8 @@ func (s *StreamService) StartGC(interval time.Duration) {
}
func (s *StreamService) gc() {
s.mu.Lock()
defer s.mu.Unlock()
s.Mu.Lock()
defer s.Mu.Unlock()
now := time.Now().UTC()
if s.Streams[ProtocolHeartbeatPartner] == nil {
@@ -214,15 +235,33 @@ func (s *StreamService) gc() {
}
}
func (ps *StreamService) readLoop(s *common.Stream) {
func (ps *StreamService) readLoop(s *common.Stream, id pp.ID, proto protocol.ID, protocolInfo *common.ProtocolInfo) {
defer s.Stream.Close()
defer func() {
ps.Mu.Lock()
defer ps.Mu.Unlock()
delete(ps.Streams[proto], id)
}()
loop := true
if !protocolInfo.PersistantStream && !protocolInfo.WaitResponse { // 2 sec is enough... to wait a response
time.AfterFunc(2*time.Second, func() {
loop = false
})
}
for {
if !loop {
break
}
var evt common.Event
if err := json.NewDecoder(s.Stream).Decode(&evt); err != nil {
s.Stream.Close()
continue
// Any decode error (EOF, reset, malformed JSON) terminates the loop;
// continuing on a dead/closed stream creates an infinite spin.
return
}
ps.handleEvent(evt.Type, &evt)
if protocolInfo.WaitResponse && !protocolInfo.PersistantStream {
break
}
}
}
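The corrected `readLoop` above exits on the first decode error instead of spinning on a closed stream. That behaviour can be sketched independently of libp2p with a `json.Decoder` over any reader (`Event` and the `handle` callback are stand-ins for the project's types):

```go
package main

import (
	"encoding/json"
	"io"
	"strings"
)

type Event struct {
	Type string `json:"type"`
	From string `json:"from"`
}

// readEvents decodes events until the first error (EOF, reset, or
// malformed JSON) and reports how many were handled. Continuing after
// a decode error would spin forever on a dead stream.
func readEvents(r io.Reader, handle func(Event)) int {
	dec := json.NewDecoder(r)
	n := 0
	for {
		var evt Event
		if err := dec.Decode(&evt); err != nil {
			return n // any error terminates the loop
		}
		handle(evt)
		n++
	}
}

// countFrom is a small convenience wrapper for exercising the loop
// against an in-memory payload.
func countFrom(payload string) int {
	return readEvents(strings.NewReader(payload), func(Event) {})
}
```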

demo-discovery.sh Executable file

@@ -0,0 +1,33 @@
#!/bin/bash
IMAGE_BASE_NAME="oc-discovery"
DOCKERFILE_PATH="."
docker network create \
--subnet=172.40.0.0/24 \
discovery
for i in $(seq ${1:-0} ${2:-3}); do
NUM=$((i + 1))
PORT=$((4000 + $NUM))
IMAGE_NAME="${IMAGE_BASE_NAME}:${NUM}"
echo "▶ Building image ${IMAGE_NAME} with CONF_NUM=${NUM}"
docker build \
--build-arg CONF_NUM=${NUM} \
-t "${IMAGE_BASE_NAME}_${NUM}" \
${DOCKERFILE_PATH}
docker kill "${IMAGE_BASE_NAME}_${NUM}" || true
docker rm "${IMAGE_BASE_NAME}_${NUM}" || true
echo "▶ Running container ${IMAGE_NAME} on port ${PORT}:${PORT}"
docker run -d \
--network="${3:-oc}" \
-p ${PORT}:${PORT} \
--name "${IMAGE_BASE_NAME}_${NUM}" \
"${IMAGE_BASE_NAME}_${NUM}"
docker network connect --ip "172.40.0.${NUM}" discovery "${IMAGE_BASE_NAME}_${NUM}"
done


@@ -1,10 +0,0 @@
{
"port": 8080,
"redisurl":"localhost:6379",
"redispassword":"",
"zincurl":"http://localhost:4080",
"zinclogin":"admin",
"zincpassword":"admin",
"identityfile":"/app/identity.json",
"defaultpeers":"/app/peers.json"
}


@@ -1,10 +0,0 @@
{
"port": 8080,
"redisurl":"localhost:6379",
"redispassword":"",
"zincurl":"http://localhost:4080",
"zinclogin":"admin",
"zincpassword":"admin",
"identityfile":"/app/identity.json",
"defaultpeers":"/app/peers.json"
}

docker_discovery1.json Normal file

@@ -0,0 +1,6 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer"
}

docker_discovery10.json Normal file

@@ -0,0 +1,10 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4010,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
"MIN_INDEXER": 2,
"PEER_IDS": "/ip4/172.40.0.9/tcp/4009/p2p/12D3KooWGnQfKwX9E4umCPE8dUKZuig4vw5BndDowRLEbGmcZyta"
}

docker_discovery2.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4002,
"INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery3.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4003,
"INDEXER_ADDRESSES": "/ip4/172.40.0.2/tcp/4002/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery4.json Normal file

@@ -0,0 +1,9 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4004,
"INDEXER_ADDRESSES": "/ip4/172.40.0.1/tcp/4001/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu",
"PEER_IDS": "/ip4/172.40.0.3/tcp/4003/p2p/12D3KooWBh9kZrekBAE5G33q4jCLNRAzygem3gP1mMdK8mhoCTaw"
}

docker_discovery5.json Normal file

@@ -0,0 +1,7 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "native-indexer",
"NODE_ENDPOINT_PORT": 4005
}

docker_discovery6.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "native-indexer",
"NODE_ENDPOINT_PORT": 4006,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery7.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4007,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u"
}

docker_discovery8.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "indexer",
"NODE_ENDPOINT_PORT": 4008,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}

docker_discovery9.json Normal file

@@ -0,0 +1,8 @@
{
"MONGO_URL":"mongodb://mongo:27017/",
"MONGO_DATABASE":"DC_myDC",
"NATS_URL": "nats://nats:4222",
"NODE_MODE": "node",
"NODE_ENDPOINT_PORT": 4009,
"NATIVE_INDEXER_ADDRESSES": "/ip4/172.40.0.6/tcp/4006/p2p/12D3KooWC3GNStak8KCYtJq11Dxiq45EJV53z1ZvKetMcZBeBX6u,/ip4/172.40.0.5/tcp/4005/p2p/12D3KooWGn3j4XqTSrjJDGGpTQERdDV5TPZdhQp87rAUnvQssvQu"
}
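The docker_discovery*.json demo files above all share the same shape. A sketch of unmarshalling one into a Go struct (field names are inferred from the JSON keys; the real config loader in oc-discovery may differ):

```go
package main

import "encoding/json"

// DiscoveryConfig mirrors the keys used by the docker_discovery*.json
// demo files; comma-separated multiaddr lists stay as raw strings.
type DiscoveryConfig struct {
	MongoURL               string `json:"MONGO_URL"`
	MongoDatabase          string `json:"MONGO_DATABASE"`
	NATSURL                string `json:"NATS_URL"`
	NodeMode               string `json:"NODE_MODE"` // "node", "indexer", or "native-indexer"
	NodeEndpointPort       int    `json:"NODE_ENDPOINT_PORT"`
	IndexerAddresses       string `json:"INDEXER_ADDRESSES"`
	NativeIndexerAddresses string `json:"NATIVE_INDEXER_ADDRESSES"`
	MinIndexer             int    `json:"MIN_INDEXER"`
	PeerIDs                string `json:"PEER_IDS"`
}

// parseConfig decodes one demo config file's contents.
func parseConfig(raw []byte) (DiscoveryConfig, error) {
	var c DiscoveryConfig
	err := json.Unmarshal(raw, &c)
	return c, err
}
```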


@@ -0,0 +1,56 @@
sequenceDiagram
title Node Initialization — Peer A (InitNode)
participant MainA as main (Peer A)
participant NodeA as Node A
participant libp2pA as libp2p (Peer A)
participant DBA as DB Peer A (oc-lib)
participant NATSA as NATS A
participant IndexerA as Indexer (shared)
participant StreamA as StreamService A
participant PubSubA as PubSubService A
MainA->>NodeA: InitNode(isNode, isIndexer, isNativeIndexer)
NodeA->>NodeA: LoadKeyFromFilePrivate() → priv
NodeA->>NodeA: LoadPSKFromFile() → psk
NodeA->>libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA-->>NodeA: host A (PeerID_A)
Note over NodeA: isNode == true
NodeA->>libp2pA: NewGossipSub(ctx, host)
libp2pA-->>NodeA: ps (GossipSub)
NodeA->>IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
Note over IndexerA: Long-lived heartbeat established<br/>Quality score computed (bw + uptime + diversity)
IndexerA-->>NodeA: OK
NodeA->>NodeA: claimInfo(name, hostname)
NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
NodeA->>IndexerA: json.Encode(signed PeerRecord A)
IndexerA->>IndexerA: DHT.PutValue("/node/"+DID_A, record)
NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
DBA-->>NodeA: local peer A (or generated UUID)
NodeA->>NodeA: StartGC(30s) — GC on StreamRecords
NodeA->>StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA->>StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA->>DBA: Search(PEER, PARTNER) → partner list
DBA-->>StreamA: [] (no partners at startup)
StreamA-->>NodeA: StreamService A
NodeA->>PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA->>PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA-->>NodeA: PubSubService A
NodeA->>NodeA: SubscribeToSearch(ps, callback)
Note over NodeA: callback: GetPeerRecord(evt.From)<br/>→ StreamService.SendResponse
NodeA->>NATSA: ListenNATS(nodeA)
Note over NATSA: Registers handlers:<br/>CREATE_RESOURCE, PROPALGATION_EVENT
NodeA-->>MainA: *Node A ready


@@ -0,0 +1,58 @@
@startuml
title Node Initialization — Peer A (InitNode)
participant "main (Peer A)" as MainA
participant "Node A" as NodeA
participant "libp2p (Peer A)" as libp2pA
participant "DB Peer A (oc-lib)" as DBA
participant "NATS A" as NATSA
participant "Indexer (shared)" as IndexerA
participant "StreamService A" as StreamA
participant "PubSubService A" as PubSubA
MainA -> NodeA: InitNode(isNode, isIndexer, isNativeIndexer)
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv
NodeA -> NodeA: LoadPSKFromFile() → psk
NodeA -> libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA --> NodeA: host A (PeerID_A)
note over NodeA: isNode == true
NodeA -> libp2pA: NewGossipSub(ctx, host)
libp2pA --> NodeA: ps (GossipSub)
NodeA -> IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
note over IndexerA: Long-lived heartbeat established\nQuality score computed (bw + uptime + diversity)
IndexerA --> NodeA: OK
NodeA -> NodeA: claimInfo(name, hostname)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)
IndexerA -> IndexerA: DHT.PutValue("/node/"+DID_A, record)
NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: local peer A (or generated UUID)
NodeA -> NodeA: StartGC(30s) — GC on StreamRecords
NodeA -> StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA -> StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA -> DBA: Search(PEER, PARTNER) → partner list
DBA --> StreamA: [] (no partners at startup)
StreamA --> NodeA: StreamService A
NodeA -> PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA -> PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA --> NodeA: PubSubService A
NodeA -> NodeA: SubscribeToSearch(ps, callback)
note over NodeA: callback: GetPeerRecord(evt.From)\n→ StreamService.SendResponse
NodeA -> NATSA: ListenNATS(nodeA)
note over NATSA: Registers handlers:\nCREATE_RESOURCE, PROPALGATION_EVENT
NodeA --> MainA: *Node A ready
@enduml


@@ -0,0 +1,38 @@
sequenceDiagram
title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)
participant DBA as DB Peer A (oc-lib)
participant NodeA as Node A
participant IndexerA as Indexer (shared)
participant DHT as DHT Kademlia
participant NATSA as NATS A
NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
DBA-->>NodeA: existing peer (DID_A) or new UUID
NodeA->>NodeA: LoadKeyFromFilePrivate() → priv A
NodeA->>NodeA: LoadKeyFromFilePublic() → pub A
NodeA->>NodeA: crypto.MarshalPublicKey(pub A) → pubBytes
NodeA->>NodeA: Build PeerRecord A {<br/> Name, DID, PubKey,<br/> PeerID: PeerID_A,<br/> APIUrl: hostname,<br/> StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,<br/> NATSAddress, WalletAddress<br/>}
NodeA->>NodeA: sha256(json(rec)) → hash
NodeA->>NodeA: priv.Sign(hash) → signature
NodeA->>NodeA: rec.ExpiryDate = now + 150s
loop For each StaticIndexer (Indexer A, B, …)
NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
NodeA->>IndexerA: json.Encode(signed PeerRecord A)
IndexerA->>IndexerA: Verify signature
IndexerA->>IndexerA: Check active heartbeat stream for PeerID_A
IndexerA->>DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT-->>IndexerA: ok
end
NodeA->>NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA->>NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA->>DBA: Upsert Peer A (SearchAttr: peer_id)
DBA-->>NATSA: ok
NodeA-->>NodeA: *peer.Peer A (SELF)


@@ -0,0 +1,40 @@
@startuml
title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "Indexer (shared)" as IndexerA
participant "DHT Kademlia" as DHT
participant "NATS A" as NATSA
NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: existing peer (DID_A) or new UUID
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv A
NodeA -> NodeA: LoadKeyFromFilePublic() → pub A
NodeA -> NodeA: crypto.MarshalPublicKey(pub A) → pubBytes
NodeA -> NodeA: Build PeerRecord A {\n Name, DID, PubKey,\n PeerID: PeerID_A,\n APIUrl: hostname,\n StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,\n NATSAddress, WalletAddress\n}
NodeA -> NodeA: sha256(json(rec)) → hash
NodeA -> NodeA: priv.Sign(hash) → signature
NodeA -> NodeA: rec.ExpiryDate = now + 150s
loop For each StaticIndexer (Indexer A, B, ...)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)
IndexerA -> IndexerA: Verify signature
IndexerA -> IndexerA: Check active heartbeat stream for PeerID_A
IndexerA -> DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT --> IndexerA: ok
end
NodeA -> NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA -> NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA -> DBA: Upsert Peer A (SearchAttr: peer_id)
DBA --> NATSA: ok
NodeA --> NodeA: *peer.Peer A (SELF)
@enduml


@@ -0,0 +1,47 @@
sequenceDiagram
title Indexer — Dual heartbeat (Peer A + Peer B → shared Indexer)
participant NodeA as Node A
participant NodeB as Node B
participant Indexer as IndexerService (shared)
Note over NodeA,NodeB: Each peer ticks every 20s
par Peer A heartbeat
NodeA->>Indexer: NewStream /opencloud/heartbeat/1.0
NodeA->>Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})
Indexer->>Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
Note over Indexer: len(peers) < maxNodes ?
Indexer->>Indexer: getBandwidthChallenge(512-2048 bytes, stream)
Indexer->>NodeA: Write(random payload)
NodeA->>Indexer: Echo(same payload)
Indexer->>Indexer: Measure round-trip → Mbps A
Indexer->>Indexer: getDiversityRate(host, IndexersBinded_A)
Note over Indexer: /24 subnet diversity of the bound indexers
Indexer->>Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
Note over Indexer: Score = 0.4×uptime + 0.4×bandwidth + 0.2×diversity
alt Score A < 75
Indexer->>NodeA: (close stream)
else Score A ≥ 75
Indexer->>Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
end
and Peer B heartbeat
NodeB->>Indexer: NewStream /opencloud/heartbeat/1.0
NodeB->>Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})
Indexer->>Indexer: CheckHeartbeat → getBandwidthChallenge
Indexer->>NodeB: Write(random payload)
NodeB->>Indexer: Echo(same payload)
Indexer->>Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)
alt Score B ≥ 75
Indexer->>Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
end
end
Note over Indexer: Both peers are now<br/>registered with their active streams


@@ -0,0 +1,49 @@
@startuml
title Indexer — Dual heartbeat (Peer A + Peer B → shared Indexer)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
note over NodeA,NodeB: Each peer ticks every 20s
par Peer A heartbeat
NodeA -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeA -> Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})
Indexer -> Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
note over Indexer: len(peers) < maxNodes ?
Indexer -> Indexer: getBandwidthChallenge(512-2048 bytes, stream)
Indexer -> NodeA: Write(random payload)
NodeA -> Indexer: Echo(same payload)
Indexer -> Indexer: Measure round-trip → Mbps A
Indexer -> Indexer: getDiversityRate(host, IndexersBinded_A)
note over Indexer: /24 subnet diversity of the bound indexers
Indexer -> Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
note over Indexer: Score = 0.4×uptime + 0.4×bandwidth + 0.2×diversity
alt Score A < 75
Indexer -> NodeA: (close stream)
else Score A >= 75
Indexer -> Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
end
else Peer B heartbeat
NodeB -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeB -> Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})
Indexer -> Indexer: CheckHeartbeat → getBandwidthChallenge
Indexer -> NodeB: Write(random payload)
NodeB -> Indexer: Echo(same payload)
Indexer -> Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)
alt Score B >= 75
Indexer -> Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
end
end
note over Indexer: Both peers are now\nregistered with their active streams
@enduml
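The weighted admission score in the diagrams above can be sketched as a pure function (the 0.4/0.4/0.2 weights and the 75 threshold come from the diagrams; the /24 diversity helper is an illustrative reading of getDiversityRate):

```go
package main

import "strings"

// ComputeIndexerScore weights uptime, bandwidth, and subnet diversity
// (each expressed as a percentage 0-100) into one quality score.
func ComputeIndexerScore(uptime, bandwidth, diversity float64) float64 {
	return 0.4*uptime + 0.4*bandwidth + 0.2*diversity
}

// Accepted applies the admission threshold the heartbeat path uses:
// below 75 the indexer closes the stream instead of registering it.
func Accepted(score float64) bool { return score >= 75 }

// DiversityRate returns the share of distinct /24 subnets among the
// given IPv4 addresses, as a percentage. More distinct subnets among
// a peer's bound indexers means a higher diversity contribution.
func DiversityRate(ips []string) float64 {
	if len(ips) == 0 {
		return 0
	}
	subnets := map[string]struct{}{}
	for _, ip := range ips {
		parts := strings.Split(ip, ".")
		if len(parts) == 4 {
			subnets[strings.Join(parts[:3], ".")] = struct{}{}
		}
	}
	return 100 * float64(len(subnets)) / float64(len(ips))
}
```

Weighting uptime and bandwidth equally while keeping diversity at 0.2 means a peer cannot pass on diversity alone, but clustering all indexers in one subnet still costs it the threshold margin.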


@@ -0,0 +1,41 @@
sequenceDiagram
title Indexer — Peer A publishes, Peer B publishes (handleNodePublish → DHT)
participant NodeA as Node A
participant NodeB as Node B
participant Indexer as IndexerService (shared)
participant DHT as DHT Kademlia
Note over NodeA: After claimInfo or TTL refresh
par Peer A publishes its PeerRecord
NodeA->>Indexer: TempStream /opencloud/record/publish/1.0
NodeA->>Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})
Indexer->>Indexer: Verify sig_A (rebuilds minimal rec, pubKey_A.Verify)
Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists
alt Active heartbeat for A
Indexer->>Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
Indexer->>DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
DHT-->>Indexer: ok
else No heartbeat
Indexer->>NodeA: (error "no heartbeat", stream closed)
end
and Peer B publishes its PeerRecord
NodeB->>Indexer: TempStream /opencloud/record/publish/1.0
NodeB->>Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})
Indexer->>Indexer: Verify sig_B
Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists
alt Active heartbeat for B
Indexer->>Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
Indexer->>DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
DHT-->>Indexer: ok
else No heartbeat
Indexer->>NodeB: (error "no heartbeat", stream closed)
end
end
Note over DHT: DHT now contains<br/>"/node/DID_A" and "/node/DID_B"


@@ -0,0 +1,43 @@
@startuml
title Indexer — Peer A publishes, Peer B publishes (handleNodePublish → DHT)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
note over NodeA: After claimInfo or TTL refresh
par Peer A publishes its PeerRecord
NodeA -> Indexer: TempStream /opencloud/record/publish/1.0
NodeA -> Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})
Indexer -> Indexer: Verify sig_A (rebuilds minimal rec, pubKey_A.Verify)
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists
alt Active heartbeat for A
Indexer -> Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
DHT --> Indexer: ok
else No heartbeat
Indexer -> NodeA: (error "no heartbeat", stream closed)
end
else Peer B publishes its PeerRecord
NodeB -> Indexer: TempStream /opencloud/record/publish/1.0
NodeB -> Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})
Indexer -> Indexer: Verify sig_B
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_B] existe
alt Heartbeat actif pour B
Indexer -> Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
DHT --> Indexer: ok
else Pas de heartbeat
Indexer -> NodeB: (erreur "no heartbeat", stream close)
end
end par
note over DHT: DHT contient maintenant\n"/node/DID_A" et "/node/DID_B"
@enduml


@@ -0,0 +1,49 @@
sequenceDiagram
title Indexer — Peer A resolves Peer B (GetPeerRecord + handleNodeGet)
participant NATSA as NATS A
participant DBA as DB Peer A (oc-lib)
participant NodeA as Node A
participant Indexer as IndexerService (shared)
participant DHT as DHT Kademlia
participant NATSA2 as NATS A (return)
Note over NodeA: Triggered by: NATS PB_SEARCH PEER<br/>or the SubscribeToSearch callback
NodeA->>DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
DBA-->>NodeA: Local Peer B (if known) → resolves DID_B + PeerID_B<br/>otherwise uses the raw value
loop For each StaticIndexer
NodeA->>Indexer: TempStream /opencloud/record/get/1.0
NodeA->>Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})
Indexer->>Indexer: key = "/node/" + DID_B
Indexer->>DHT: SearchValue(ctx 10s, "/node/"+DID_B)
DHT-->>Indexer: channel of bytes (PeerRecord B)
loop For each DHT result
Indexer->>Indexer: Unmarshal → PeerRecord B
alt PeerRecord.PeerID == PeerID_B
Indexer->>Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
Indexer->>Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
end
end
Indexer->>NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end
loop For each returned PeerRecord
NodeA->>NodeA: rec.Verify() → validates B's signature
NodeA->>NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)
alt ourDID_A == DID_B (it is our own entry)
Note over NodeA: Republish to refresh the TTL
NodeA->>Indexer: publishPeerRecord(rec) [refresh 2 min]
end
NodeA->>NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,<br/>SearchAttr:"peer_id"})
NATSA2->>DBA: Upsert Peer B into DB A
DBA-->>NATSA2: ok
end
NodeA-->>NodeA: []*peer.Peer → [Peer B]


@@ -0,0 +1,51 @@
@startuml
title Indexer — Peer A resolves Peer B (GetPeerRecord + handleNodeGet)
participant "NATS A" as NATSA
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
participant "NATS A (return)" as NATSA2
note over NodeA: Triggered by: NATS PB_SEARCH PEER\nor the SubscribeToSearch callback
NodeA -> DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
DBA --> NodeA: Local Peer B (if known) → resolves DID_B + PeerID_B\notherwise uses the raw value
loop For each StaticIndexer
NodeA -> Indexer: TempStream /opencloud/record/get/1.0
NodeA -> Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})
Indexer -> Indexer: key = "/node/" + DID_B
Indexer -> DHT: SearchValue(ctx 10s, "/node/"+DID_B)
DHT --> Indexer: channel of bytes (PeerRecord B)
loop For each DHT result
Indexer -> Indexer: Unmarshal → PeerRecord B
alt PeerRecord.PeerID == PeerID_B
Indexer -> Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
Indexer -> Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
end
end
Indexer -> NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end
loop For each returned PeerRecord
NodeA -> NodeA: rec.Verify() → validates B's signature
NodeA -> NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)
alt ourDID_A == DID_B (it is our own entry)
note over NodeA: Republish to refresh the TTL
NodeA -> Indexer: publishPeerRecord(rec) [refresh 2 min]
end
NodeA -> NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,\nSearchAttr:"peer_id"})
NATSA2 -> DBA: Upsert Peer B into DB A
DBA --> NATSA2: ok
end
NodeA --> NodeA: []*peer.Peer → [Peer B]
@enduml


@@ -0,0 +1,39 @@
sequenceDiagram
title Native Indexer — Registering an Indexer with the Native
participant IndexerA as Indexer A
participant IndexerB as Indexer B
participant Native as Native Indexer (shared)
participant DHT as DHT Kademlia
participant PubSub as GossipSub (oc-indexer-registry)
Note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)
par Indexer A registers
IndexerA->>IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
IndexerA->>Native: NewStream /opencloud/native/subscribe/1.0
IndexerA->>Native: json.Encode(IndexerRegistration A)
Native->>Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
Native->>DHT: PutValue("/indexer/"+PeerID_A, entry A)
DHT-->>Native: ok
Native->>Native: liveIndexers[PeerID_A] = entry A
Native->>Native: knownPeerIDs[PeerID_A] = {}
Native->>PubSub: topic.Publish([]byte(PeerID_A))
Note over PubSub: Gossiped to the other Natives<br/>→ they add PeerID_A to knownPeerIDs<br/>→ DHT refresh at the next 30s tick
IndexerA->>Native: stream.Close()
and Indexer B registers
IndexerB->>IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
IndexerB->>Native: NewStream /opencloud/native/subscribe/1.0
IndexerB->>Native: json.Encode(IndexerRegistration B)
Native->>Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
Native->>DHT: PutValue("/indexer/"+PeerID_B, entry B)
DHT-->>Native: ok
Native->>Native: liveIndexers[PeerID_B] = entry B
Native->>PubSub: topic.Publish([]byte(PeerID_B))
IndexerB->>Native: stream.Close()
end
Note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}


@@ -0,0 +1,41 @@
@startuml
title Native Indexer — Registering an Indexer with the Native
participant "Indexer A" as IndexerA
participant "Indexer B" as IndexerB
participant "Native Indexer (shared)" as Native
participant "DHT Kademlia" as DHT
participant "GossipSub (oc-indexer-registry)" as PubSub
note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)
par Indexer A registers
IndexerA -> IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
IndexerA -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerA -> Native: json.Encode(IndexerRegistration A)
Native -> Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
Native -> DHT: PutValue("/indexer/"+PeerID_A, entry A)
DHT --> Native: ok
Native -> Native: liveIndexers[PeerID_A] = entry A
Native -> Native: knownPeerIDs[PeerID_A] = {}
Native -> PubSub: topic.Publish([]byte(PeerID_A))
note over PubSub: Gossiped to the other Natives\n→ they add PeerID_A to knownPeerIDs\n→ DHT refresh at the next 30s tick
IndexerA -> Native: stream.Close()
else Indexer B registers
IndexerB -> IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
IndexerB -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerB -> Native: json.Encode(IndexerRegistration B)
Native -> Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
Native -> DHT: PutValue("/indexer/"+PeerID_B, entry B)
DHT --> Native: ok
Native -> Native: liveIndexers[PeerID_B] = entry B
Native -> PubSub: topic.Publish([]byte(PeerID_B))
IndexerB -> Native: stream.Close()
end par
note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}
@enduml


@@ -0,0 +1,60 @@
sequenceDiagram
title Native — ConnectToNatives + Consensus (Peer A bootstrap)
participant NodeA as Node A
participant Native1 as Native #1 (primary)
participant Native2 as Native #2
participant NativeN as Native #N
participant DHT as DHT Kademlia
Note over NodeA: NativeIndexerAddresses configured<br/>Called during InitNode → ConnectToIndexers
NodeA->>NodeA: Parse NativeIndexerAddresses → StaticNatives
NodeA->>Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
NodeA->>Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
%% Step 1: fetch an initial pool
NodeA->>Native1: Connect + NewStream /opencloud/native/indexers/1.0
NodeA->>Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})
Native1->>Native1: reachableLiveIndexers()
Note over Native1: Filters liveIndexers by TTL<br/>pings each candidate (PeerIsAlive)
alt No indexer known to Native1
Native1->>Native1: selfDelegate(NodeA.PeerID, resp)
Note over Native1: IsSelfFallback=true<br/>Indexers=[native1 addr]
Native1->>NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
NodeA->>NodeA: StaticIndexers[native1] = native1
Note over NodeA: No consensus — native1 used directly as an indexer
else Indexers available
Native1->>NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}
%% Step 2: consensus
Note over NodeA: clientSideConsensus(candidates)
par Parallel consensus requests
NodeA->>Native1: NewStream /opencloud/native/consensus/1.0
NodeA->>Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native1->>Native1: Cross-check against its own liveIndexers
Native1->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
and
NodeA->>Native2: NewStream /opencloud/native/consensus/1.0
NodeA->>Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native2->>Native2: Cross-check against its own liveIndexers
Native2->>NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
and
NodeA->>NativeN: NewStream /opencloud/native/consensus/1.0
NodeA->>NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
NativeN->>NativeN: Cross-check against its own liveIndexers
NativeN->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
end
Note over NodeA: Aggregates the votes (4s timeout)<br/>Addr_A → 3/3 votes → confirmed ✓<br/>Addr_B → 2/3 votes → confirmed ✓
alt confirmed < maxIndexer && suggestions available
Note over NodeA: Round 2 — re-challenge with the suggestions
NodeA->>NodeA: clientSideConsensus(confirmed + sample(suggestions))
end
NodeA->>NodeA: StaticIndexers = addresses confirmed by majority
end


@@ -0,0 +1,62 @@
@startuml
title Native — ConnectToNatives + Consensus (Peer A bootstrap)
participant "Node A" as NodeA
participant "Native #1 (primary)" as Native1
participant "Native #2" as Native2
participant "Native #N" as NativeN
participant "DHT Kademlia" as DHT
note over NodeA: NativeIndexerAddresses configured\nCalled during InitNode → ConnectToIndexers
NodeA -> NodeA: Parse NativeIndexerAddresses → StaticNatives
NodeA -> Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
NodeA -> Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
' Step 1: fetch an initial pool
NodeA -> Native1: Connect + NewStream /opencloud/native/indexers/1.0
NodeA -> Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})
Native1 -> Native1: reachableLiveIndexers()
note over Native1: Filters liveIndexers by TTL\npings each candidate (PeerIsAlive)
alt No indexer known to Native1
Native1 -> Native1: selfDelegate(NodeA.PeerID, resp)
note over Native1: IsSelfFallback=true\nIndexers=[native1 addr]
Native1 -> NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
NodeA -> NodeA: StaticIndexers[native1] = native1
note over NodeA: No consensus — native1 used directly as an indexer
else Indexers available
Native1 -> NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}
' Step 2: consensus
note over NodeA: clientSideConsensus(candidates)
par Parallel consensus requests
NodeA -> Native1: NewStream /opencloud/native/consensus/1.0
NodeA -> Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native1 -> Native1: Cross-check against its own liveIndexers
Native1 -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
else
NodeA -> Native2: NewStream /opencloud/native/consensus/1.0
NodeA -> Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native2 -> Native2: Cross-check against its own liveIndexers
Native2 -> NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
else
NodeA -> NativeN: NewStream /opencloud/native/consensus/1.0
NodeA -> NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
NativeN -> NativeN: Cross-check against its own liveIndexers
NativeN -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
end par
note over NodeA: Aggregates the votes (4s timeout)\nAddr_A → 3/3 votes → confirmed ✓\nAddr_B → 2/3 votes → confirmed ✓
alt confirmed < maxIndexer && suggestions available
note over NodeA: Round 2 — re-challenge with the suggestions
NodeA -> NodeA: clientSideConsensus(confirmed + sample(suggestions))
end
NodeA -> NodeA: StaticIndexers = addresses confirmed by majority
end
@enduml


@@ -0,0 +1,49 @@
sequenceDiagram
title NATS — CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream
participant AppA as App Peer A (oc-api)
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant StreamB as StreamService B
participant DBA as DB Peer A (oc-lib)
Note over AppA: Peer B has just been discovered<br/>(via an indexer or manually)
AppA->>NATSA: Publish(CREATE_RESOURCE, {<br/> FromApp:"oc-api",<br/> Datatype:PEER,<br/> Payload: Peer B {StreamAddress_B, Relation:PARTNER}<br/>})
NATSA->>NodeA: ListenNATS callback → CREATE_RESOURCE
NodeA->>NodeA: resp.FromApp == "oc-discovery" ? → No, continue
NodeA->>NodeA: json.Unmarshal(payload) → peer.Peer B
NodeA->>NodeA: pp.AddrInfoFromString(B.StreamAddress)
Note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}
NodeA->>StreamA: Mu.Lock()
alt peer B.Relation == PARTNER
NodeA->>StreamA: ConnectToPartner(B.StreamAddress)
StreamA->>StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
StreamA->>NodeB: Connect (libp2p)
StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})
NodeB->>StreamB: HandlePartnerHeartbeat(stream)
StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
StreamB->>StreamA: Echo(payload)
StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
Note over StreamA,StreamB: Long-lived partner stream established<br/>in both directions
else peer B.Relation != PARTNER (revocation / blacklist)
Note over NodeA: Remove every stream to Peer B
loop For each protocol in Streams
NodeA->>StreamA: streams[proto][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(streams[proto], PeerID_B)
end
end
NodeA->>StreamA: Mu.Unlock()
NodeA->>DBA: (no direct write here — handled by the source app)


@@ -0,0 +1,50 @@
@startuml
title NATS — CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream
participant "App Peer A (oc-api)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer A (oc-lib)" as DBA
note over AppA: Peer B has just been discovered\n(via an indexer or manually)
AppA -> NATSA: Publish(CREATE_RESOURCE, {\n FromApp:"oc-api",\n Datatype:PEER,\n Payload: Peer B {StreamAddress_B, Relation:PARTNER}\n})
NATSA -> NodeA: ListenNATS callback → CREATE_RESOURCE
NodeA -> NodeA: resp.FromApp == "oc-discovery" ? → No, continue
NodeA -> NodeA: json.Unmarshal(payload) → peer.Peer B
NodeA -> NodeA: pp.AddrInfoFromString(B.StreamAddress)
note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}
NodeA -> StreamA: Mu.Lock()
alt peer B.Relation == PARTNER
NodeA -> StreamA: ConnectToPartner(B.StreamAddress)
StreamA -> StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nin both directions
else peer B.Relation != PARTNER (revocation / blacklist)
note over NodeA: Remove every stream to Peer B
loop For each protocol in Streams
NodeA -> StreamA: streams[proto][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(streams[proto], PeerID_B)
end
end
NodeA -> StreamA: Mu.Unlock()
NodeA -> DBA: (no direct write here — handled by the source app)
@enduml


@@ -0,0 +1,66 @@
sequenceDiagram
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant NATSB as NATS B
participant DBB as DB Peer B (oc-lib)
AppA->>NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA->>NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA->>NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA->>NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}
alt Action == PB_DELETE
NodeA->>StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
StreamA->>StreamA: searchPeer(PARTNER) → [Peer B, ...]
StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
Note over NodeB: /opencloud/resource/delete/1.0
NodeB->>NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
NodeB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Delete the resource in DB B
else Action == PB_UPDATE (via ProtocolUpdateResource)
NodeA->>StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
StreamA->>NodeB: write → /opencloud/resource/update/1.0
NodeB->>NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Upsert the resource into DB B
else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
NodeA->>NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
loop For each target peer_id
NodeA->>StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
StreamA->>NodeB: write → /opencloud/resource/considers/1.0
NodeB->>NodeB: passConsidering(evt)
NodeB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
NATSB->>DBB: (handled by oc-workflow on NATS B)
end
else Action == PB_PLANNER (broadcast)
NodeA->>NodeA: Unmarshal → {peer_id: nil, ...payload}
loop For each open ProtocolSendPlanner stream
NodeA->>StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
StreamA->>NodeB: write → /opencloud/resource/planner/1.0
end
else Action == PB_CLOSE_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
else Action == PB_SEARCH + DataType == PEER
NodeA->>NodeA: Unmarshal → {search: "..."}
NodeA->>NodeA: GetPeerRecord(ctx, search)
Note over NodeA: Resolution via DB A + Indexer + DHT
NodeA->>NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
NATSA->>NATSA: (AppA receives the result)
else Action == PB_SEARCH + another DataType
NodeA->>NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
NodeA->>NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
Note over NodeA: See diagrams 10 and 11
end


@@ -0,0 +1,68 @@
@startuml
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
AppA -> NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA -> NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA -> NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA -> NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}
alt Action == PB_DELETE
NodeA -> StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
StreamA -> StreamA: searchPeer(PARTNER) → [Peer B, ...]
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
NodeB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete the resource in DB B
else Action == PB_UPDATE (via ProtocolUpdateResource)
NodeA -> StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
StreamA -> NodeB: write → /opencloud/resource/update/1.0
NodeB -> NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Upsert the resource into DB B
else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
NodeA -> NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
loop For each target peer_id
NodeA -> StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
StreamA -> NodeB: write → /opencloud/resource/considers/1.0
NodeB -> NodeB: passConsidering(evt)
NodeB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
NATSB -> DBB: (handled by oc-workflow on NATS B)
end
else Action == PB_PLANNER (broadcast)
NodeA -> NodeA: Unmarshal → {peer_id: nil, ...payload}
loop For each open ProtocolSendPlanner stream
NodeA -> StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
StreamA -> NodeB: write → /opencloud/resource/planner/1.0
end
else Action == PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
else Action == PB_SEARCH + DataType == PEER
NodeA -> NodeA: Unmarshal → {search: "..."}
NodeA -> NodeA: GetPeerRecord(ctx, search)
note over NodeA: Resolution via DB A + Indexer + DHT
NodeA -> NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
NATSA -> NATSA: (AppA receives the result)
else Action == PB_SEARCH + another DataType
NodeA -> NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
NodeA -> NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
note over NodeA: See diagrams 10 and 11
end
@enduml


@@ -0,0 +1,52 @@
sequenceDiagram
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B responds
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant GossipSub as GossipSub libp2p (mesh)
participant NodeB as Node B
participant PubSubB as PubSubService B
participant DBB as DB Peer B (oc-lib)
participant StreamB as StreamService B
participant StreamA as StreamService A
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "all")
NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA->>PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA->>PubSubA: GenerateNodeID() → from = DID_A
PubSubA->>PubSubA: priv_A.Sign(event body) → sig
PubSubA->>PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}
PubSubA->>GossipSub: topic.Join("search")
PubSubA->>GossipSub: topic.Publish(ctx, json(Event))
GossipSub-->>NodeB: Message propagated (gossip mesh)
NodeB->>PubSubB: subscribeEvents listens on topic "search#"
PubSubB->>PubSubB: json.Unmarshal → Event{From: DID_A}
PubSubB->>NodeB: GetPeerRecord(ctx, DID_A)
Note over NodeB: Resolves Peer A via DB B or the Indexer
NodeB-->>PubSubB: Peer A {PublicKey_A, Relation, ...}
PubSubB->>PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB->>PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)
PubSubB->>StreamB: SendResponse(Peer A, evt)
StreamB->>DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB-->>StreamB: [Resource1, Resource2, ...]
loop For each matched resource
StreamB->>StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamB->>StreamA: NewStream /opencloud/resource/search/1.0
StreamB->>StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end
StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA->>StreamA: retrieveResponse(evt)
StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA->>AppA: Search results from Peer B


@@ -0,0 +1,54 @@
@startuml
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B responds
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "GossipSub libp2p (mesh)" as GossipSub
participant "Node B" as NodeB
participant "PubSubService B" as PubSubB
participant "DB Peer B (oc-lib)" as DBB
participant "StreamService B" as StreamB
participant "StreamService A" as StreamA
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "all")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA -> PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA -> PubSubA: GenerateNodeID() → from = DID_A
PubSubA -> PubSubA: priv_A.Sign(event body) → sig
PubSubA -> PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}
PubSubA -> GossipSub: topic.Join("search")
PubSubA -> GossipSub: topic.Publish(ctx, json(Event))
GossipSub --> NodeB: Message propagated (gossip mesh)
NodeB -> PubSubB: subscribeEvents listens on topic "search#"
PubSubB -> PubSubB: json.Unmarshal → Event{From: DID_A}
PubSubB -> NodeB: GetPeerRecord(ctx, DID_A)
note over NodeB: Resolves Peer A via DB B or the Indexer
NodeB --> PubSubB: Peer A {PublicKey_A, Relation, ...}
PubSubB -> PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB -> PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)
PubSubB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB --> StreamB: [Resource1, Resource2, ...]
loop For each matched resource
StreamB -> StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamB -> StreamA: NewStream /opencloud/resource/search/1.0
StreamB -> StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Search results from Peer B
@enduml


@@ -0,0 +1,52 @@
sequenceDiagram
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant StreamA as StreamService A
participant DBA as DB Peer A (oc-lib)
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")
PubSubA->>StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]
loop For each partner peer (Peer B)
StreamA->>StreamA: json.Marshal({search:"gpu"}) → payload
StreamA->>StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
StreamA->>NodeB: TempStream /opencloud/resource/search/1.0
StreamA->>NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})
NodeB->>StreamB: HandleResponse(stream) → readLoop
StreamB->>StreamB: handleEvent(ProtocolSearchResource, evt)
StreamB->>StreamB: handleEventFromPartner(evt, ProtocolSearchResource)
alt evt.DataType == -1 (all resources)
StreamB->>DBB: Search(PEER, evt.From=DID_A)
Note over StreamB: Local resolution or via GetPeerRecord
StreamB->>StreamB: SendResponse(Peer A, evt)
StreamB->>DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
DBB-->>StreamB: [Resource1, Resource2, ...]
else evt.DataType specified
StreamB->>DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
DBB-->>StreamB: [Resource1, ...]
end
loop For each resource
StreamB->>StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA->>StreamA: retrieveResponse(evt)
StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA->>AppA: Result from Peer B
end
end
Note over NATSA,DBA: Optional: App A persists<br/>the discovered resources in DB A


@@ -0,0 +1,54 @@
@startuml
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "StreamService A" as StreamA
participant "DB Peer A (oc-lib)" as DBA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")
PubSubA -> StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
loop For each partner peer (Peer B)
StreamA -> StreamA: json.Marshal({search:"gpu"}) → payload
StreamA -> StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
StreamA -> NodeB: TempStream /opencloud/resource/search/1.0
StreamA -> NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})
NodeB -> StreamB: HandleResponse(stream) → readLoop
StreamB -> StreamB: handleEvent(ProtocolSearchResource, evt)
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolSearchResource)
alt evt.DataType == -1 (all resources)
StreamB -> DBB: Search(PEER, evt.From=DID_A)
note over StreamB: Local resolution or via GetPeerRecord
StreamB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
DBB --> StreamB: [Resource1, Resource2, ...]
else evt.DataType specified
StreamB -> DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
DBB --> StreamB: [Resource1, ...]
end
loop For each resource
StreamB -> StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Result from Peer B
end
end
note over NATSA,DBA: Optional: App A persists\nthe discovered resources in DB A
@enduml


@@ -0,0 +1,58 @@
sequenceDiagram
title Stream — Partner Heartbeat et propagation CRUD Pair A ↔ Pair B
participant DBA as DB Pair A (oc-lib)
participant StreamA as StreamService A
participant NodeA as Node A
participant NodeB as Node B
participant StreamB as StreamService B
participant NATSB as NATS B
participant DBB as DB Pair B (oc-lib)
participant NATSA as NATS A
Note over StreamA: Démarrage → connectToPartners()
StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]
StreamA->>NodeB: Connect (libp2p)
StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})
NodeB->>StreamB: HandlePartnerHeartbeat(stream)
StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
StreamB->>StreamA: Echo(payload)
StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
Note over StreamA,StreamB: Long-lived partner stream established<br/>GC every 8s (StreamService A)<br/>GC every 30s (StreamService B)
Note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}
NATSA->>NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA->>StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)
alt dt == PEER (partner relation update)
StreamA->>StreamA: json.Unmarshal → peer.Peer B updated
alt B.Relation == PARTNER
StreamA->>NodeB: ConnectToPartner(B.StreamAddress)
Note over StreamA,NodeB: Heartbeat reconnection if the relation is upgraded
else B.Relation != PARTNER
loop All protocols
StreamA->>StreamA: delete(streams[proto][PeerID_B])
StreamA->>NodeB: (streams closed)
end
end
else dt != PEER (ordinary resource)
StreamA->>DBA: Search(PEER, PARTNER) → [Peer B, ...]
loop For each partner protocol (Create/Update/Delete)
StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
Note over NodeB: /opencloud/resource/delete/1.0
NodeB->>StreamB: HandleResponse → readLoop
StreamB->>StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
StreamB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Delete the resource in DB B
end
end


@@ -0,0 +1,60 @@
@startuml
title Stream — Partner heartbeat and CRUD propagation, Peer A ↔ Peer B
participant "DB Peer A (oc-lib)" as DBA
participant "StreamService A" as StreamA
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS A" as NATSA
note over StreamA: Startup → connectToPartners()
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nGC every 8s (StreamService A)\nGC every 30s (StreamService B)
note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}
NATSA -> NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA -> StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)
alt dt == PEER (partner relation update)
StreamA -> StreamA: json.Unmarshal → peer.Peer B updated
alt B.Relation == PARTNER
StreamA -> NodeB: ConnectToPartner(B.StreamAddress)
note over StreamA,NodeB: Heartbeat reconnection if the relation is upgraded
else B.Relation != PARTNER
loop All protocols
StreamA -> StreamA: delete(streams[proto][PeerID_B])
StreamA -> NodeB: (streams closed)
end
end
else dt != PEER (ordinary resource)
StreamA -> DBA: Search(PEER, PARTNER) → [Peer B, ...]
loop For each partner protocol (Create/Update/Delete)
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> StreamB: HandleResponse → readLoop
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
StreamB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete the resource in DB B
end
end
@enduml


@@ -0,0 +1,49 @@
sequenceDiagram
title Stream — Planner session: Peer A requests Peer B's plan
participant AppA as App Peer A (oc-booking)
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)
participant NATSB as NATS B
%% Open planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA->>NodeA: ListenNATS → PB_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA->>StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
Note over StreamA: WaitResponse=true, TTL=24h<br/>Long-lived stream to Peer B
StreamA->>NodeB: TempStream /opencloud/resource/planner/1.0
StreamA->>NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})
NodeB->>StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB->>StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB->>StreamB: sendPlanner(evt)
alt evt.Payload empty (initial request)
StreamB->>DBB: planner.GenerateShallow(AdminRequest)
DBB-->>StreamB: plan (Peer B's shallow booking plan)
StreamB->>StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
StreamA->>NodeA: json.Encode(Event{plan from B})
NodeA->>NATSA: (forwarded to AppA via SEARCH_EVENT or a PLANNER event)
NATSA->>AppA: Peer B's plan
else evt.Payload non-empty (planner update)
StreamB->>StreamB: m["peer_id"] = evt.From (DID_A)
StreamB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
NATSB->>DBB: (oc-booking processes the plan on NATS B)
end
%% Close planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA->>NodeA: ListenNATS → PB_CLOSE_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA->>StreamA: Mu.Lock()
NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA->>StreamA: Mu.Unlock()
Note over StreamA,NodeB: Planner stream closed — session ended


@@ -0,0 +1,51 @@
@startuml
title Stream — Planner session: Peer A requests Peer B's plan
participant "App Peer A (oc-booking)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS B" as NATSB
' Open planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA -> NodeA: ListenNATS → PB_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA -> StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
note over StreamA: WaitResponse=true, TTL=24h\nLong-lived stream to Peer B
StreamA -> NodeB: TempStream /opencloud/resource/planner/1.0
StreamA -> NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})
NodeB -> StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB -> StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB -> StreamB: sendPlanner(evt)
alt evt.Payload empty (initial request)
StreamB -> DBB: planner.GenerateShallow(AdminRequest)
DBB --> StreamB: plan (Peer B's shallow booking plan)
StreamB -> StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
StreamA -> NodeA: json.Encode(Event{plan from B})
NodeA -> NATSA: (forwarded to AppA via SEARCH_EVENT or a PLANNER event)
NATSA -> AppA: Peer B's plan
else evt.Payload non-empty (planner update)
StreamB -> StreamB: m["peer_id"] = evt.From (DID_A)
StreamB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
NATSB -> DBB: (oc-booking processes the plan on NATS B)
end
' Close planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA -> NodeA: ListenNATS → PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Mu.Lock()
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA -> StreamA: Mu.Unlock()
note over StreamA,NodeB: Planner stream closed — session ended
@enduml
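Closing the session is just a locked close-and-delete on the per-protocol stream map. A stdlib-only sketch of the PB_CLOSE_PLANNER sequence (the map shape and names mirror the diagram but are assumptions):

```go
package main

import (
	"fmt"
	"sync"
)

// session stands in for the long-lived planner stream; Close would close the
// underlying libp2p stream in the real service.
type session struct{ closed bool }

func (s *session) Close() { s.closed = true }

type streamService struct {
	Mu      sync.Mutex
	Streams map[string]map[string]*session // protocol → peerID → stream
}

// closePlanner mirrors the PB_CLOSE_PLANNER handler: lock, close, delete.
func (ss *streamService) closePlanner(peerID string) {
	ss.Mu.Lock()
	defer ss.Mu.Unlock()
	if s, ok := ss.Streams["ProtocolSendPlanner"][peerID]; ok {
		s.Close()
		delete(ss.Streams["ProtocolSendPlanner"], peerID)
	}
}

func main() {
	ss := &streamService{Streams: map[string]map[string]*session{
		"ProtocolSendPlanner": {"PeerID_B": {}},
	}}
	ss.closePlanner("PeerID_B")
	_, still := ss.Streams["ProtocolSendPlanner"]["PeerID_B"]
	fmt.Println("session open:", still)
}
```

Deleting the map entry under the same lock as the Close prevents a concurrent publish from writing to a half-closed stream.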


@@ -0,0 +1,59 @@
sequenceDiagram
title Native Indexer — Background loops (offload, DHT refresh, stream GC)
participant IndexerA as Indexer A (registered)
participant IndexerB as Indexer B (registered)
participant Native as Native Indexer
participant DHT as DHT Kademlia
participant NodeA as Node A (responsible peer)
Note over Native: runOffloadLoop — every 30s
loop Every 30s
Native->>Native: len(responsiblePeers) > 0 ?
Note over Native: responsiblePeers = peers for which<br/>the native selfDelegated (no indexer available)
alt Responsible peers exist (e.g. Node A)
Native->>Native: reachableLiveIndexers()
Note over Native: Filters liveIndexers by TTL<br/>pings PeerIsAlive for each candidate
alt Indexers A and B now reachable
Native->>Native: responsiblePeers = {} (releases Node A and the others)
Note over Native: Node A will reconnect<br/>on the next ConnectToNatives
else Still no indexer
Note over Native: Node A stays under the native's responsibility
end
end
end
Note over Native: refreshIndexersFromDHT — every 30s
loop Every 30s
Native->>Native: Collect all knownPeerIDs<br/>= {PeerID_A, PeerID_B, ...}
loop For each known PeerID
Native->>Native: liveIndexers[PeerID] still fresh?
alt Entry missing or expired
Native->>DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
DHT-->>Native: channel of bytes
loop For each DHT result
Native->>Native: Unmarshal → liveIndexerEntry
Native->>Native: Keep the best (most recent valid ExpiresAt)
end
Native->>Native: liveIndexers[PeerID] = best entry
Note over Native: "native: refreshed indexer from DHT"
end
end
end
Note over Native: LongLivedStreamRecordedService GC — every 30s
loop Every 30s
Native->>Native: gc() — lock StreamRecords[Heartbeat]
loop For each StreamRecord (Indexer A, B, ...)
Native->>Native: now > rec.Expiry ?<br/>OR timeSince(LastSeen) > 2×remaining TTL ?
alt Peer stale (e.g. Indexer B gone)
Native->>Native: Remove Indexer B from ALL protocol maps
Note over Native: Heartbeat stream closed<br/>liveIndexers[PeerID_B] will expire naturally
end
end
end
Note over IndexerA: Indexer A keeps heartbeating normally<br/>and stays in StreamRecords + liveIndexers


@@ -0,0 +1,61 @@
@startuml
title Native Indexer — Background loops (offload, DHT refresh, stream GC)
participant "Indexer A (registered)" as IndexerA
participant "Indexer B (registered)" as IndexerB
participant "Native Indexer" as Native
participant "DHT Kademlia" as DHT
participant "Node A (responsible peer)" as NodeA
note over Native: runOffloadLoop — every 30s
loop Every 30s
Native -> Native: len(responsiblePeers) > 0 ?
note over Native: responsiblePeers = peers for which\nthe native selfDelegated (no indexer available)
alt Responsible peers exist (e.g. Node A)
Native -> Native: reachableLiveIndexers()
note over Native: Filters liveIndexers by TTL\npings PeerIsAlive for each candidate
alt Indexers A and B now reachable
Native -> Native: responsiblePeers = {} (releases Node A and the others)
note over Native: Node A will reconnect\non the next ConnectToNatives
else Still no indexer
note over Native: Node A stays under the native's responsibility
end
end
end
note over Native: refreshIndexersFromDHT — every 30s
loop Every 30s
Native -> Native: Collect all knownPeerIDs\n= {PeerID_A, PeerID_B, ...}
loop For each known PeerID
Native -> Native: liveIndexers[PeerID] still fresh?
alt Entry missing or expired
Native -> DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
DHT --> Native: channel of bytes
loop For each DHT result
Native -> Native: Unmarshal → liveIndexerEntry
Native -> Native: Keep the best (most recent valid ExpiresAt)
end
Native -> Native: liveIndexers[PeerID] = best entry
note over Native: "native: refreshed indexer from DHT"
end
end
end
note over Native: LongLivedStreamRecordedService GC — every 30s
loop Every 30s
Native -> Native: gc() — lock StreamRecords[Heartbeat]
loop For each StreamRecord (Indexer A, B, ...)
Native -> Native: now > rec.Expiry ?\nOR timeSince(LastSeen) > 2×remaining TTL ?
alt Peer stale (e.g. Indexer B gone)
Native -> Native: Remove Indexer B from ALL protocol maps
note over Native: Heartbeat stream closed\nliveIndexers[PeerID_B] will expire naturally
end
end
end
note over IndexerA: Indexer A keeps heartbeating normally\nand stays in StreamRecords + liveIndexers
@enduml

docs/diagrams/README.md Normal file

@@ -0,0 +1,43 @@
# OC-Discovery — Sequence Diagrams
All `.mmd` files use the [Mermaid](https://mermaid.js.org/) format.
They can be rendered in VS Code (Mermaid Preview extension), IntelliJ, or on [mermaid.live](https://mermaid.live).
## Diagram overview
| File | Description |
|---------|-------------|
| `01_node_init.mmd` | Full Node initialization (libp2p host, GossipSub, indexers, StreamService, PubSubService, NATS) |
| `02_node_claim.mmd` | Node registration with the indexers (`claimInfo` + `publishPeerRecord`) |
| `03_indexer_heartbeat.mmd` | Heartbeat protocol with quality-score computation (bandwidth, uptime, diversity) |
| `04_indexer_publish.mmd` | Publishing a `PeerRecord` to the indexer → DHT |
| `05_indexer_get.mmd` | Resolving a peer through the indexer (`GetPeerRecord` + `handleNodeGet` + DHT) |
| `06_native_registration.mmd` | Indexer registration with a Native Indexer + PubSub gossip |
| `07_native_get_consensus.mmd` | `ConnectToNatives`: indexer pool + consensus protocol (majority vote) |
| `08_nats_create_resource.mmd` | NATS handler `CREATE_RESOURCE`: connecting/disconnecting a partner |
| `09_nats_propagation.mmd` | NATS handler `PROPALGATION_EVENT`: delete, considers, planner, search |
| `10_pubsub_search.mmd` | Global gossip search (type `"all"`) via GossipSub |
| `11_stream_search.mmd` | Direct stream search (type `"known"` or `"partner"`) |
| `12_partner_heartbeat.mmd` | Partner heartbeat + CRUD propagation to partners |
| `13_planner_flow.mmd` | Planner session (open, exchange, close) |
| `14_native_offload_gc.mmd` | Native Indexer background loops (offload, DHT refresh, GC) |
## libp2p protocols used
| Protocol | Description |
|-----------|-------------|
| `/opencloud/heartbeat/1.0` | Node → indexer heartbeat (long-lived) |
| `/opencloud/heartbeat/indexer/1.0` | Indexer → native heartbeat (long-lived) |
| `/opencloud/resource/heartbeat/partner/1.0` | Node ↔ partner heartbeat (long-lived) |
| `/opencloud/record/publish/1.0` | `PeerRecord` publication to an indexer |
| `/opencloud/record/get/1.0` | `GetPeerRecord` request to an indexer |
| `/opencloud/native/subscribe/1.0` | Indexer registration with the native |
| `/opencloud/native/indexers/1.0` | Indexer-pool request to the native |
| `/opencloud/native/consensus/1.0` | Indexer-pool validation (consensus) |
| `/opencloud/resource/search/1.0` | Resource search between peers |
| `/opencloud/resource/create/1.0` | Resource-creation propagation to a partner |
| `/opencloud/resource/update/1.0` | Resource-update propagation to a partner |
| `/opencloud/resource/delete/1.0` | Resource-deletion propagation to a partner |
| `/opencloud/resource/planner/1.0` | Planner session (booking) |
| `/opencloud/resource/verify/1.0` | Resource signature verification |
| `/opencloud/resource/considers/1.0` | Forwarding an execution "considers" |
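Each protocol ID in the table above is registered as a stream handler on the libp2p host. A stdlib-only sketch of such a dispatch table (the handler bodies and return strings are placeholders, not the real handlers):

```go
package main

import "fmt"

// handler is a stand-in for a libp2p stream handler; in the real service each
// protocol ID below is bound to a handler function on the host.
type handler func() string

// handlers maps a few protocol IDs from the table to placeholder handlers.
var handlers = map[string]handler{
	"/opencloud/heartbeat/1.0":        func() string { return "node→indexer heartbeat" },
	"/opencloud/record/publish/1.0":   func() string { return "publish PeerRecord" },
	"/opencloud/resource/search/1.0":  func() string { return "resource search" },
	"/opencloud/resource/planner/1.0": func() string { return "planner session" },
}

// dispatch routes an incoming stream by its negotiated protocol ID.
func dispatch(proto string) (string, bool) {
	h, ok := handlers[proto]
	if !ok {
		return "", false // unknown protocol: the stream would be reset
	}
	return h(), true
}

func main() {
	out, ok := dispatch("/opencloud/resource/search/1.0")
	fmt.Println(ok, out)
}
```

Versioning the protocol ID (`/1.0`) lets old and new handlers coexist on one host during upgrades.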

go.mod

@@ -1,28 +1,34 @@
module oc-discovery
go 1.24.6
go 1.25.0
require (
cloud.o-forge.io/core/oc-lib v0.0.0-20260203150531-ef916fe2d995
github.com/beego/beego v1.12.13
github.com/beego/beego/v2 v2.3.8
github.com/go-redis/redis v6.15.9+incompatible
github.com/smartystreets/goconvey v1.7.2
github.com/tidwall/gjson v1.17.3
cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5
github.com/libp2p/go-libp2p v0.47.0
github.com/libp2p/go-libp2p-record v0.3.1
github.com/multiformats/go-multiaddr v0.16.1
)
require (
github.com/beego/beego/v2 v2.3.8 // indirect
github.com/benbjohnson/clock v1.3.5 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/dunglas/httpsfv v1.1.0 // indirect
github.com/emicklei/go-restful/v3 v3.12.2 // indirect
github.com/filecoin-project/go-clock v0.1.0 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/fxamacker/cbor/v2 v2.9.0 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.21.0 // indirect
github.com/go-openapi/jsonreference v0.20.2 // indirect
github.com/go-openapi/swag v0.23.0 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gnostic-models v0.7.0 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/ipfs/boxo v0.35.2 // indirect
@@ -32,29 +38,31 @@ require (
github.com/ipld/go-ipld-prime v0.21.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/klauspost/cpuid/v2 v2.3.0 // indirect
github.com/koron/go-ssdp v0.0.6 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.3.0 // indirect
github.com/libp2p/go-libp2p v0.47.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-libp2p-kbucket v0.8.0 // indirect
github.com/libp2p/go-libp2p-record v0.3.1 // indirect
github.com/libp2p/go-libp2p-routing-helpers v0.7.5 // indirect
github.com/libp2p/go-msgio v0.3.0 // indirect
github.com/libp2p/go-netroute v0.4.0 // indirect
github.com/libp2p/go-reuseport v0.4.0 // indirect
github.com/libp2p/go-yamux/v5 v5.0.1 // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/miekg/dns v1.1.68 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.16.1 // indirect
github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
@@ -89,6 +97,7 @@ require (
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
github.com/x448/float16 v0.8.4 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/otel v1.39.0 // indirect
go.opentelemetry.io/otel/metric v1.39.0 // indirect
@@ -99,13 +108,28 @@ require (
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.1 // indirect
go.yaml.in/yaml/v2 v2.4.3 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 // indirect
golang.org/x/mod v0.32.0 // indirect
golang.org/x/oauth2 v0.32.0 // indirect
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/time v0.12.0 // indirect
golang.org/x/tools v0.41.0 // indirect
gonum.org/v1/gonum v0.17.0 // indirect
gopkg.in/evanphx/json-patch.v4 v4.13.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
k8s.io/api v0.35.1 // indirect
k8s.io/apimachinery v0.35.1 // indirect
k8s.io/client-go v0.35.1 // indirect
k8s.io/klog/v2 v2.130.1 // indirect
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 // indirect
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 // indirect
lukechampine.com/blake3 v1.4.1 // indirect
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 // indirect
sigs.k8s.io/randfill v1.0.0 // indirect
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 // indirect
sigs.k8s.io/yaml v1.6.0 // indirect
)
require (
@@ -117,13 +141,10 @@ require (
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.27.0 // indirect
github.com/golang/snappy v1.0.0 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c // indirect
github.com/google/uuid v1.6.0
github.com/goraz/onion v0.1.3 // indirect
github.com/hashicorp/golang-lru v1.0.2 // indirect
github.com/jtolds/gls v4.20.0+incompatible // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/kr/text v0.2.0 // indirect
github.com/leodido/go-urn v1.4.0 // indirect
github.com/libp2p/go-libp2p-kad-dht v0.37.1
github.com/libp2p/go-libp2p-pubsub v0.15.0
@@ -139,13 +160,8 @@ require (
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.66.1 // indirect
github.com/prometheus/procfs v0.17.0 // indirect
github.com/robfig/cron v1.2.0 // indirect
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/rs/zerolog v1.34.0 // indirect
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 // indirect
github.com/smartystreets/assertions v1.2.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/xdg-go/pbkdf2 v1.0.0 // indirect
github.com/xdg-go/scram v1.1.2 // indirect
github.com/xdg-go/stringprep v1.0.4 // indirect

go.sum

@@ -1,212 +1,147 @@
cloud.o-forge.io/core/oc-lib v0.0.0-20250108155542-0f4adeea86be h1:1Yf8ihUxXjOEPqcfgtXJpJ/slxBUHhf7AgS7DZI3iUk=
cloud.o-forge.io/core/oc-lib v0.0.0-20250108155542-0f4adeea86be/go.mod h1:ya7Q+zHhaKM+XF6sAJ+avqHEVzaMnFJQih2X3TlTlGo=
cloud.o-forge.io/core/oc-lib v0.0.0-20250603080047-03dea551315b h1:yfXDZ0Pw5xTWstsbZWS+MV7G3ZTSvOCTwWQJWRn4Z5k=
cloud.o-forge.io/core/oc-lib v0.0.0-20250603080047-03dea551315b/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20250604083300-387785b40cb0 h1:iEm/Rf9I0OSCcncuFy61YOSZ3jdRlhJ/oLD97Pc2pCQ=
cloud.o-forge.io/core/oc-lib v0.0.0-20250604083300-387785b40cb0/go.mod h1:2roQbUpv3a6mTIr5oU1ux31WbN8YucyyQvCQ0FqwbcE=
cloud.o-forge.io/core/oc-lib v0.0.0-20250704084459-443546027b27 h1:iogk6pV3gybzQDBXMI6Qd/jvSA1h+3oRE+vLl1MRjew=
cloud.o-forge.io/core/oc-lib v0.0.0-20250704084459-443546027b27/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260126120055-055e6c70cdd7 h1:LAK86efqe2HNV1Tkym1TpvzL1Xsj3F0ClsK/snfejD0=
cloud.o-forge.io/core/oc-lib v0.0.0-20260126120055-055e6c70cdd7/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260127143728-3c052bf16572 h1:jrUHgs4DqNWLnLcb5nd4lrJim77+aGkJFACUfMogiu8=
cloud.o-forge.io/core/oc-lib v0.0.0-20260127143728-3c052bf16572/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128140632-d098d253d8e2 h1:B3TO9nXdpGuPXL4X3QFrRMJ1C4zXCQlLh4XR9aSZoKg=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128140632-d098d253d8e2/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128140807-1c9d7b63c0b3 h1:zAT4ZulAaX+l28QdCMvuXh5XQxn+fU8x6YNJ1zmA7+Q=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128140807-1c9d7b63c0b3/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128152242-743f4a6ff742 h1:vGlUqBhj3G5hvskL1NzfecKCUMH8bL3xx7JkLpv/04M=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128152242-743f4a6ff742/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128152919-7911cf29def8 h1:HT1+PP04wu5DcQ5PA3LtSJ5PcWEyL4FlZB62+v9eLWo=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128152919-7911cf29def8/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128154447-d26789d64e33 h1:WdmHeRtEWV3RsXaEe4HnItGNYLFvMNFggfq9/KtPho0=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128154447-d26789d64e33/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128160440-c0d89ea9e1e8 h1:h7VHJktaTT8TxO4ld3Xjw3LzMsivr3m7mzbNxb44zes=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128160440-c0d89ea9e1e8/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128162702-97cf629e27ec h1:/uvrtEt7A5rwqFPHH8yjujlC33HMjQHhWDIK6I08DrA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260128162702-97cf629e27ec/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260129121215-c1519f6b26b8 h1:gvUbTwHnYM0Ezzvoa9ylTt+o1lAhS0U79OogbsZ+Pl8=
cloud.o-forge.io/core/oc-lib v0.0.0-20260129121215-c1519f6b26b8/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260129122033-186ba3e689c7 h1:NRFGRqN+j5g3DrtXMYN5T5XSYICG+OU2DisjBdID3j8=
cloud.o-forge.io/core/oc-lib v0.0.0-20260129122033-186ba3e689c7/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203074447-30e6c9a6183c h1:c19lIseiUk5Hp+06EowfEbMWH1pK8AC/hvQ4ryWgJtY=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203074447-30e6c9a6183c/go.mod h1:vHWauJsS6ryf7UDqq8hRXoYD5RsONxcFTxeZPOztEuI=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203150123-4258f6b58083 h1:nKiU4AfeX+axS4HkaX8i2PJyhSFfRJvzT+CgIv6Jl2o=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203150123-4258f6b58083/go.mod h1:T0UCxRd8w+qCVVC0NEyDiWIGC5ADwEbQ7hFcvftd4Ks=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203150531-ef916fe2d995 h1:ZDRvnzTTNHgMm5hYmseHdEPqQ6rn/4v+P9f/JIxPaNw=
cloud.o-forge.io/core/oc-lib v0.0.0-20260203150531-ef916fe2d995/go.mod h1:T0UCxRd8w+qCVVC0NEyDiWIGC5ADwEbQ7hFcvftd4Ks=
cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7 h1:p9uJjMY+QkE4neA+xRmIRtAm9us94EKZqgajDdLOd0Y=
cloud.o-forge.io/core/oc-lib v0.0.0-20260224130821-ce8ef70516f7/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c h1:FTUu9tdEfib6J+fuc7e5wYTe++EIlB70bVNpOeFjnyU=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226084851-959fce48ef6c/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0 h1:lvrRF4ToIMl/5k1q4AiPEy6ycjwRtOaDhWnQ/LrW1ZA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226085754-f4e2d8057df0/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31 h1:hvkvJibS9NmImw73j79Ov5VpIYs4WbP4SYGlK/XO82Q=
cloud.o-forge.io/core/oc-lib v0.0.0-20260226091217-cb3771c17a31/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5 h1:h+Fkyj6cfwAirc0QGCBEkZSSrgcyThXswg7ytOLm948=
cloud.o-forge.io/core/oc-lib v0.0.0-20260302152414-542b0b73aba5/go.mod h1:+ENuvBfZdESSvecoqGY/wSvRlT3vinEolxKgwbOhUpA=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Knetic/govaluate v3.0.0+incompatible/go.mod h1:r7JcOSlj0wfOMncg0iLm8Leh48TZaKVeNIfJntJ2wa0=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6/go.mod h1:SGnFV6hVsYE877CKEZ6tDNTjaSXYUk6QqoIK6PrAtcc=
github.com/alicebob/miniredis v2.5.0+incompatible/go.mod h1:8HZjEj4yU0dwhYHky+DxYx+6BMjkBbe5ONFIF1MXffk=
github.com/beego/beego v1.12.13 h1:g39O1LGLTiPejWVqQKK/TFGrroW9BCZQz6/pf4S8IRM=
github.com/beego/beego v1.12.13/go.mod h1:QURFL1HldOcCZAxnc1cZ7wrplsYR5dKPHFjmk6WkLAs=
github.com/beego/beego/v2 v2.3.1 h1:7MUKMpJYzOXtCUsTEoXOxsDV/UcHw6CPbaWMlthVNsc=
github.com/beego/beego/v2 v2.3.1/go.mod h1:5cqHsOHJIxkq44tBpRvtDe59GuVRVv/9/tyVDxd5ce4=
github.com/Masterminds/semver/v3 v3.4.0 h1:Zog+i5UMtVoCU8oKka5P7i9q9HgrJeGzI9SA1Xbatp0=
github.com/Masterminds/semver/v3 v3.4.0/go.mod h1:4V+yj/TJE1HU9XfppCwVMZq3I84lprf4nC11bSS5beM=
github.com/beego/beego/v2 v2.3.8 h1:wplhB1pF4TxR+2SS4PUej8eDoH4xGfxuHfS7wAk9VBc=
github.com/beego/beego/v2 v2.3.8/go.mod h1:8vl9+RrXqvodrl9C8yivX1e6le6deCK6RWeq8R7gTTg=
github.com/beego/goyaml2 v0.0.0-20130207012346-5545475820dd/go.mod h1:1b+Y/CofkYwXMUU0OhQqGvsY2Bvgr4j6jfT699wyZKQ=
github.com/beego/x2j v0.0.0-20131220205130-a0352aadc542/go.mod h1:kSeGC/p1AbBiEp5kat81+DSQrZenVBZXklMLaELspWU=
github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o=
github.com/benbjohnson/clock v1.3.5/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/biter777/countries v1.7.5 h1:MJ+n3+rSxWQdqVJU8eBy9RqcdH6ePPn4PJHocVWUa+Q=
github.com/biter777/countries v1.7.5/go.mod h1:1HSpZ526mYqKJcpT5Ti1kcGQ0L0SrXWIaptUWjFfv2E=
github.com/bradfitz/gomemcache v0.0.0-20180710155616-bc664df96737/go.mod h1:PmM6Mmwb0LSuEubjR8N7PtNe1KxZLtOUHtbeikc5h60=
github.com/casbin/casbin v1.7.0/go.mod h1:c67qKN6Oum3UF5Q1+BByfFxkwKvhwW57ITjqwtzR1KE=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/cloudflare/golz4 v0.0.0-20150217214814-ef862a3cdc58/go.mod h1:EOBUe0h4xcZ5GoxqC5SDxFQ8gwyZPKQoEzownBlhI80=
github.com/coreos/etcd v3.3.17+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/couchbase/go-couchbase v0.0.0-20201216133707-c04035124b17/go.mod h1:+/bddYDxXsf9qt0xpDUtRR47A2GjaXmGGAqQ/k3GJ8A=
github.com/couchbase/gomemcached v0.1.2-0.20201224031647-c432ccf49f32/go.mod h1:mxliKQxOv84gQ0bJWbI+w9Wxdpt9HjDvgW9MjCym5Vo=
github.com/couchbase/goutils v0.0.0-20210118111533-e33d3ffb5401/go.mod h1:BQwMFlJzDjFDG3DJUdU0KORxn88UlsOULuxLExMh3Hs=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/cupcake/rdb v0.0.0-20161107195141-43ba34106c76/go.mod h1:vYwsqCOLxGiisLwp9rITslkFNpZD5rz43tf41QFkTWY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U=
github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/dunglas/httpsfv v1.1.0 h1:Jw76nAyKWKZKFrpMMcL76y35tOpYHqQPzHQiwDvpe54=
github.com/dunglas/httpsfv v1.1.0/go.mod h1:zID2mqw9mFsnt7YC3vYQ9/cjq30q41W+1AnDwH8TiMg=
github.com/edsrzf/mmap-go v0.0.0-20170320065105-0bce6a688712/go.mod h1:YO35OhQPt3KJa3ryjFM5Bs14WD66h8eGKpfaBNrHW5M=
github.com/elastic/go-elasticsearch/v6 v6.8.5/go.mod h1:UwaDJsD3rWLM5rKNFzv9hgox93HoX8utj1kxD9aFUcI=
github.com/elazarl/go-bindata-assetfs v1.0.0/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
github.com/elazarl/go-bindata-assetfs v1.0.1 h1:m0kkaHRKEu7tUIUFVwhGGGYClXvyl4RE03qmvRTNfbw=
github.com/elazarl/go-bindata-assetfs v1.0.1/go.mod h1:v+YaWX3bdea5J/mo8dSETolEo7R71Vk1u8bnjau5yw4=
github.com/emicklei/go-restful/v3 v3.12.2 h1:DhwDP0vY3k8ZzE0RunuJy8GhNpPL6zqLkDf9B/a0/xU=
github.com/emicklei/go-restful/v3 v3.12.2/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/etcd-io/etcd v3.3.17+incompatible/go.mod h1:cdZ77EstHBwVtD6iTgzgvogwcjo9m4iOqoijouPJ4bs=
github.com/filecoin-project/go-clock v0.1.0 h1:SFbYIM75M8NnFm1yMHhN9Ahy3W5bEZV9gd6MPfXbKVU=
github.com/filecoin-project/go-clock v0.1.0/go.mod h1:4uB/O4PvOjlx1VCMdZ9MyDZXRm//gkj1ELEbxfI1AZs=
github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/frankban/quicktest v1.14.6/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/gabriel-vasile/mimetype v1.4.5 h1:J7wGKdGu33ocBOhGy0z653k/lFKLFDPJMG8Gql0kxn4=
github.com/gabriel-vasile/mimetype v1.4.5/go.mod h1:ibHel+/kbxn9x2407k1izTA1S81ku1z/DlgOW2QE0M4=
github.com/gabriel-vasile/mimetype v1.4.9 h1:5k+WDwEsD9eTLL8Tz3L0VnmVh9QxGjRmjBvAG7U/oYY=
github.com/gabriel-vasile/mimetype v1.4.9/go.mod h1:WnSQhFKJuBlRyLiKohA/2DtIlPFAbguNaG7QCHcyGok=
github.com/fxamacker/cbor/v2 v2.9.0 h1:NpKPmjDBgUfBms6tr6JZkTHtfFGcMKsw3eGcmD/sapM=
github.com/fxamacker/cbor/v2 v2.9.0/go.mod h1:vM4b+DJCtHn+zz7h3FFp/hDAI9WNWCsZj23V5ytsSxQ=
github.com/gabriel-vasile/mimetype v1.4.10 h1:zyueNbySn/z8mJZHLt6IPw0KoZsiQNszIpU+bX4+ZK0=
github.com/gabriel-vasile/mimetype v1.4.10/go.mod h1:d+9Oxyo1wTzWdyVUPMmXFvp4F9tea18J8ufA774AB3s=
github.com/glendc/gopher-json v0.0.0-20170414221815-dc4743023d0c/go.mod h1:Gja1A+xZ9BoviGJNA2E9vFkPjjsl+CoJxSXiQM1UXtw=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ=
github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY=
github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE=
github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14=
github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE=
github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ=
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.22.0 h1:k6HsTZ0sTnROkhS//R0O+55JgM8C4Bx7ia+JlgcnOao=
github.com/go-playground/validator/v10 v10.22.0/go.mod h1:dbuPbCMFw/DrkbEynArYaCwl3amGuJotoKCe95atGMM=
github.com/go-playground/validator/v10 v10.26.0 h1:SP05Nqhjcvz81uJaRfEV0YBSSSGMc/iMaVtFbr3Sw2k=
github.com/go-playground/validator/v10 v10.26.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-playground/validator/v10 v10.27.0 h1:w8+XrWVMhGkxOaaowyKH35gFydVHOvC0/uWoy2Fzwn4=
github.com/go-playground/validator/v10 v10.27.0/go.mod h1:I5QpIEbmr8On7W0TktmJAumgzX4CA1XNl4ZmDuVHKKo=
github.com/go-redis/redis v6.14.2+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
github.com/go-redis/redis v6.15.9+incompatible h1:K0pv1D7EQUjfyoMql+r/jZqCLizCGKFlFgcHWWmHQjg=
github.com/go-redis/redis v6.15.9+incompatible/go.mod h1:NAIEuMOZ/fxfXJIrKDQDz8wamY7mA7PouImQ2Jvg6kA=
github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI=
github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8=
github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/snappy v0.0.0-20170215233205-553a64147049/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.4 h1:yAGX7huGHXlcLOEtBnF4w7FQwA26wojNCwOYAEhLjQM=
github.com/golang/snappy v0.0.4/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v1.0.0 h1:Oy607GVXHs7RtbggtPBnr2RmDArIsAefDwvrdWvRhGs=
github.com/golang/snappy v1.0.0/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/gomodule/redigo v2.0.0+incompatible/go.mod h1:B4C85qUVwatsJoIUNIfCRsp7qO0iAmpGFZ4EELWSbC4=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/gnostic-models v0.7.0 h1:qwTtogB15McXDaNqTZdzPJRHvaVJlAl+HVQnLmJEJxo=
github.com/google/gnostic-models v0.7.0/go.mod h1:whL5G0m6dmc5cPxKc5bdKdEN3UjI7OUGxBlw57miDrQ=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF8=
github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6 h1:BHT72Gu3keYf3ZEu2J0b1vyeLSOYI8bm5wbJM/8yDe8=
github.com/google/pprof v0.0.0-20250403155104-27863c87afa6/go.mod h1:boTsfXsheKC2y+lKOCMpSfarhxDeIzfZG1jqGcPl3cA=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1 h1:EGx4pi6eqNxGaHF6qqu48+N2wcFQ5qg5FXgOdqsJ5d8=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c h1:7lF+Vz0LqiRidnzC1Oq86fpX1q/iEv2KJdrCtttYjT4=
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/goraz/onion v0.1.3 h1:KhyvbDA2b70gcz/d5izfwTiOH8SmrvV43AsVzpng3n0=
github.com/goraz/onion v0.1.3/go.mod h1:XEmz1XoBz+wxTgWB8NwuvRm4RAu3vKxvrmYtzK+XCuQ=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674 h1:JeSE6pjso5THxAzdVpqr6/geYxZytqFMBCOtn/ujyeo=
github.com/gorilla/websocket v1.5.4-0.20250319132907-e064f32e3674/go.mod h1:r4w70xmWCQKmi1ONH4KIaBptdivuRPyosB9RmPlGEwA=
github.com/hashicorp/golang-lru v1.0.2 h1:dV3g9Z/unq5DpblPpw+Oqcv4dU/1omnb4Ok8iPY6p1c=
github.com/hashicorp/golang-lru v1.0.2/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/ipfs/boxo v0.35.2 h1:0QZJJh6qrak28abENOi5OA8NjBnZM4p52SxeuIDqNf8=
github.com/ipfs/boxo v0.35.2/go.mod h1:bZn02OFWwJtY8dDW9XLHaki59EC5o+TGDECXEbe1w8U=
github.com/ipfs/go-block-format v0.2.3 h1:mpCuDaNXJ4wrBJLrtEaGFGXkferrw5eqVvzaHhtFKQk=
github.com/ipfs/go-block-format v0.2.3/go.mod h1:WJaQmPAKhD3LspLixqlqNFxiZ3BZ3xgqxxoSR/76pnA=
github.com/ipfs/go-cid v0.6.0 h1:DlOReBV1xhHBhhfy/gBNNTSyfOM6rLiIx9J7A4DGf30=
github.com/ipfs/go-cid v0.6.0/go.mod h1:NC4kS1LZjzfhK40UGmpXv5/qD2kcMzACYJNntCUiDhQ=
github.com/ipfs/go-datastore v0.9.0 h1:WocriPOayqalEsueHv6SdD4nPVl4rYMfYGLD4bqCZ+w=
github.com/ipfs/go-datastore v0.9.0/go.mod h1:uT77w/XEGrvJWwHgdrMr8bqCN6ZTW9gzmi+3uK+ouHg=
github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
github.com/ipfs/go-log/v2 v2.9.1 h1:3JXwHWU31dsCpvQ+7asz6/QsFJHqFr4gLgQ0FWteujk=
github.com/ipfs/go-log/v2 v2.9.1/go.mod h1:evFx7sBiohUN3AG12mXlZBw5hacBQld3ZPHrowlJYoo=
github.com/ipfs/go-test v0.2.3 h1:Z/jXNAReQFtCYyn7bsv/ZqUwS6E7iIcSpJ2CuzCvnrc=
github.com/ipfs/go-test v0.2.3/go.mod h1:QW8vSKkwYvWFwIZQLGQXdkt9Ud76eQXRQ9Ao2H+cA1o=
github.com/ipld/go-ipld-prime v0.21.0 h1:n4JmcpOlPDIxBcY037SVfpd1G+Sj1nKZah0m6QH9C2E=
github.com/ipld/go-ipld-prime v0.21.0/go.mod h1:3RLqy//ERg/y5oShXXdx5YIp50cFGOanyMctpPjsvxQ=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jbenet/go-temp-err-catcher v0.1.0 h1:zpb3ZH6wIE8Shj2sKS+khgRvf7T7RABoLk/+KKHggpk=
github.com/jbenet/go-temp-err-catcher v0.1.0/go.mod h1:0kJRvmDZXNMIiJirNPEYfhpPwbGVtZVWC34vc5WLsDk=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY=
github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y=
github.com/json-iterator/go v1.1.8/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.17.9 h1:6KIumPrER1LHsvBVuDa0r5xaG0Es51mhhB9BQB2qeMA=
github.com/klauspost/compress v1.17.9/go.mod h1:Di0epgTjJY877eYKx5yC51cX2A2Vl2ibi7bDH9ttBbw=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
github.com/klauspost/cpuid/v2 v2.3.0/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/koron/go-ssdp v0.0.6 h1:Jb0h04599eq/CY7rB5YEqPS83HmRfHP2azkxMN2rFtU=
github.com/koron/go-ssdp v0.0.6/go.mod h1:0R9LfRJGek1zWTjN3JUNlm5INCDYGpRDfAptnct63fI=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -216,10 +151,8 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/ledisdb/ledisdb v0.0.0-20200510135210-d35789ec47e6/go.mod h1:n931TsDuKuq+uX4v1fulaMbA/7ZLLhjc85h7chZGBCQ=
github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/lib/pq v1.0.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-cidranger v1.1.0 h1:ewPN8EZ0dd1LSnrtuwd4709PXVcITVeuwbag38yPW7c=
@@ -240,6 +173,8 @@ github.com/libp2p/go-libp2p-record v0.3.1 h1:cly48Xi5GjNw5Wq+7gmjfBiG9HCzQVkiZOU
github.com/libp2p/go-libp2p-record v0.3.1/go.mod h1:T8itUkLcWQLCYMqtX7Th6r7SexyUJpIyPgks757td/E=
github.com/libp2p/go-libp2p-routing-helpers v0.7.5 h1:HdwZj9NKovMx0vqq6YNPTh6aaNzey5zHD7HeLJtq6fI=
github.com/libp2p/go-libp2p-routing-helpers v0.7.5/go.mod h1:3YaxrwP0OBPDD7my3D0KxfR89FlcX/IEbxDEDfAmj98=
github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUIK5WDu6iPUA=
github.com/libp2p/go-libp2p-testing v0.12.0/go.mod h1:KcGDRXyN7sQCllucn1cOOS+Dmm7ujhfEyXQL5lvkcPg=
github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0=
github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
github.com/libp2p/go-netroute v0.4.0 h1:sZZx9hyANYUx9PZyqcgE/E1GUG3iEtTZHUEvdtXT7/Q=
@@ -249,9 +184,12 @@ github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8S
github.com/libp2p/go-yamux/v5 v5.0.1 h1:f0WoX/bEF2E8SbE4c/k1Mo+/9z0O4oC/hWEA+nfYRSg=
github.com/libp2p/go-yamux/v5 v5.0.1/go.mod h1:en+3cdX51U0ZslwRdRLrvQsdayFt3TSUKvBGErzpWbU=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/marcopolo/simnet v0.0.4 h1:50Kx4hS9kFGSRIbrt9xUS3NJX33EyPqHVmpXvaKLqrY=
github.com/marcopolo/simnet v0.0.4/go.mod h1:tfQF1u2DmaB6WHODMtQaLtClEf3a296CKQLq5gAsIS0=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
github.com/mattn/go-colorable v0.1.14/go.mod h1:6LmQG8QLFO4G5z1gPvYEzlUgJ2wF+stgPZH1UqBm1s8=
@@ -259,10 +197,9 @@ github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.1.68 h1:jsSRkNozw7G/mnmXULynzMNIsgY2dHC8LO6U6Ij2JEA=
github.com/miekg/dns v1.1.68/go.mod h1:fujopn7TB3Pu3JM69XaawiU0wqjpL9/8xGop5UrTPps=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b/go.mod h1:lxPUiZwKoFL8DUUmalo2yJJUCxbPKtm8OKfqr2/FTNU=
@@ -276,9 +213,13 @@ github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh
github.com/mitchellh/mapstructure v1.5.0 h1:jeMsZIYE/09sWLaz43PL7Gy6RuMjD2eJVyuac5Z2hdY=
github.com/mitchellh/mapstructure v1.5.0/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee h1:W5t00kpgFdJifH4BDsTlE89Zl93FEloxaWZfGcifgq8=
github.com/modern-go/reflect2 v1.0.3-0.20250322232337-35a7c28c31ee/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/montanaflynn/stats v0.7.1 h1:etflOAAHORrCC44V+aR6Ftzort912ZU+YLiSTuV8eaE=
github.com/montanaflynn/stats v0.7.1/go.mod h1:etXPPgVO6n31NxCd9KQUMvCM+ve0ruNzt6R8Bnaayow=
github.com/mr-tron/base58 v1.1.2/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
@@ -308,29 +249,21 @@ github.com/multiformats/go-varint v0.1.0 h1:i2wqFp4sdl3IcIxfAonHQV9qU5OsZ4Ts9IOo
github.com/multiformats/go-varint v0.1.0/go.mod h1:5KVAVXegtfmNQQm/lCY+ATvDzvJJhSkUlGQV9wgObdI=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/nats.go v1.37.0 h1:07rauXbVnnJvv1gfIyghFEo6lUcYRY0WXc3x7x0vUxE=
github.com/nats-io/nats.go v1.37.0/go.mod h1:Ubdu4Nh9exXdSz0RVWRFBbRfrbSxOYd26oF0wkWclB8=
github.com/nats-io/nats.go v1.43.0 h1:uRFZ2FEoRvP64+UUhaTokyS18XBCR/xM2vQZKO4i8ug=
github.com/nats-io/nats.go v1.43.0/go.mod h1:iRWIPokVIFbVijxuMQq4y9ttaBTMe0SFdlZfMDd+33g=
github.com/nats-io/nkeys v0.4.7 h1:RwNJbbIdYCoClSDNY7QVKZlyb/wfT6ugvFCiKy6vDvI=
github.com/nats-io/nkeys v0.4.7/go.mod h1:kqXRgRDPlGy7nGaEDMuYzmiJCIAAWDK0IMBtDmGD0nc=
github.com/nats-io/nkeys v0.4.11 h1:q44qGV008kYd9W1b1nEBkNzvnWxtRSQ7A8BoqRrcfa0=
github.com/nats-io/nkeys v0.4.11/go.mod h1:szDimtgmfOi9n25JpfIdGw12tZFYXqhGxjhVxsatHVE=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/ogier/pflag v0.0.1/go.mod h1:zkFki7tvTa0tafRvTBIZTvzYyAu6kQhPZFnshFFPE+g=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.12.0 h1:Iw5WCbBcaAAd0fpRb1c9r5YCylv4XDoCSigm1zLevwU=
github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/ginkgo v1.16.5 h1:8xi0RTUf59SOSfEtZMvwTvXYMzG4gV23XVHOZiXNtnE=
github.com/onsi/ginkgo/v2 v2.27.2 h1:LzwLj0b89qtIy6SSASkzlNvX6WktqurSHwkk2ipF/Ns=
github.com/onsi/ginkgo/v2 v2.27.2/go.mod h1:ArE1D/XhNXBXCBkKOLkbsb2c81dQHCRcF5zwn/ykDRo=
github.com/onsi/gomega v1.38.2 h1:eZCjf2xjZAqe+LeWvKb5weQ+NcPwX84kqJ0cZNxok2A=
github.com/onsi/gomega v1.38.2/go.mod h1:W2MJcYxRGV63b418Ai34Ud0hEdTVXq9NW9+Sx6uXf3k=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pelletier/go-toml v1.0.1/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.6.0/go.mod h1:5N711Q9dKgbdkxHL+MEfF31hpT7l0S0s/t2kKREewys=
github.com/peterh/liner v1.0.1-0.20171122030339-3681c2a91233/go.mod h1:xIteQHvHuaLYG9IFj6mSxM0fCKrs34IrEQUhOYuGPHc=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
@@ -373,46 +306,17 @@ github.com/pion/turn/v4 v4.0.2 h1:ZqgQ3+MjP32ug30xAbD6Mn+/K4Sxi3SdNOTFf+7mpps=
github.com/pion/turn/v4 v4.0.2/go.mod h1:pMMKP/ieNAG/fN5cZiN4SDuyKsXtNTr0ccN7IToA1zs=
github.com/pion/webrtc/v4 v4.1.2 h1:mpuUo/EJ1zMNKGE79fAdYNFZBX790KE7kQQpLMjjR54=
github.com/pion/webrtc/v4 v4.1.2/go.mod h1:xsCXiNAmMEjIdFxAYU0MbB3RwRieJsegSB2JZsGN+8U=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/polydawn/refmt v0.89.0 h1:ADJTApkvkeBZsN0tBTx8QjpD9JkmxbKp0cxfr9qszm4=
github.com/polydawn/refmt v0.89.0/go.mod h1:/zvteZs/GwLtCgZ4BL6CBsk9IKIlexP43ObX9AxTqTw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.7.0/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.20.2 h1:5ctymQzZlyOON1666svgwn3s6IKWgfbjsejTMiXIyjg=
github.com/prometheus/client_golang v1.20.2/go.mod h1:PIEt8X02hGcP8JWbeHyeZ53Y/jReSnHgO035n//V5WE=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E=
github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.57.0 h1:Ro/rKjwdq9mZn1K5QPctzh+MA4Lp0BuYk5ZZEVhoNcY=
github.com/prometheus/common v0.57.0/go.mod h1:7uRPFSUTbfZWsJ7MHY56sqt7hLQu3bxXHDnNhl8E9qI=
github.com/prometheus/common v0.64.0 h1:pdZeA+g617P7oGv1CzdTzyeShxAGrTBsolKNOLQPGO4=
github.com/prometheus/common v0.64.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/common v0.65.0 h1:QDwzd+G1twt//Kwj/Ww6E9FQq1iVMmODnILtW1t2VzE=
github.com/prometheus/common v0.65.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc=
github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/prometheus/procfs v0.17.0 h1:FuLQ+05u4ZI+SS/w9+BWEM2TXiHKsUQ9TADiRH7DuK0=
github.com/prometheus/procfs v0.17.0/go.mod h1:oPQLaDAMRbA+u8H5Pbfq+dl3VDAvHxMUOVhe0wYB2zw=
github.com/quic-go/qpack v0.6.0 h1:g7W+BMYynC1LbYLSqRt8PBg5Tgwxn214ZZR34VIOjz8=
@@ -421,28 +325,15 @@ github.com/quic-go/quic-go v0.59.0 h1:OLJkp1Mlm/aS7dpKgTc6cnpynnD2Xg7C1pwL6vy/SA
github.com/quic-go/quic-go v0.59.0/go.mod h1:upnsH4Ju1YkqpLXC305eW3yDZ4NfnNbmQRCMWS58IKU=
github.com/quic-go/webtransport-go v0.10.0 h1:LqXXPOXuETY5Xe8ITdGisBzTYmUOy5eSj+9n4hLTjHI=
github.com/quic-go/webtransport-go v0.10.0/go.mod h1:LeGIXr5BQKE3UsynwVBeQrU1TPrbh73MGoC6jd+V7ow=
github.com/robfig/cron v1.2.0 h1:ZjScXvvxeQ63Dbyxy76Fj3AT3Ut0aKsyd2/tl3DTMuQ=
github.com/robfig/cron v1.2.0/go.mod h1:JGuDeoQd7Z6yL4zQhZ3OPEVHB7fL6Ka6skscFHfmt2k=
github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ=
github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.33.0 h1:1cU2KZkvPxNyfgEmhHAz/1A9Bz+llsdYzklWFzgp0r8=
github.com/rs/zerolog v1.33.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/shiena/ansicolor v0.0.0-20151119151921-a422bbe96644/go.mod h1:nkxAfR/5quYxwPZhyDxgasBMnRtBZd0FCEpawpjMUFg=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02 h1:v9ezJDHA1XGxViAUSIoO/Id7Fl63u6d0YmsAm+/p2hs=
github.com/shiena/ansicolor v0.0.0-20230509054315-a9deabde6e02/go.mod h1:RF16/A3L0xSa0oSERcnhd8Pu3IXSDZSK2gmGIMsttFE=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/siddontang/go v0.0.0-20170517070808-cb568a3e5cc0/go.mod h1:3yhqj7WBBfRhbBlzyOC3gUxftwsU0u8gqevxwIHQpMw=
github.com/siddontang/goredis v0.0.0-20150324035039-760763f78400/go.mod h1:DDcKzU3qCuvj/tPnimWSsZZzvk9qvkvrIL5naVBPh5s=
github.com/siddontang/rdb v0.0.0-20150307021120-fc89ed2e418d/go.mod h1:AMEsy7v5z92TR1JKMkLLoaOQk++LVnOKL3ScbJ8GNGA=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/skarademir/naturalsort v0.0.0-20150715044055-69a5d87bef62/go.mod h1:oIdVclZaltY1Nf7OQUkg1/2jImBJ+ZfKZuDIRSwk3p0=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N3yZFZkDFs=
@@ -452,37 +343,32 @@ github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hg
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/ssdb/gossdb v0.0.0-20180723034631-88f6b59b84ec/go.mod h1:QBvMkMya+gXctz3kmljlUCu/yB3GZ6oee+dUozsezQE=
github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/syndtr/goleveldb v0.0.0-20160425020131-cfa635847112/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0=
github.com/tidwall/gjson v1.17.3 h1:bwWLZU7icoKRG+C+0PNwIKC6FCJO/Q3p2pZvuP0jN94=
github.com/tidwall/gjson v1.17.3/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/ugorji/go v0.0.0-20171122102828-84cb69a8af83/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/urfave/cli v1.22.10/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0 h1:GDDkbFiaK8jsSDJfjId/PEGEShv6ugrt4kYsC5UIDaQ=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
github.com/wendal/errors v0.0.0-20181209125328-7f31f4b264ec/go.mod h1:Q12BUT7DqIlHRmgv3RskH+UCM/4eqVMgI0EMmlSpAXc=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 h1:EKhdznlJHPMoKr0XTrX+IlJs1LH3lyx2nfr1dOlZ79k=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1/go.mod h1:8UvriyWtv5Q5EOgjHaSseUEdkQfvwFv1I/In/O2M9gc=
github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM=
github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg=
github.com/xdg-go/pbkdf2 v1.0.0 h1:Su7DPu48wXMwC3bs7MCNG+z4FhcyEuz5dlvchbq0B0c=
github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI=
github.com/xdg-go/scram v1.1.2 h1:FHX5I5B4i4hKRVRBCFRxq1iQRej7WO3hhBuJf+UUySY=
@@ -494,11 +380,6 @@ github.com/youmark/pkcs8 v0.0.0-20240726163527-a2c0da244d78/go.mod h1:aL8wCCfTfS
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
github.com/yuin/gopher-lua v0.0.0-20171031051903-609c9cd26973/go.mod h1:aEV29XrmTYFr3CiRxZeGHpkvbwq+prZduBqMaascyCU=
go.mongodb.org/mongo-driver v1.16.1 h1:rIVLL3q0IHM39dvE+z2ulZLp9ENZKThVfuvN/IiN4l8=
go.mongodb.org/mongo-driver v1.16.1/go.mod h1:oB6AhJQvFQL4LEHyXi6aJzQJtBiTQHiAd83l0GdFaiw=
go.mongodb.org/mongo-driver v1.17.3 h1:TQyXhnsWfWtgAhMtOgtYHMTkZIfBTpMTsMnd9ZBeHxQ=
go.mongodb.org/mongo-driver v1.17.3/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.mongodb.org/mongo-driver v1.17.4 h1:jUorfmVzljjr0FLzYQsGP8cgN/qzzxlY9Vh0C9KFXVw=
go.mongodb.org/mongo-driver v1.17.4/go.mod h1:Hy04i7O2kC4RS06ZrhPRqj/u4DTYkFDAAccj+rVKqgQ=
go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
@@ -513,6 +394,8 @@ go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
go.uber.org/fx v1.24.0/go.mod h1:AmDeGyS+ZARGKM4tlH4FY2Jr63VjbEDJHtqXTGP5hbo=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
@@ -521,25 +404,19 @@ go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191112222119-e1110fd1c708/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200602180216-279210d13fed/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.26.0 h1:RrRspgV4mU+YwB4FYnuBoKsUapNIL5cohGAmSH3azsw=
golang.org/x/crypto v0.26.0/go.mod h1:GY7jblb9wI+FOo5y8/S2oY4zWP07AkOJ4+jxCqdqn54=
golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
golang.org/x/exp v0.0.0-20260112195511-716be5621a96 h1:Z/6YuSHTLOHfNFdb8zVZomZr7cqNgTJvA8+Qz75D8gU=
@@ -552,11 +429,8 @@ golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.32.0 h1:9F4d3PHLljb6x//jOyokMv3eX+YDeepZSEo3mFJy93c=
golang.org/x/mod v0.32.0/go.mod h1:SgipZ/3h2Ci89DlEtEXWUk/HteuRin+HHhN+WbNhguU=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
@@ -568,43 +442,22 @@ golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.28.0 h1:a9JDOJc5GMUJ0+UDqmLT86WiEy7iWyIhz8gz8E4e5hE=
golang.org/x/net v0.28.0/go.mod h1:yqtgsTWOOnlGLG9GFRrK3++bGOUEkNBoHZc8MEDWPNg=
golang.org/x/net v0.40.0 h1:79Xs7wF06Gbdcg4kdCCIQArK11Z1hr5POQ6+fIYHNuY=
golang.org/x/net v0.40.0/go.mod h1:y0hY0exeL2Pku80/zKK7tpntoX23cqL3Oa6njdgRtds=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/oauth2 v0.32.0 h1:jsCblLleRMDrxMN29H3z/k1KliIvpLgCkE6R8FXXNgY=
golang.org/x/oauth2 v0.32.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ=
golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.14.0 h1:woo0S4Yywslg6hp4eUFjTVOyKt0RookbpAHG4c1HmhQ=
golang.org/x/sync v0.14.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.15.0 h1:KWH3jNZsfyT6xfAfKiz6MRNmd46ByHDYaZ7KSkCtdW8=
golang.org/x/sync v0.15.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -618,15 +471,10 @@ golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.24.0 h1:Twjiwq9dn6R1fQcyiK+wQyHWfaz/BJB+YIpzU/Cv3Xg=
golang.org/x/sys v0.24.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2 h1:O1cMQHRfwNpDfDJerqRoE2oD+AFlyid87D40L/OkkJo=
golang.org/x/telemetry v0.0.0-20260109210033-bd525da824e2/go.mod h1:b7fPSJ0pKZ3ccUh8gnTONJxhn3c/PS6tyzQvyqw4iA8=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
@@ -634,6 +482,8 @@ golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY=
golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
@@ -642,12 +492,6 @@ golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.17.0 h1:XtiM5bkSOt+ewxlOE/aE/AKEHibwj/6gvWMl9Rsh0Qc=
golang.org/x/text v0.17.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY=
golang.org/x/text v0.25.0 h1:qVyWApTSYLk/drJRO5mDlNYskwQznZmkpV2c8q9zls4=
golang.org/x/text v0.25.0/go.mod h1:WEdwpYrmk1qmdHvhkSTNPm3app7v4rsT8F2UD6+VHIA=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
@@ -668,38 +512,42 @@ golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.17.0 h1:VbpOemQlsSMrYmn7T2OUvQ4dqxQXU+ouZFQsZOx50z4=
gonum.org/v1/gonum v0.17.0/go.mod h1:El3tOrEuMpv2UdMrbNlKEh9vd86bmQ6vqIcDwxEOc1E=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.34.2 h1:6xV6lTsCfpGD21XK49h7MhtcApnLqkfYgPcdHftf6hg=
google.golang.org/protobuf v1.34.2/go.mod h1:qYOHts0dSfpeUzUFpOMr/WGzszTmLH+DiWniOlNbLDw=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/mgo.v2 v2.0.0-20190816093944-a6b53ec6cb22/go.mod h1:yeKp02qBN3iKW1OzL3MGk2IdtZzaj7SFntXj72NppTA=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/evanphx/json-patch.v4 v4.13.0 h1:czT3CmqEaQ1aanPc5SdlgQrrEIb8w/wwCvWWnfEbYzo=
gopkg.in/evanphx/json-patch.v4 v4.13.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M=
gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
k8s.io/api v0.35.1 h1:0PO/1FhlK/EQNVK5+txc4FuhQibV25VLSdLMmGpDE/Q=
k8s.io/api v0.35.1/go.mod h1:28uR9xlXWml9eT0uaGo6y71xK86JBELShLy4wR1XtxM=
k8s.io/apimachinery v0.35.1 h1:yxO6gV555P1YV0SANtnTjXYfiivaTPvCTKX6w6qdDsU=
k8s.io/apimachinery v0.35.1/go.mod h1:jQCgFZFR1F4Ik7hvr2g84RTJSZegBc8yHgFWKn//hns=
k8s.io/client-go v0.35.1 h1:+eSfZHwuo/I19PaSxqumjqZ9l5XiTEKbIaJ+j1wLcLM=
k8s.io/client-go v0.35.1/go.mod h1:1p1KxDt3a0ruRfc/pG4qT/3oHmUj1AhSHEcxNSGg+OA=
k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk=
k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912 h1:Y3gxNAuB0OBLImH611+UDZcmKS3g6CthxToOb37KgwE=
k8s.io/kube-openapi v0.0.0-20250910181357-589584f1c912/go.mod h1:kdmbQkyfwUagLfXIad1y2TdrjPFWp2Q89B3qkRwf/pQ=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4 h1:SjGebBtkBqHFOli+05xYbK8YF1Dzkbzn+gDM4X9T4Ck=
k8s.io/utils v0.0.0-20251002143259-bc988d571ff4/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730 h1:IpInykpT6ceI+QxKBbEflcR5EXP7sU1kvOlxwZh5txg=
sigs.k8s.io/json v0.0.0-20250730193827-2d320260d730/go.mod h1:mdzfpAEoE6DHQEN0uh9ZbOCuHbLK5wOm7dK4ctXE9Tg=
sigs.k8s.io/randfill v1.0.0 h1:JfjMILfT8A6RbawdsK2JXGBR5AQVfd+9TbzrlneTyrU=
sigs.k8s.io/randfill v1.0.0/go.mod h1:XeLlZ/jmk4i1HRopwe7/aU3H5n1zNUcX6TM94b3QxOY=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0 h1:jTijUJbW353oVOd9oTlifJqOGEkUw2jB/fXCbTiQEco=
sigs.k8s.io/structured-merge-diff/v6 v6.3.0/go.mod h1:M3W8sfWvn2HhQDIbGWj3S099YozAsymCo/wrT5ohRUE=
sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs=
sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=

main.go

@@ -21,20 +21,22 @@ func main() {
oclib.InitDaemon(appname)
// get the right config file
o := oclib.GetConfLoader()
o := oclib.GetConfLoader(appname)
conf.GetConfig().Name = o.GetStringDefault("NAME", "opencloud-demo")
conf.GetConfig().Hostname = o.GetStringDefault("HOSTNAME", "127.0.0.1")
conf.GetConfig().PSKPath = o.GetStringDefault("PSK_PATH", "./psk/psk.key")
conf.GetConfig().NodeEndpointPort = o.GetInt64Default("NODE_ENDPOINT_PORT", 4001)
conf.GetConfig().PublicKeyPath = o.GetStringDefault("PUBLIC_KEY_PATH", "./pem/public.pem")
conf.GetConfig().PrivateKeyPath = o.GetStringDefault("PRIVATE_KEY_PATH", "./pem/private.pem")
conf.GetConfig().IndexerAddresses = o.GetStringDefault("INDEXER_ADDRESSES", "")
conf.GetConfig().NativeIndexerAddresses = o.GetStringDefault("NATIVE_INDEXER_ADDRESSES", "")
conf.GetConfig().PeerIDS = o.GetStringDefault("PEER_IDS", "")
conf.GetConfig().NodeMode = o.GetStringDefault("NODE_MODE", "node")
conf.GetConfig().MinIndexer = o.GetIntDefault("MIN_INDEXER", 1)
conf.GetConfig().MaxIndexer = o.GetIntDefault("MAX_INDEXER", 5)
ctx, stop := signal.NotifyContext(
context.Background(),
os.Interrupt,
@@ -44,11 +46,12 @@ func main() {
fmt.Println(conf.GetConfig().NodeMode)
isNode := strings.Contains(conf.GetConfig().NodeMode, "node")
isIndexer := strings.Contains(conf.GetConfig().NodeMode, "indexer")
isNativeIndexer := strings.Contains(conf.GetConfig().NodeMode, "native-indexer")
if n, err := node.InitNode(isNode, isIndexer); err != nil {
if n, err := node.InitNode(isNode, isIndexer, isNativeIndexer); err != nil {
panic(err)
} else {
<-ctx.Done() // 👈 the only blocking point
<-ctx.Done() // the only blocking point
log.Println("shutting down")
n.Close()
}

pem/private10.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIPc7D3Mgb1U2Ipyb/85hA4Ew7dC8zHDEuQYSjqzzRgLK
-----END PRIVATE KEY-----

pem/private2.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----

pem/private3.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----

pem/private4.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----

pem/private5.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIK2oBaOtGNchE09MBRtPd5oEOUcVUQG2ndym5wKExj7R
-----END PRIVATE KEY-----

pem/private6.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIE58GDazCyF1jp796ivSmHiCepbkC8TpzliIaQ7eGEpu
-----END PRIVATE KEY-----

pem/private7.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIAeX4O7ldwehRSnPkbzuE6csyo63vjvqAcNNujENOKUC
-----END PRIVATE KEY-----

pem/private8.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIEkgqINXDLnxIJZs2LEK9O4vdsqk43dwbULGUE25AWuR
-----END PRIVATE KEY-----

pem/private9.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PRIVATE KEY-----
MC4CAQAwBQYDK2VwBCIEIBcflxGlZYyUVJoExC94rHZbIyKMwZ+Oh7EDkb0qUlxd
-----END PRIVATE KEY-----

pem/public10.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAEomuEQGmGsYVw35C6DB5tfY8LI8jm359ceAxRX8eQ0o=
-----END PUBLIC KEY-----

pem/public2.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAIQVeSGwsjPjyepPTnzzYqVxIxviSEjZXU7C7zuNTui4=
-----END PUBLIC KEY-----

pem/public3.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAG95Ettl3jTi41HM8le1A9WDmOEq0ANEqpLF7zTZrfXA=
-----END PUBLIC KEY-----

pem/public4.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEA/ymOIb0sJ0qCWrf3mKz7ACCvsMXLog/EK533JfNXZTM=
-----END PUBLIC KEY-----

pem/public5.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ2nLJBL8a5opfa8nFeVj0SZToW8pl4+zgcSUkeZFRO4=
-----END PUBLIC KEY-----

pem/public6.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAIQVeSGwsjPjyepPTnzzYqVxIxviSEjZXU7C7zuNTui4=
-----END PUBLIC KEY-----

pem/public7.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAG95Ettl3jTi41HM8le1A9WDmOEq0ANEqpLF7zTZrfXA=
-----END PUBLIC KEY-----

pem/public8.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEA/ymOIb0sJ0qCWrf3mKz7ACCvsMXLog/EK533JfNXZTM=
-----END PUBLIC KEY-----

pem/public9.pem Normal file

@@ -0,0 +1,3 @@
-----BEGIN PUBLIC KEY-----
MCowBQYDK2VwAyEAZ4F3KqOp/5QrPdZGqqX6PYYEGd2snX4Q3AUt9XAG3v8=
-----END PUBLIC KEY-----