Pass + Doc

This commit is contained in:
mr
2026-02-24 14:31:37 +01:00
parent 572da29fd4
commit 779e36aaef
40 changed files with 1875 additions and 353 deletions


@@ -0,0 +1,56 @@
sequenceDiagram
title Node Initialization — Peer A (InitNode)
participant MainA as main (Peer A)
participant NodeA as Node A
participant libp2pA as libp2p (Peer A)
participant DBA as DB Peer A (oc-lib)
participant NATSA as NATS A
participant IndexerA as Indexer (shared)
participant StreamA as StreamService A
participant PubSubA as PubSubService A
MainA->>NodeA: InitNode(isNode, isIndexer, isNativeIndexer)
NodeA->>NodeA: LoadKeyFromFilePrivate() → priv
NodeA->>NodeA: LoadPSKFromFile() → psk
NodeA->>libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA-->>NodeA: host A (PeerID_A)
Note over NodeA: isNode == true
NodeA->>libp2pA: NewGossipSub(ctx, host)
libp2pA-->>NodeA: ps (GossipSub)
NodeA->>IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
Note over IndexerA: Long-lived heartbeat established<br/>Quality score computed (bw + uptime + diversity)
IndexerA-->>NodeA: OK
NodeA->>NodeA: claimInfo(name, hostname)
NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
NodeA->>IndexerA: json.Encode(signed PeerRecord A)
IndexerA->>IndexerA: DHT.PutValue("/node/"+DID_A, record)
NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
DBA-->>NodeA: local peer A (or generated UUID)
NodeA->>NodeA: StartGC(30s) — GC over StreamRecords
NodeA->>StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA->>StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA->>DBA: Search(PEER, PARTNER) → partner list
DBA-->>StreamA: [] (no partners at startup)
StreamA-->>NodeA: StreamService A
NodeA->>PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA->>PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA-->>NodeA: PubSubService A
NodeA->>NodeA: SubscribeToSearch(ps, callback)
Note over NodeA: callback: GetPeerRecord(evt.From)<br/>→ StreamService.SendResponse
NodeA->>NATSA: ListenNATS(nodeA)
Note over NATSA: Registers handlers:<br/>CREATE_RESOURCE, PROPALGATION_EVENT
NodeA-->>MainA: *Node A ready


@@ -0,0 +1,58 @@
@startuml
title Node Initialization — Peer A (InitNode)
participant "main (Peer A)" as MainA
participant "Node A" as NodeA
participant "libp2p (Peer A)" as libp2pA
participant "DB Peer A (oc-lib)" as DBA
participant "NATS A" as NATSA
participant "Indexer (shared)" as IndexerA
participant "StreamService A" as StreamA
participant "PubSubService A" as PubSubA
MainA -> NodeA: InitNode(isNode, isIndexer, isNativeIndexer)
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv
NodeA -> NodeA: LoadPSKFromFile() → psk
NodeA -> libp2pA: New(PrivateNetwork(psk), Identity(priv), ListenAddr:4001)
libp2pA --> NodeA: host A (PeerID_A)
note over NodeA: isNode == true
NodeA -> libp2pA: NewGossipSub(ctx, host)
libp2pA --> NodeA: ps (GossipSub)
NodeA -> IndexerA: ConnectToIndexers → SendHeartbeat /opencloud/heartbeat/1.0
note over IndexerA: Long-lived heartbeat established\nQuality score computed (bw + uptime + diversity)
IndexerA --> NodeA: OK
NodeA -> NodeA: claimInfo(name, hostname)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)
IndexerA -> IndexerA: DHT.PutValue("/node/"+DID_A, record)
NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: local peer A (or generated UUID)
NodeA -> NodeA: StartGC(30s) — GC over StreamRecords
NodeA -> StreamA: InitStream(ctx, host, PeerID_A, 1000, nodeA)
StreamA -> StreamA: SetStreamHandler(heartbeat/partner, search, planner, ...)
StreamA -> DBA: Search(PEER, PARTNER) → partner list
DBA --> StreamA: [] (no partners at startup)
StreamA --> NodeA: StreamService A
NodeA -> PubSubA: InitPubSub(ctx, host, ps, nodeA, streamA)
PubSubA -> PubSubA: subscribeEvents(PB_SEARCH, timeout=-1)
PubSubA --> NodeA: PubSubService A
NodeA -> NodeA: SubscribeToSearch(ps, callback)
note over NodeA: callback: GetPeerRecord(evt.From)\n→ StreamService.SendResponse
NodeA -> NATSA: ListenNATS(nodeA)
note over NATSA: Registers handlers:\nCREATE_RESOURCE, PROPALGATION_EVENT
NodeA --> MainA: *Node A ready
@enduml


@@ -0,0 +1,38 @@
sequenceDiagram
title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)
participant DBA as DB Peer A (oc-lib)
participant NodeA as Node A
participant IndexerA as Indexer (shared)
participant DHT as DHT Kademlia
participant NATSA as NATS A
NodeA->>DBA: NewRequestAdmin(PEER).Search(SELF)
DBA-->>NodeA: existing peer (DID_A) or new UUID
NodeA->>NodeA: LoadKeyFromFilePrivate() → priv A
NodeA->>NodeA: LoadKeyFromFilePublic() → pub A
NodeA->>NodeA: crypto.MarshalPublicKey(pub A) → pubBytes
NodeA->>NodeA: Build PeerRecord A {<br/> Name, DID, PubKey,<br/> PeerID: PeerID_A,<br/> APIUrl: hostname,<br/> StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,<br/> NATSAddress, WalletAddress<br/>}
NodeA->>NodeA: sha256(json(rec)) → hash
NodeA->>NodeA: priv.Sign(hash) → signature
NodeA->>NodeA: rec.ExpiryDate = now + 150s
loop For each StaticIndexer (Indexer A, B, …)
NodeA->>IndexerA: TempStream /opencloud/record/publish/1.0
NodeA->>IndexerA: json.Encode(signed PeerRecord A)
IndexerA->>IndexerA: Verify signature
IndexerA->>IndexerA: Check active heartbeat stream for PeerID_A
IndexerA->>DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT-->>IndexerA: ok
end
NodeA->>NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA->>NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA->>DBA: Upsert Peer A (SearchAttr: peer_id)
DBA-->>NATSA: ok
NodeA-->>NodeA: *peer.Peer A (SELF)


@@ -0,0 +1,40 @@
@startuml
title Node Claim — Peer A publishes its PeerRecord (claimInfo + publishPeerRecord)
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "Indexer (shared)" as IndexerA
participant "DHT Kademlia" as DHT
participant "NATS A" as NATSA
NodeA -> DBA: NewRequestAdmin(PEER).Search(SELF)
DBA --> NodeA: existing peer (DID_A) or new UUID
NodeA -> NodeA: LoadKeyFromFilePrivate() → priv A
NodeA -> NodeA: LoadKeyFromFilePublic() → pub A
NodeA -> NodeA: crypto.MarshalPublicKey(pub A) → pubBytes
NodeA -> NodeA: Build PeerRecord A {\n Name, DID, PubKey,\n PeerID: PeerID_A,\n APIUrl: hostname,\n StreamAddress: /ip4/.../tcp/4001/p2p/PeerID_A,\n NATSAddress, WalletAddress\n}
NodeA -> NodeA: sha256(json(rec)) → hash
NodeA -> NodeA: priv.Sign(hash) → signature
NodeA -> NodeA: rec.ExpiryDate = now + 150s
loop For each StaticIndexer (Indexer A, B, ...)
NodeA -> IndexerA: TempStream /opencloud/record/publish/1.0
NodeA -> IndexerA: json.Encode(signed PeerRecord A)
IndexerA -> IndexerA: Verify signature
IndexerA -> IndexerA: Check active heartbeat stream for PeerID_A
IndexerA -> DHT: PutValue("/node/"+DID_A, PeerRecord A)
DHT --> IndexerA: ok
end
NodeA -> NodeA: rec.ExtractPeer(DID_A, DID_A, pub A)
NodeA -> NATSA: SetNATSPub(CREATE_RESOURCE, {PEER, Peer A JSON})
NATSA -> DBA: Upsert Peer A (SearchAttr: peer_id)
DBA --> NATSA: ok
NodeA --> NodeA: *peer.Peer A (SELF)
@enduml


@@ -0,0 +1,47 @@
sequenceDiagram
title Indexer — Dual heartbeat (Peer A + Peer B → shared Indexer)
participant NodeA as Node A
participant NodeB as Node B
participant Indexer as IndexerService (shared)
Note over NodeA,NodeB: Each peer ticks every 20s
par Peer A heartbeat
NodeA->>Indexer: NewStream /opencloud/heartbeat/1.0
NodeA->>Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})
Indexer->>Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
Note over Indexer: len(peers) < maxNodes ?
Indexer->>Indexer: getBandwidthChallenge(512-2048 bytes, stream)
Indexer->>NodeA: Write(random payload)
NodeA->>Indexer: Echo(same payload)
Indexer->>Indexer: Measure round-trip → Mbps A
Indexer->>Indexer: getDiversityRate(host, IndexersBinded_A)
Note over Indexer: /24 subnet diversity of the bound indexers
Indexer->>Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
Note over Indexer: Score = 0.4×uptime + 0.4×bandwidth + 0.2×diversity
alt Score A < 75
Indexer->>NodeA: (close stream)
else Score A ≥ 75
Indexer->>Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
end
and Peer B heartbeat
NodeB->>Indexer: NewStream /opencloud/heartbeat/1.0
NodeB->>Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})
Indexer->>Indexer: CheckHeartbeat → getBandwidthChallenge
Indexer->>NodeB: Write(random payload)
NodeB->>Indexer: Echo(same payload)
Indexer->>Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)
alt Score B ≥ 75
Indexer->>Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
end
end
Note over Indexer: Both peers are now<br/>registered with their active streams


@@ -0,0 +1,49 @@
@startuml
title Indexer — Dual heartbeat (Peer A + Peer B → shared Indexer)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
note over NodeA,NodeB: Each peer ticks every 20s
par Peer A heartbeat
NodeA -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeA -> Indexer: json.Encode(Heartbeat A {Name, DID_A, PeerID_A, IndexersBinded})
Indexer -> Indexer: CheckHeartbeat(host, stream, streams, mu, maxNodes)
note over Indexer: len(peers) < maxNodes ?
Indexer -> Indexer: getBandwidthChallenge(512-2048 bytes, stream)
Indexer -> NodeA: Write(random payload)
NodeA -> Indexer: Echo(same payload)
Indexer -> Indexer: Measure round-trip → Mbps A
Indexer -> Indexer: getDiversityRate(host, IndexersBinded_A)
note over Indexer: /24 subnet diversity of the bound indexers
Indexer -> Indexer: ComputeIndexerScore(uptimeA%, MbpsA%, diversityA%)
note over Indexer: Score = 0.4×uptime + 0.4×bandwidth + 0.2×diversity
alt Score A < 75
Indexer -> NodeA: (close stream)
else Score A >= 75
Indexer -> Indexer: StreamRecord[PeerID_A] = {DID_A, Heartbeat, UptimeTracker}
end
else Peer B heartbeat
NodeB -> Indexer: NewStream /opencloud/heartbeat/1.0
NodeB -> Indexer: json.Encode(Heartbeat B {Name, DID_B, PeerID_B, IndexersBinded})
Indexer -> Indexer: CheckHeartbeat → getBandwidthChallenge
Indexer -> NodeB: Write(random payload)
NodeB -> Indexer: Echo(same payload)
Indexer -> Indexer: ComputeIndexerScore(uptimeB%, MbpsB%, diversityB%)
alt Score B >= 75
Indexer -> Indexer: StreamRecord[PeerID_B] = {DID_B, Heartbeat, UptimeTracker}
end
end
note over Indexer: Both peers are now\nregistered with their active streams
@enduml
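
The scoring note above can be written out directly. The 0.4/0.4/0.2 weights and the 75 admission threshold come from the diagram; the /24 diversity heuristic below is a plausible reading of getDiversityRate, not its actual code:

```go
package main

import (
	"fmt"
	"net"
)

// diversityRate returns the share of distinct /24 subnets among the bound
// indexers' IPv4 addresses, as a percentage (a sketch, the real heuristic
// may weigh things differently).
func diversityRate(addrs []string) float64 {
	if len(addrs) == 0 {
		return 0
	}
	subnets := map[string]bool{}
	for _, a := range addrs {
		ip := net.ParseIP(a).To4()
		if ip == nil {
			continue
		}
		subnets[ip.Mask(net.CIDRMask(24, 32)).String()] = true
	}
	return 100 * float64(len(subnets)) / float64(len(addrs))
}

// ComputeIndexerScore applies the weights from the note above:
// Score = 0.4×uptime + 0.4×bandwidth + 0.2×diversity, each input in [0, 100].
func ComputeIndexerScore(uptimePct, bandwidthPct, diversityPct float64) float64 {
	return 0.4*uptimePct + 0.4*bandwidthPct + 0.2*diversityPct
}

func main() {
	div := diversityRate([]string{"10.0.1.5", "10.0.2.5", "10.0.2.9"})
	score := ComputeIndexerScore(95, 80, div)
	fmt.Println(div, score, score >= 75) // 75 is the admission threshold
}
```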


@@ -0,0 +1,41 @@
sequenceDiagram
title Indexer — Peer A publishes, Peer B publishes (handleNodePublish → DHT)
participant NodeA as Node A
participant NodeB as Node B
participant Indexer as IndexerService (shared)
participant DHT as DHT Kademlia
Note over NodeA: After claimInfo or a TTL refresh
par Peer A publishes its PeerRecord
NodeA->>Indexer: TempStream /opencloud/record/publish/1.0
NodeA->>Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})
Indexer->>Indexer: Verify sig_A (rebuild minimal rec, pubKey_A.Verify)
Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists
alt Active heartbeat for A
Indexer->>Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
Indexer->>DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
DHT-->>Indexer: ok
else No heartbeat
Indexer->>NodeA: (error "no heartbeat", stream closed)
end
and Peer B publishes its PeerRecord
NodeB->>Indexer: TempStream /opencloud/record/publish/1.0
NodeB->>Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})
Indexer->>Indexer: Verify sig_B
Indexer->>Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists
alt Active heartbeat for B
Indexer->>Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
Indexer->>DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
DHT-->>Indexer: ok
else No heartbeat
Indexer->>NodeB: (error "no heartbeat", stream closed)
end
end
Note over DHT: The DHT now contains<br/>"/node/DID_A" and "/node/DID_B"


@@ -0,0 +1,43 @@
@startuml
title Indexer — Peer A publishes, Peer B publishes (handleNodePublish → DHT)
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
note over NodeA: After claimInfo or a TTL refresh
par Peer A publishes its PeerRecord
NodeA -> Indexer: TempStream /opencloud/record/publish/1.0
NodeA -> Indexer: json.Encode(PeerRecord A {DID_A, PeerID_A, PubKey_A, Expiry, Sig_A})
Indexer -> Indexer: Verify sig_A (rebuild minimal rec, pubKey_A.Verify)
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_A] exists
alt Active heartbeat for A
Indexer -> Indexer: StreamRecord A → DID_A, Record=PeerRecord A, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_A, PeerRecord A JSON)
DHT --> Indexer: ok
else No heartbeat
Indexer -> NodeA: (error "no heartbeat", stream closed)
end
else Peer B publishes its PeerRecord
NodeB -> Indexer: TempStream /opencloud/record/publish/1.0
NodeB -> Indexer: json.Encode(PeerRecord B {DID_B, PeerID_B, PubKey_B, Expiry, Sig_B})
Indexer -> Indexer: Verify sig_B
Indexer -> Indexer: Check StreamRecords[Heartbeat][PeerID_B] exists
alt Active heartbeat for B
Indexer -> Indexer: StreamRecord B → DID_B, Record=PeerRecord B, LastSeen=now
Indexer -> DHT: PutValue("/node/"+DID_B, PeerRecord B JSON)
DHT --> Indexer: ok
else No heartbeat
Indexer -> NodeB: (error "no heartbeat", stream closed)
end
end
note over DHT: The DHT now contains\n"/node/DID_A" and "/node/DID_B"
@enduml
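
The gate in handleNodePublish (valid signature and an active heartbeat stream, otherwise the publish is rejected) reduces to a small check; all names here are illustrative:

```go
package main

import (
	"errors"
	"fmt"
)

// handlePublish sketches the publish gate: a record reaches the DHT only if
// its signature verifies AND the peer already holds an active heartbeat
// stream on this indexer. putValue stands in for DHT.PutValue.
func handlePublish(peerID string, sigOK bool, heartbeats map[string]bool,
	putValue func(key string) error, did string) error {
	if !sigOK {
		return errors.New("invalid signature")
	}
	if !heartbeats[peerID] {
		// The diagram's else branch: error "no heartbeat", stream closed.
		return errors.New("no heartbeat")
	}
	return putValue("/node/" + did)
}

func main() {
	heartbeats := map[string]bool{"PeerID_A": true}
	put := func(key string) error { fmt.Println("PutValue", key); return nil }
	fmt.Println(handlePublish("PeerID_A", true, heartbeats, put, "DID_A"))
	fmt.Println(handlePublish("PeerID_B", true, heartbeats, put, "DID_B"))
}
```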


@@ -0,0 +1,49 @@
sequenceDiagram
title Indexer — Peer A resolves Peer B (GetPeerRecord + handleNodeGet)
participant NATSA as NATS A
participant DBA as DB Peer A (oc-lib)
participant NodeA as Node A
participant Indexer as IndexerService (shared)
participant DHT as DHT Kademlia
participant NATSA2 as NATS A (return)
Note over NodeA: Triggered by: NATS PB_SEARCH PEER<br/>or the SubscribeToSearch callback
NodeA->>DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
DBA-->>NodeA: local Peer B (if known) → resolves DID_B + PeerID_B<br/>otherwise uses the raw value
loop For each StaticIndexer
NodeA->>Indexer: TempStream /opencloud/record/get/1.0
NodeA->>Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})
Indexer->>Indexer: key = "/node/" + DID_B
Indexer->>DHT: SearchValue(ctx 10s, "/node/"+DID_B)
DHT-->>Indexer: channel of bytes (PeerRecord B)
loop For each DHT result
Indexer->>Indexer: Unmarshal → PeerRecord B
alt PeerRecord.PeerID == PeerID_B
Indexer->>Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
Indexer->>Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
end
end
Indexer->>NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end
loop For each returned PeerRecord
NodeA->>NodeA: rec.Verify() → validates B's signature
NodeA->>NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)
alt ourDID_A == DID_B (this is our own entry)
Note over NodeA: Republish to refresh the TTL
NodeA->>Indexer: publishPeerRecord(rec) [refresh 2 min]
end
NodeA->>NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,<br/>SearchAttr:"peer_id"})
NATSA2->>DBA: Upsert Peer B into DB A
DBA-->>NATSA2: ok
end
NodeA-->>NodeA: []*peer.Peer → [Peer B]


@@ -0,0 +1,51 @@
@startuml
title Indexer — Peer A resolves Peer B (GetPeerRecord + handleNodeGet)
participant "NATS A" as NATSA
participant "DB Peer A (oc-lib)" as DBA
participant "Node A" as NodeA
participant "IndexerService (shared)" as Indexer
participant "DHT Kademlia" as DHT
participant "NATS A (return)" as NATSA2
note over NodeA: Triggered by: NATS PB_SEARCH PEER\nor the SubscribeToSearch callback
NodeA -> DBA: NewRequestAdmin(PEER).Search(DID_B or PeerID_B)
DBA --> NodeA: local Peer B (if known) → resolves DID_B + PeerID_B\notherwise uses the raw value
loop For each StaticIndexer
NodeA -> Indexer: TempStream /opencloud/record/get/1.0
NodeA -> Indexer: json.Encode(GetValue{Key: DID_B, PeerID: PeerID_B})
Indexer -> Indexer: key = "/node/" + DID_B
Indexer -> DHT: SearchValue(ctx 10s, "/node/"+DID_B)
DHT --> Indexer: channel of bytes (PeerRecord B)
loop For each DHT result
Indexer -> Indexer: Unmarshal → PeerRecord B
alt PeerRecord.PeerID == PeerID_B
Indexer -> Indexer: resp.Found=true, resp.Records[PeerID_B]=PeerRecord B
Indexer -> Indexer: StreamRecord B.LastSeen = now (if heartbeat active)
end
end
Indexer -> NodeA: json.Encode(GetResponse{Found:true, Records:{PeerID_B: PeerRecord B}})
end
loop For each returned PeerRecord
NodeA -> NodeA: rec.Verify() → validates B's signature
NodeA -> NodeA: rec.ExtractPeer(ourDID_A, DID_B, pubKey_B)
alt ourDID_A == DID_B (this is our own entry)
note over NodeA: Republish to refresh the TTL
NodeA -> Indexer: publishPeerRecord(rec) [refresh 2 min]
end
NodeA -> NATSA2: SetNATSPub(CREATE_RESOURCE, {PEER, Peer B JSON,\nSearchAttr:"peer_id"})
NATSA2 -> DBA: Upsert Peer B into DB A
DBA --> NATSA2: ok
end
NodeA --> NodeA: []*peer.Peer → [Peer B]
@enduml


@@ -0,0 +1,39 @@
sequenceDiagram
title Native Indexer — Registering an Indexer with the Native
participant IndexerA as Indexer A
participant IndexerB as Indexer B
participant Native as Native Indexer (shared)
participant DHT as DHT Kademlia
participant PubSub as GossipSub (oc-indexer-registry)
Note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)
par Indexer A registers
IndexerA->>IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
IndexerA->>Native: NewStream /opencloud/native/subscribe/1.0
IndexerA->>Native: json.Encode(IndexerRegistration A)
Native->>Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
Native->>DHT: PutValue("/indexer/"+PeerID_A, entry A)
DHT-->>Native: ok
Native->>Native: liveIndexers[PeerID_A] = entry A
Native->>Native: knownPeerIDs[PeerID_A] = {}
Native->>PubSub: topic.Publish([]byte(PeerID_A))
Note over PubSub: Gossiped to the other Natives<br/>→ they add PeerID_A to knownPeerIDs<br/>→ DHT refreshed on the next 30s tick
IndexerA->>Native: stream.Close()
and Indexer B registers
IndexerB->>IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
IndexerB->>Native: NewStream /opencloud/native/subscribe/1.0
IndexerB->>Native: json.Encode(IndexerRegistration B)
Native->>Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
Native->>DHT: PutValue("/indexer/"+PeerID_B, entry B)
DHT-->>Native: ok
Native->>Native: liveIndexers[PeerID_B] = entry B
Native->>PubSub: topic.Publish([]byte(PeerID_B))
IndexerB->>Native: stream.Close()
end
Note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}


@@ -0,0 +1,41 @@
@startuml
title Native Indexer — Registering an Indexer with the Native
participant "Indexer A" as IndexerA
participant "Indexer B" as IndexerB
participant "Native Indexer (shared)" as Native
participant "DHT Kademlia" as DHT
participant "GossipSub (oc-indexer-registry)" as PubSub
note over IndexerA,IndexerB: At startup + every 60s (StartNativeRegistration)
par Indexer A registers
IndexerA -> IndexerA: Build IndexerRegistration{PeerID_A, Addr_A}
IndexerA -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerA -> Native: json.Encode(IndexerRegistration A)
Native -> Native: Decode → liveIndexerEntry{PeerID_A, Addr_A, ExpiresAt=now+66s}
Native -> DHT: PutValue("/indexer/"+PeerID_A, entry A)
DHT --> Native: ok
Native -> Native: liveIndexers[PeerID_A] = entry A
Native -> Native: knownPeerIDs[PeerID_A] = {}
Native -> PubSub: topic.Publish([]byte(PeerID_A))
note over PubSub: Gossiped to the other Natives\n→ they add PeerID_A to knownPeerIDs\n→ DHT refreshed on the next 30s tick
IndexerA -> Native: stream.Close()
else Indexer B registers
IndexerB -> IndexerB: Build IndexerRegistration{PeerID_B, Addr_B}
IndexerB -> Native: NewStream /opencloud/native/subscribe/1.0
IndexerB -> Native: json.Encode(IndexerRegistration B)
Native -> Native: Decode → liveIndexerEntry{PeerID_B, Addr_B, ExpiresAt=now+66s}
Native -> DHT: PutValue("/indexer/"+PeerID_B, entry B)
DHT --> Native: ok
Native -> Native: liveIndexers[PeerID_B] = entry B
Native -> PubSub: topic.Publish([]byte(PeerID_B))
IndexerB -> Native: stream.Close()
end
note over Native: liveIndexers = {PeerID_A: entryA, PeerID_B: entryB}
@enduml


@@ -0,0 +1,60 @@
sequenceDiagram
title Native — ConnectToNatives + Consensus (Peer A bootstrap)
participant NodeA as Node A
participant Native1 as Native #1 (primary)
participant Native2 as Native #2
participant NativeN as Native #N
participant DHT as DHT Kademlia
Note over NodeA: NativeIndexerAddresses configured<br/>Called during InitNode → ConnectToIndexers
NodeA->>NodeA: Parse NativeIndexerAddresses → StaticNatives
NodeA->>Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
NodeA->>Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
%% Step 1: fetch an initial pool
NodeA->>Native1: Connect + NewStream /opencloud/native/indexers/1.0
NodeA->>Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})
Native1->>Native1: reachableLiveIndexers()
Note over Native1: Filters liveIndexers by TTL<br/>pings each candidate (PeerIsAlive)
alt No indexer known to Native1
Native1->>Native1: selfDelegate(NodeA.PeerID, resp)
Note over Native1: IsSelfFallback=true<br/>Indexers=[native1 addr]
Native1->>NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
NodeA->>NodeA: StaticIndexers[native1] = native1
Note over NodeA: No consensus — native1 used directly as the indexer
else Indexers available
Native1->>NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}
%% Step 2: consensus
Note over NodeA: clientSideConsensus(candidates)
par Parallel consensus requests
NodeA->>Native1: NewStream /opencloud/native/consensus/1.0
NodeA->>Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native1->>Native1: Cross-check against its own liveIndexers
Native1->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
and
NodeA->>Native2: NewStream /opencloud/native/consensus/1.0
NodeA->>Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native2->>Native2: Cross-check against its own liveIndexers
Native2->>NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
and
NodeA->>NativeN: NewStream /opencloud/native/consensus/1.0
NodeA->>NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
NativeN->>NativeN: Cross-check against its own liveIndexers
NativeN->>NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
end
Note over NodeA: Aggregates the votes (4s timeout)<br/>Addr_A → 3/3 votes → confirmed ✓<br/>Addr_B → 2/3 votes → confirmed ✓
alt confirmed < maxIndexer && suggestions available
Note over NodeA: Round 2 — re-challenge with the suggestions
NodeA->>NodeA: clientSideConsensus(confirmed + sample(suggestions))
end
NodeA->>NodeA: StaticIndexers = majority-confirmed addresses
end


@@ -0,0 +1,62 @@
@startuml
title Native — ConnectToNatives + Consensus (Peer A bootstrap)
participant "Node A" as NodeA
participant "Native #1 (primary)" as Native1
participant "Native #2" as Native2
participant "Native #N" as NativeN
participant "DHT Kademlia" as DHT
note over NodeA: NativeIndexerAddresses configured\nCalled during InitNode → ConnectToIndexers
NodeA -> NodeA: Parse NativeIndexerAddresses → StaticNatives
NodeA -> Native1: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
NodeA -> Native2: SendHeartbeat /opencloud/heartbeat/1.0 (20s tick)
' Step 1: fetch an initial pool
NodeA -> Native1: Connect + NewStream /opencloud/native/indexers/1.0
NodeA -> Native1: json.Encode(GetIndexersRequest{Count: maxIndexer})
Native1 -> Native1: reachableLiveIndexers()
note over Native1: Filters liveIndexers by TTL\npings each candidate (PeerIsAlive)
alt No indexer known to Native1
Native1 -> Native1: selfDelegate(NodeA.PeerID, resp)
note over Native1: IsSelfFallback=true\nIndexers=[native1 addr]
Native1 -> NodeA: GetIndexersResponse{IsSelfFallback:true, Indexers:[native1]}
NodeA -> NodeA: StaticIndexers[native1] = native1
note over NodeA: No consensus — native1 used directly as the indexer
else Indexers available
Native1 -> NodeA: GetIndexersResponse{Indexers:[Addr_IndexerA, Addr_IndexerB, ...]}
' Step 2: consensus
note over NodeA: clientSideConsensus(candidates)
par Parallel consensus requests
NodeA -> Native1: NewStream /opencloud/native/consensus/1.0
NodeA -> Native1: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native1 -> Native1: Cross-check against its own liveIndexers
Native1 -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
else
NodeA -> Native2: NewStream /opencloud/native/consensus/1.0
NodeA -> Native2: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
Native2 -> Native2: Cross-check against its own liveIndexers
Native2 -> NodeA: ConsensusResponse{Trusted:[Addr_A], Suggestions:[Addr_C]}
else
NodeA -> NativeN: NewStream /opencloud/native/consensus/1.0
NodeA -> NativeN: ConsensusRequest{Candidates:[Addr_A, Addr_B]}
NativeN -> NativeN: Cross-check against its own liveIndexers
NativeN -> NodeA: ConsensusResponse{Trusted:[Addr_A, Addr_B], Suggestions:[]}
end
note over NodeA: Aggregates the votes (4s timeout)\nAddr_A → 3/3 votes → confirmed ✓\nAddr_B → 2/3 votes → confirmed ✓
alt confirmed < maxIndexer && suggestions available
note over NodeA: Round 2 — re-challenge with the suggestions
NodeA -> NodeA: clientSideConsensus(confirmed + sample(suggestions))
end
NodeA -> NodeA: StaticIndexers = majority-confirmed addresses
end
@enduml
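
The vote aggregation in clientSideConsensus can be sketched as a simple tally; the strict-majority rule is an assumption, the diagram only states that addresses are majority-confirmed:

```go
package main

import (
	"fmt"
	"sort"
)

// tallyConsensus counts, for each candidate address, how many Natives
// included it in their Trusted list, and confirms it when a strict majority
// of the responding Natives vouches for it.
func tallyConsensus(responses [][]string) []string {
	votes := map[string]int{}
	for _, trusted := range responses {
		for _, addr := range trusted {
			votes[addr]++
		}
	}
	var confirmed []string
	for addr, n := range votes {
		if 2*n > len(responses) { // strict majority of responders
			confirmed = append(confirmed, addr)
		}
	}
	sort.Strings(confirmed) // deterministic order for display
	return confirmed
}

func main() {
	// The three ConsensusResponses from the diagram:
	responses := [][]string{
		{"Addr_A", "Addr_B"}, // Native #1
		{"Addr_A"},           // Native #2
		{"Addr_A", "Addr_B"}, // Native #N
	}
	// Addr_A: 3/3 votes, Addr_B: 2/3 votes, so both are confirmed.
	fmt.Println(tallyConsensus(responses))
}
```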


@@ -0,0 +1,49 @@
sequenceDiagram
title NATS — CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream
participant AppA as App Peer A (oc-api)
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant StreamB as StreamService B
participant DBA as DB Peer A (oc-lib)
Note over AppA: Peer B has just been discovered<br/>(via an indexer or manually)
AppA->>NATSA: Publish(CREATE_RESOURCE, {<br/> FromApp:"oc-api",<br/> Datatype:PEER,<br/> Payload: Peer B {StreamAddress_B, Relation:PARTNER}<br/>})
NATSA->>NodeA: ListenNATS callback → CREATE_RESOURCE
NodeA->>NodeA: resp.FromApp == "oc-discovery" ? → No, continue
NodeA->>NodeA: json.Unmarshal(payload) → peer.Peer B
NodeA->>NodeA: pp.AddrInfoFromString(B.StreamAddress)
Note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}
NodeA->>StreamA: Mu.Lock()
alt peer B.Relation == PARTNER
NodeA->>StreamA: ConnectToPartner(B.StreamAddress)
StreamA->>StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
StreamA->>NodeB: Connect (libp2p)
StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})
NodeB->>StreamB: HandlePartnerHeartbeat(stream)
StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
StreamB->>StreamA: Echo(payload)
StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
Note over StreamA,StreamB: Long-lived partner stream established<br/>in both directions
else peer B.Relation != PARTNER (revocation / blacklist)
Note over NodeA: Remove all streams to Peer B
loop For each protocol in Streams
NodeA->>StreamA: streams[proto][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(streams[proto], PeerID_B)
end
end
NodeA->>StreamA: Mu.Unlock()
NodeA->>DBA: (no direct write here — handled by the source app)


@@ -0,0 +1,50 @@
@startuml
title NATS — CREATE_RESOURCE: Peer A discovers Peer B and establishes the stream
participant "App Peer A (oc-api)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer A (oc-lib)" as DBA
note over AppA: Peer B has just been discovered\n(via an indexer or manually)
AppA -> NATSA: Publish(CREATE_RESOURCE, {\n FromApp:"oc-api",\n Datatype:PEER,\n Payload: Peer B {StreamAddress_B, Relation:PARTNER}\n})
NATSA -> NodeA: ListenNATS callback → CREATE_RESOURCE
NodeA -> NodeA: resp.FromApp == "oc-discovery" ? → No, continue
NodeA -> NodeA: json.Unmarshal(payload) → peer.Peer B
NodeA -> NodeA: pp.AddrInfoFromString(B.StreamAddress)
note over NodeA: ad_B = {ID: PeerID_B, Addrs: [...]}
NodeA -> StreamA: Mu.Lock()
alt peer B.Relation == PARTNER
NodeA -> StreamA: ConnectToPartner(B.StreamAddress)
StreamA -> StreamA: AddrInfoFromString(B.StreamAddress) → ad_B
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nin both directions
else peer B.Relation != PARTNER (revocation / blacklist)
note over NodeA: Remove all streams to Peer B
loop For each protocol in Streams
NodeA -> StreamA: streams[proto][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(streams[proto], PeerID_B)
end
end
NodeA -> StreamA: Mu.Unlock()
NodeA -> DBA: (no direct write here — handled by the source app)
@enduml
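
The branch on Relation in the CREATE_RESOURCE handler (PARTNER triggers a partner connection, anything else tears down every stream to that peer) reduces to the following sketch; the map shape, the "BLACKLIST" value, and the function names are illustrative:

```go
package main

import "fmt"

// peerMsg carries the fields this handler needs; names are illustrative.
type peerMsg struct {
	PeerID        string
	StreamAddress string
	Relation      string
}

// handlePeerResource sketches the alt branch: PARTNER connects, any other
// relation (the diagram's revocation / blacklist case) removes the peer's
// streams on every protocol.
func handlePeerResource(p peerMsg, streams map[string]map[string]bool, connect func(addr string)) {
	if p.Relation == "PARTNER" {
		connect(p.StreamAddress) // → ConnectToPartner → partner heartbeat stream
		return
	}
	for proto := range streams {
		// The real code also calls Stream.Close() before deleting the entry.
		delete(streams[proto], p.PeerID)
	}
}

func main() {
	streams := map[string]map[string]bool{
		"/opencloud/resource/heartbeat/partner/1.0": {"PeerID_B": true},
		"/opencloud/resource/planner/1.0":           {"PeerID_B": true},
	}
	handlePeerResource(peerMsg{PeerID: "PeerID_B", Relation: "BLACKLIST"}, streams, nil)
	fmt.Println(len(streams["/opencloud/resource/planner/1.0"]))
}
```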


@@ -0,0 +1,66 @@
sequenceDiagram
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant NATSB as NATS B
participant DBB as DB Peer B (oc-lib)
AppA->>NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA->>NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA->>NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA->>NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}
alt Action == PB_DELETE
NodeA->>StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
StreamA->>StreamA: searchPeer(PARTNER) → [Peer B, ...]
StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
Note over NodeB: /opencloud/resource/delete/1.0
NodeB->>NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
NodeB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Delete the resource in DB B
else Action == PB_UPDATE (via ProtocolUpdateResource)
NodeA->>StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
StreamA->>NodeB: write → /opencloud/resource/update/1.0
NodeB->>NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Upsert the resource in DB B
else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
NodeA->>NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
loop For each target peer_id
NodeA->>StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
StreamA->>NodeB: write → /opencloud/resource/considers/1.0
NodeB->>NodeB: passConsidering(evt)
NodeB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
NATSB->>DBB: (handled by oc-workflow on NATS B)
end
else Action == PB_PLANNER (broadcast)
NodeA->>NodeA: Unmarshal → {peer_id: nil, ...payload}
loop For each open ProtocolSendPlanner stream
NodeA->>StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
StreamA->>NodeB: write → /opencloud/resource/planner/1.0
end
else Action == PB_CLOSE_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
else Action == PB_SEARCH + DataType == PEER
NodeA->>NodeA: Unmarshal → {search: "..."}
NodeA->>NodeA: GetPeerRecord(ctx, search)
Note over NodeA: Resolution via DB A + Indexer + DHT
NodeA->>NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
NATSA->>NATSA: (AppA receives the result)
else Action == PB_SEARCH + other DataType
NodeA->>NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
NodeA->>NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
Note over NodeA: See diagrams 10 and 11
end

@startuml
title NATS — PROPALGATION_EVENT: Peer A propagates to Peer B
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
AppA -> NATSA: Publish(PROPALGATION_EVENT, {Action, DataType, Payload})
NATSA -> NodeA: ListenNATS callback → PROPALGATION_EVENT
NodeA -> NodeA: resp.FromApp != "oc-discovery" ? → continue
NodeA -> NodeA: json.Unmarshal → PropalgationMessage{Action, DataType, Payload}
alt Action == PB_DELETE
NodeA -> StreamA: ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
StreamA -> StreamA: searchPeer(PARTNER) → [Peer B, ...]
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> NodeB: handleEventFromPartner(evt, ProtocolDeleteResource)
NodeB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete resource from DB B
else Action == PB_UPDATE (via ProtocolUpdateResource)
NodeA -> StreamA: ToPartnerPublishEvent(PB_UPDATE, dt, user, payload)
StreamA -> NodeB: write → /opencloud/resource/update/1.0
NodeB -> NATSB: SetNATSPub(CREATE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Upsert resource in DB B
else Action == PB_CONSIDERS + WORKFLOW_EXECUTION
NodeA -> NodeA: Unmarshal → executionConsidersPayload{PeerIDs:[PeerID_B, ...]}
loop For each target peer_id
NodeA -> StreamA: PublishCommon(dt, user, PeerID_B, ProtocolConsidersResource, payload)
StreamA -> NodeB: write → /opencloud/resource/considers/1.0
NodeB -> NodeB: passConsidering(evt)
NodeB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_CONSIDERS, dt, payload})
NATSB -> DBB: (handled by oc-workflow on NATS B)
end
else Action == PB_PLANNER (broadcast)
NodeA -> NodeA: Unmarshal → {peer_id: nil, ...payload}
loop For each open ProtocolSendPlanner stream
NodeA -> StreamA: PublishCommon(nil, user, pid, ProtocolSendPlanner, payload)
StreamA -> NodeB: write → /opencloud/resource/planner/1.0
end
else Action == PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
else Action == PB_SEARCH + DataType == PEER
NodeA -> NodeA: Unmarshal → {search: "..."}
NodeA -> NodeA: GetPeerRecord(ctx, search)
note over NodeA: Resolution via DB A + Indexer + DHT
NodeA -> NATSA: SetNATSPub(SEARCH_EVENT, {PEER, PeerRecord JSON})
NATSA -> NATSA: (App A receives the result)
else Action == PB_SEARCH + other DataType
NodeA -> NodeA: Unmarshal → {type:"all"|"known"|"partner", search:"..."}
NodeA -> NodeA: PubSubService.SearchPublishEvent(ctx, dt, type, user, search)
note over NodeA: See diagrams 10 and 11
end
@enduml
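The alt/else dispatch above can be sketched in Go with the standard library only. This is a minimal sketch, not the actual oc-discovery code: the struct tags, the string form of the actions, and the `route` helper are assumptions; only the action names and the protocol IDs come from the diagram.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical mirror of the PropalgationMessage decoded by Node A's
// ListenNATS callback; field names and JSON tags are assumptions.
type PropalgationMessage struct {
	Action   string          `json:"action"`
	DataType json.RawMessage `json:"data_type"`
	Payload  json.RawMessage `json:"payload"`
}

// route maps an incoming action to the libp2p protocol the diagram shows
// it being forwarded on; the protocol IDs are those listed in the README.
func route(raw []byte) (string, error) {
	var msg PropalgationMessage
	if err := json.Unmarshal(raw, &msg); err != nil {
		return "", err
	}
	switch msg.Action {
	case "PB_DELETE":
		return "/opencloud/resource/delete/1.0", nil
	case "PB_UPDATE":
		return "/opencloud/resource/update/1.0", nil
	case "PB_CONSIDERS":
		return "/opencloud/resource/considers/1.0", nil
	case "PB_PLANNER", "PB_CLOSE_PLANNER":
		return "/opencloud/resource/planner/1.0", nil
	case "PB_SEARCH":
		return "/opencloud/resource/search/1.0", nil
	default:
		return "", fmt.Errorf("unknown action %q", msg.Action)
	}
}

func main() {
	proto, err := route([]byte(`{"action":"PB_DELETE","data_type":"storage","payload":{}}`))
	fmt.Println(proto, err)
}
```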

sequenceDiagram
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B answers
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant GossipSub as GossipSub libp2p (mesh)
participant NodeB as Node B
participant PubSubB as PubSubService B
participant DBB as DB Peer B (oc-lib)
participant StreamB as StreamService B
participant StreamA as StreamService A
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "all")
NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA->>PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA->>PubSubA: GenerateNodeID() → from = DID_A
PubSubA->>PubSubA: priv_A.Sign(event body) → sig
PubSubA->>PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}
PubSubA->>GossipSub: topic.Join("search")
PubSubA->>GossipSub: topic.Publish(ctx, json(Event))
GossipSub-->>NodeB: Message propagated (gossip mesh)
NodeB->>PubSubB: subscribeEvents listens on topic "search#"
PubSubB->>PubSubB: json.Unmarshal → Event{From: DID_A}
PubSubB->>NodeB: GetPeerRecord(ctx, DID_A)
Note over NodeB: Peer A resolved via DB B or the Indexer
NodeB-->>PubSubB: Peer A {PublicKey_A, Relation, ...}
PubSubB->>PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB->>PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)
PubSubB->>StreamB: SendResponse(Peer A, evt)
StreamB->>DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB-->>StreamB: [Resource1, Resource2, ...]
loop For each matched resource
StreamB->>StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamB->>StreamA: NewStream /opencloud/resource/search/1.0
StreamB->>StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end
StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA->>StreamA: retrieveResponse(evt)
StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA->>AppA: Search results from Peer B

@startuml
title PubSub — Global gossip search (type "all"): Peer A searches, Peer B answers
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "GossipSub libp2p (mesh)" as GossipSub
participant "Node B" as NodeB
participant "PubSubService B" as PubSubB
participant "DB Peer B (oc-lib)" as DBB
participant "StreamService B" as StreamB
participant "StreamService A" as StreamA
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"all", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "all")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "all", user, "gpu")
PubSubA -> PubSubA: publishEvent(PB_SEARCH, user, {search:"gpu"})
PubSubA -> PubSubA: GenerateNodeID() → from = DID_A
PubSubA -> PubSubA: priv_A.Sign(event body) → sig
PubSubA -> PubSubA: Build Event{Type:"search", From:DID_A, Payload:{search:"gpu"}, Sig}
PubSubA -> GossipSub: topic.Join("search")
PubSubA -> GossipSub: topic.Publish(ctx, json(Event))
GossipSub --> NodeB: Message propagated (gossip mesh)
NodeB -> PubSubB: subscribeEvents listens on topic "search#"
PubSubB -> PubSubB: json.Unmarshal → Event{From: DID_A}
PubSubB -> NodeB: GetPeerRecord(ctx, DID_A)
note over NodeB: Peer A resolved via DB B or the Indexer
NodeB --> PubSubB: Peer A {PublicKey_A, Relation, ...}
PubSubB -> PubSubB: event.Verify(Peer A) → validates sig_A
PubSubB -> PubSubB: handleEventSearch(ctx, evt, PB_SEARCH)
PubSubB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(COMPUTE + STORAGE + ..., filters{creator=self, access=PUBLIC OR partnerships[PeerID_A]}, search="gpu")
DBB --> StreamB: [Resource1, Resource2, ...]
loop For each matched resource
StreamB -> StreamB: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamB -> StreamA: NewStream /opencloud/resource/search/1.0
StreamB -> StreamA: json.Encode(Event{Type:search, From:DID_B, DataType, Payload:resource})
end
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Search results from Peer B
@enduml

sequenceDiagram
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B
participant AppA as App Peer A
participant NATSA as NATS A
participant NodeA as Node A
participant PubSubA as PubSubService A
participant StreamA as StreamService A
participant DBA as DB Peer A (oc-lib)
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA->>NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA->>PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")
PubSubA->>StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]
loop For each partner peer (Peer B)
StreamA->>StreamA: json.Marshal({search:"gpu"}) → payload
StreamA->>StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
StreamA->>NodeB: TempStream /opencloud/resource/search/1.0
StreamA->>NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})
NodeB->>StreamB: HandleResponse(stream) → readLoop
StreamB->>StreamB: handleEvent(ProtocolSearchResource, evt)
StreamB->>StreamB: handleEventFromPartner(evt, ProtocolSearchResource)
alt evt.DataType == -1 (all resources)
StreamB->>DBB: Search(PEER, evt.From=DID_A)
Note over StreamB: Resolved locally or via GetPeerRecord
StreamB->>StreamB: SendResponse(Peer A, evt)
StreamB->>DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
DBB-->>StreamB: [Resource1, Resource2, ...]
else evt.DataType specified
StreamB->>DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
DBB-->>StreamB: [Resource1, ...]
end
loop For each resource
StreamB->>StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamA->>StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA->>StreamA: retrieveResponse(evt)
StreamA->>NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA->>AppA: Result from Peer B
end
end
Note over NATSA,DBA: Optional: App A persists<br/>the discovered resources in DB A

@startuml
title Stream — Direct search (type "known"/"partner"): Peer A → Peer B
participant "App Peer A" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "PubSubService A" as PubSubA
participant "StreamService A" as StreamA
participant "DB Peer A (oc-lib)" as DBA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_SEARCH, type:"partner", search:"gpu"})
NATSA -> NodeA: ListenNATS → PB_SEARCH (type "partner")
NodeA -> PubSubA: SearchPublishEvent(ctx, dt, "partner", user, "gpu")
PubSubA -> StreamA: SearchPartnersPublishEvent(dt, user, "gpu")
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
loop For each partner peer (Peer B)
StreamA -> StreamA: json.Marshal({search:"gpu"}) → payload
StreamA -> StreamA: write(PeerID_B, addr_B, dt, user, payload, ProtocolSearchResource)
StreamA -> NodeB: TempStream /opencloud/resource/search/1.0
StreamA -> NodeB: json.Encode(Event{Type:search, From:DID_A, DataType, Payload:{search:"gpu"}})
NodeB -> StreamB: HandleResponse(stream) → readLoop
StreamB -> StreamB: handleEvent(ProtocolSearchResource, evt)
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolSearchResource)
alt evt.DataType == -1 (all resources)
StreamB -> DBB: Search(PEER, evt.From=DID_A)
note over StreamB: Resolved locally or via GetPeerRecord
StreamB -> StreamB: SendResponse(Peer A, evt)
StreamB -> DBB: Search(ALL_RESOURCES, filter{creator=B + public OR partner A + search:"gpu"})
DBB --> StreamB: [Resource1, Resource2, ...]
else evt.DataType specified
StreamB -> DBB: Search(DataType, filter{creator=B + access + search:"gpu"})
DBB --> StreamB: [Resource1, ...]
end
loop For each resource
StreamB -> StreamA: write(PeerID_A, addr_A, dt, resource JSON, ProtocolSearchResource)
StreamA -> StreamA: readLoop → handleEvent(ProtocolSearchResource, evt)
StreamA -> StreamA: retrieveResponse(evt)
StreamA -> NATSA: SetNATSPub(SEARCH_EVENT, {DataType, resource JSON})
NATSA -> AppA: Result from Peer B
end
end
note over NATSA,DBA: Optional: App A persists\nthe discovered resources in DB A
@enduml
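The access filter both search paths apply (`creator=self`, `access=PUBLIC OR partnerships[PeerID_A]`, plus a text match on the search term) can be sketched as a plain in-memory filter. The field names and the substring matching rule are assumptions; only the filter criteria come from the diagrams.

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical resource record; only the creator/access/partnership criteria
// named in the diagram are modelled.
type Resource struct {
	Name         string
	Creator      string
	Public       bool            // access == PUBLIC
	Partnerships map[string]bool // peer IDs granted partner access
}

// visibleTo mirrors the DB filter from the diagram:
// creator == self AND (PUBLIC OR requester in partnerships) AND search match.
func visibleTo(resources []Resource, self, requester, search string) []Resource {
	var out []Resource
	for _, r := range resources {
		if r.Creator != self {
			continue // only resources this peer created are answered
		}
		if !r.Public && !r.Partnerships[requester] {
			continue // private and not shared with the requester
		}
		if !strings.Contains(strings.ToLower(r.Name), strings.ToLower(search)) {
			continue // does not match the search term
		}
		out = append(out, r)
	}
	return out
}

func main() {
	db := []Resource{
		{Name: "gpu-cluster", Creator: "B", Public: true},
		{Name: "gpu-private", Creator: "B", Partnerships: map[string]bool{"PeerID_A": true}},
		{Name: "gpu-secret", Creator: "B"},
	}
	for _, r := range visibleTo(db, "B", "PeerID_A", "gpu") {
		fmt.Println(r.Name) // prints gpu-cluster then gpu-private
	}
}
```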

sequenceDiagram
title Stream — Partner heartbeat and CRUD propagation, Peer A ↔ Peer B
participant DBA as DB Peer A (oc-lib)
participant StreamA as StreamService A
participant NodeA as Node A
participant NodeB as Node B
participant StreamB as StreamService B
participant NATSB as NATS B
participant DBB as DB Peer B (oc-lib)
participant NATSA as NATS A
Note over StreamA: Startup → connectToPartners()
StreamA->>DBA: Search(PEER, PARTNER) + PeerIDS config
DBA-->>StreamA: [Peer B, ...]
StreamA->>NodeB: Connect (libp2p)
StreamA->>NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA->>NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})
NodeB->>StreamB: HandlePartnerHeartbeat(stream)
StreamB->>StreamB: CheckHeartbeat → bandwidth challenge
StreamB->>StreamA: Echo(payload)
StreamB->>StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA->>StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
Note over StreamA,StreamB: Long-lived partner stream established<br/>GC every 8s (StreamService A)<br/>GC every 30s (StreamService B)
Note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}
NATSA->>NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA->>StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)
alt dt == PEER (partner-relation update)
StreamA->>StreamA: json.Unmarshal → peer.Peer B updated
alt B.Relation == PARTNER
StreamA->>NodeB: ConnectToPartner(B.StreamAddress)
Note over StreamA,NodeB: Heartbeat reconnects if the relation is upgraded
else B.Relation != PARTNER
loop All protocols
StreamA->>StreamA: delete(streams[proto][PeerID_B])
StreamA->>NodeB: (streams closed)
end
end
else dt != PEER (ordinary resource)
StreamA->>DBA: Search(PEER, PARTNER) → [Peer B, ...]
loop For each partner protocol (Create/Update/Delete)
StreamA->>NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
Note over NodeB: /opencloud/resource/delete/1.0
NodeB->>StreamB: HandleResponse → readLoop
StreamB->>StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
StreamB->>NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB->>DBB: Delete resource from DB B
end
end

@startuml
title Stream — Partner heartbeat and CRUD propagation, Peer A ↔ Peer B
participant "DB Peer A (oc-lib)" as DBA
participant "StreamService A" as StreamA
participant "Node A" as NodeA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "NATS B" as NATSB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS A" as NATSA
note over StreamA: Startup → connectToPartners()
StreamA -> DBA: Search(PEER, PARTNER) + PeerIDS config
DBA --> StreamA: [Peer B, ...]
StreamA -> NodeB: Connect (libp2p)
StreamA -> NodeB: NewStream /opencloud/resource/heartbeat/partner/1.0
StreamA -> NodeB: json.Encode(Heartbeat{Name_A, DID_A, PeerID_A, IndexersBinded_A})
NodeB -> StreamB: HandlePartnerHeartbeat(stream)
StreamB -> StreamB: CheckHeartbeat → bandwidth challenge
StreamB -> StreamA: Echo(payload)
StreamB -> StreamB: streams[ProtocolHeartbeatPartner][PeerID_A] = {DID_A, Expiry=now+10s}
StreamA -> StreamA: streams[ProtocolHeartbeatPartner][PeerID_B] = {DID_B, Expiry=now+10s}
note over StreamA,StreamB: Long-lived partner stream established\nGC every 8s (StreamService A)\nGC every 30s (StreamService B)
note over NATSA: Peer A receives PROPALGATION_EVENT{PB_DELETE, dt:"storage", payload:res}
NATSA -> NodeA: ListenNATS → ToPartnerPublishEvent(PB_DELETE, dt, user, payload)
NodeA -> StreamA: ToPartnerPublishEvent(ctx, PB_DELETE, dt_storage, user, payload)
alt dt == PEER (partner-relation update)
StreamA -> StreamA: json.Unmarshal → peer.Peer B updated
alt B.Relation == PARTNER
StreamA -> NodeB: ConnectToPartner(B.StreamAddress)
note over StreamA,NodeB: Heartbeat reconnects if the relation is upgraded
else B.Relation != PARTNER
loop All protocols
StreamA -> StreamA: delete(streams[proto][PeerID_B])
StreamA -> NodeB: (streams closed)
end
end
else dt != PEER (ordinary resource)
StreamA -> DBA: Search(PEER, PARTNER) → [Peer B, ...]
loop For each partner protocol (Create/Update/Delete)
StreamA -> NodeB: write(PeerID_B, addr_B, dt, user, payload, ProtocolDeleteResource)
note over NodeB: /opencloud/resource/delete/1.0
NodeB -> StreamB: HandleResponse → readLoop
StreamB -> StreamB: handleEventFromPartner(evt, ProtocolDeleteResource)
StreamB -> NATSB: SetNATSPub(REMOVE_RESOURCE, {DataType, resource JSON})
NATSB -> DBB: Delete resource from DB B
end
end
@enduml

sequenceDiagram
title Stream — Planner session: Peer A requests Peer B's plan
participant AppA as App Peer A (oc-booking)
participant NATSA as NATS A
participant NodeA as Node A
participant StreamA as StreamService A
participant NodeB as Node B
participant StreamB as StreamService B
participant DBB as DB Peer B (oc-lib)
participant NATSB as NATS B
%% Open the planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA->>NodeA: ListenNATS → PB_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA->>StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
Note over StreamA: WaitResponse=true, TTL=24h<br/>Long-lived stream to Peer B
StreamA->>NodeB: TempStream /opencloud/resource/planner/1.0
StreamA->>NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})
NodeB->>StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB->>StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB->>StreamB: sendPlanner(evt)
alt evt.Payload empty (initial request)
StreamB->>DBB: planner.GenerateShallow(AdminRequest)
DBB-->>StreamB: plan (Peer B's shallow booking plan)
StreamB->>StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
StreamA->>NodeA: json.Encode(Event{B's plan})
NodeA->>NATSA: (forwarded to App A via SEARCH_EVENT or a PLANNER event)
NATSA->>AppA: Peer B's plan
else evt.Payload non-empty (planner update)
StreamB->>StreamB: m["peer_id"] = evt.From (DID_A)
StreamB->>NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
NATSB->>DBB: (oc-booking processes the plan on NATS B)
end
%% Close the planner session
AppA->>NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA->>NodeA: ListenNATS → PB_CLOSE_PLANNER
NodeA->>NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA->>StreamA: Mu.Lock()
NodeA->>StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA->>StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA->>StreamA: Mu.Unlock()
Note over StreamA,NodeB: Planner stream closed — session over
@startuml
title Stream — Planner session: Peer A requests Peer B's plan
participant "App Peer A (oc-booking)" as AppA
participant "NATS A" as NATSA
participant "Node A" as NodeA
participant "StreamService A" as StreamA
participant "Node B" as NodeB
participant "StreamService B" as StreamB
participant "DB Peer B (oc-lib)" as DBB
participant "NATS B" as NATSB
' Open the planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_PLANNER, peer_id:PeerID_B, payload:{}})
NATSA -> NodeA: ListenNATS → PB_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B, payload: {}}
NodeA -> StreamA: PublishCommon(nil, user, PeerID_B, ProtocolSendPlanner, {})
note over StreamA: WaitResponse=true, TTL=24h\nLong-lived stream to Peer B
StreamA -> NodeB: TempStream /opencloud/resource/planner/1.0
StreamA -> NodeB: json.Encode(Event{Type:planner, From:DID_A, Payload:{}})
NodeB -> StreamB: HandleResponse → readLoop(ProtocolSendPlanner)
StreamB -> StreamB: handleEvent(ProtocolSendPlanner, evt)
StreamB -> StreamB: sendPlanner(evt)
alt evt.Payload empty (initial request)
StreamB -> DBB: planner.GenerateShallow(AdminRequest)
DBB --> StreamB: plan (Peer B's shallow booking plan)
StreamB -> StreamA: PublishCommon(nil, user, DID_A, ProtocolSendPlanner, planJSON)
StreamA -> NodeA: json.Encode(Event{B's plan})
NodeA -> NATSA: (forwarded to App A via SEARCH_EVENT or a PLANNER event)
NATSA -> AppA: Peer B's plan
else evt.Payload non-empty (planner update)
StreamB -> StreamB: m["peer_id"] = evt.From (DID_A)
StreamB -> NATSB: SetNATSPub(PROPALGATION_EVENT, {PB_PLANNER, peer_id:DID_A, payload:plan})
NATSB -> DBB: (oc-booking processes the plan on NATS B)
end
' Close the planner session
AppA -> NATSA: Publish(PROPALGATION_EVENT, {PB_CLOSE_PLANNER, peer_id:PeerID_B})
NATSA -> NodeA: ListenNATS → PB_CLOSE_PLANNER
NodeA -> NodeA: Unmarshal → {peer_id: PeerID_B}
NodeA -> StreamA: Mu.Lock()
NodeA -> StreamA: Streams[ProtocolSendPlanner][PeerID_B].Stream.Close()
NodeA -> StreamA: delete(Streams[ProtocolSendPlanner], PeerID_B)
NodeA -> StreamA: Mu.Unlock()
note over StreamA,NodeB: Planner stream closed — session over
@enduml
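The PB_CLOSE_PLANNER branch (lock, `Stream.Close()`, `delete`, unlock) can be sketched with a stdlib mutex and nested maps; the `session` type is a hypothetical stand-in for the libp2p stream.

```go
package main

import (
	"fmt"
	"sync"
)

// session stands in for a long-lived libp2p stream.
type session struct{ closed bool }

func (s *session) Close() { s.closed = true }

// StreamService holds open streams keyed by protocol then peer ID,
// modelled on Streams[ProtocolSendPlanner][PeerID_B].
type StreamService struct {
	Mu      sync.Mutex
	Streams map[string]map[string]*session
}

// ClosePlanner mirrors Mu.Lock(); Stream.Close(); delete(...); Mu.Unlock().
func (ss *StreamService) ClosePlanner(proto, peerID string) {
	ss.Mu.Lock()
	defer ss.Mu.Unlock()
	if s, ok := ss.Streams[proto][peerID]; ok {
		s.Close()
		delete(ss.Streams[proto], peerID)
	}
}

func main() {
	proto := "/opencloud/resource/planner/1.0"
	ss := &StreamService{Streams: map[string]map[string]*session{
		proto: {"PeerID_B": {}},
	}}
	ss.ClosePlanner(proto, "PeerID_B")
	_, still := ss.Streams[proto]["PeerID_B"]
	fmt.Println(still) // prints false: the planner session entry is gone
}
```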

sequenceDiagram
title Native Indexer — Background loops (offload, DHT refresh, stream GC)
participant IndexerA as Indexer A (registered)
participant IndexerB as Indexer B (registered)
participant Native as Native Indexer
participant DHT as Kademlia DHT
participant NodeA as Node A (responsible peer)
Note over Native: runOffloadLoop — every 30s
loop Every 30s
Native->>Native: len(responsiblePeers) > 0 ?
Note over Native: responsiblePeers = peers for which<br/>the native did selfDelegate (no indexer available)
alt Responsible peers exist (e.g. Node A)
Native->>Native: reachableLiveIndexers()
Note over Native: Filters liveIndexers by TTL<br/>pings PeerIsAlive for each candidate
alt Indexers A and B now reachable
Native->>Native: responsiblePeers = {} (releases Node A and the others)
Note over Native: Node A will reconnect<br/>on the next ConnectToNatives
else Still no indexer
Note over Native: Node A stays under the native's responsibility
end
end
end
Note over Native: refreshIndexersFromDHT — every 30s
loop Every 30s
Native->>Native: Collect all knownPeerIDs<br/>= {PeerID_A, PeerID_B, ...}
loop For each known PeerID
Native->>Native: liveIndexers[PeerID] still fresh?
alt Entry missing or expired
Native->>DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
DHT-->>Native: channel of bytes
loop For each DHT result
Native->>Native: Unmarshal → liveIndexerEntry
Native->>Native: Keep the best one (latest valid ExpiresAt)
end
Native->>Native: liveIndexers[PeerID] = best entry
Note over Native: "native: refreshed indexer from DHT"
end
end
end
Note over Native: LongLivedStreamRecordedService GC — every 30s
loop Every 30s
Native->>Native: gc() — lock StreamRecords[Heartbeat]
loop For each StreamRecord (Indexer A, B, ...)
Native->>Native: now > rec.Expiry ?<br/>OR timeSince(LastSeen) > 2× remaining TTL ?
alt Peer stale (e.g. Indexer B gone)
Native->>Native: Remove Indexer B from ALL protocol maps
Note over Native: Heartbeat stream closed<br/>liveIndexers[PeerID_B] will expire naturally
end
end
end
Note over IndexerA: Indexer A keeps heartbeating normally<br/>and stays in StreamRecords + liveIndexers

@startuml
title Native Indexer — Background loops (offload, DHT refresh, stream GC)
participant "Indexer A (registered)" as IndexerA
participant "Indexer B (registered)" as IndexerB
participant "Native Indexer" as Native
participant "Kademlia DHT" as DHT
participant "Node A (responsible peer)" as NodeA
note over Native: runOffloadLoop — every 30s
loop Every 30s
Native -> Native: len(responsiblePeers) > 0 ?
note over Native: responsiblePeers = peers for which\nthe native did selfDelegate (no indexer available)
alt Responsible peers exist (e.g. Node A)
Native -> Native: reachableLiveIndexers()
note over Native: Filters liveIndexers by TTL\npings PeerIsAlive for each candidate
alt Indexers A and B now reachable
Native -> Native: responsiblePeers = {} (releases Node A and the others)
note over Native: Node A will reconnect\non the next ConnectToNatives
else Still no indexer
note over Native: Node A stays under the native's responsibility
end
end
end
note over Native: refreshIndexersFromDHT — every 30s
loop Every 30s
Native -> Native: Collect all knownPeerIDs\n= {PeerID_A, PeerID_B, ...}
loop For each known PeerID
Native -> Native: liveIndexers[PeerID] still fresh?
alt Entry missing or expired
Native -> DHT: SearchValue(ctx 5s, "/indexer/"+PeerID)
DHT --> Native: channel of bytes
loop For each DHT result
Native -> Native: Unmarshal → liveIndexerEntry
Native -> Native: Keep the best one (latest valid ExpiresAt)
end
Native -> Native: liveIndexers[PeerID] = best entry
note over Native: "native: refreshed indexer from DHT"
end
end
end
note over Native: LongLivedStreamRecordedService GC — every 30s
loop Every 30s
Native -> Native: gc() — lock StreamRecords[Heartbeat]
loop For each StreamRecord (Indexer A, B, ...)
Native -> Native: now > rec.Expiry ?\nOR timeSince(LastSeen) > 2× remaining TTL ?
alt Peer stale (e.g. Indexer B gone)
Native -> Native: Remove Indexer B from ALL protocol maps
note over Native: Heartbeat stream closed\nliveIndexers[PeerID_B] will expire naturally
end
end
end
note over IndexerA: Indexer A keeps heartbeating normally\nand stays in StreamRecords + liveIndexers
@enduml

docs/diagrams/README.md
# OC-Discovery — Sequence diagrams
All `.mmd` files are in [Mermaid](https://mermaid.js.org/) format.
They can be rendered in VS Code (Mermaid Preview extension), IntelliJ, or on [mermaid.live](https://mermaid.live).
## Diagram overview
| File | Description |
|---------|-------------|
| `01_node_init.mmd` | Full Node initialization (libp2p host, GossipSub, indexers, StreamService, PubSubService, NATS) |
| `02_node_claim.mmd` | Node registration with the indexers (`claimInfo` + `publishPeerRecord`) |
| `03_indexer_heartbeat.mmd` | Heartbeat protocol with quality-score computation (bandwidth, uptime, diversity) |
| `04_indexer_publish.mmd` | Publishing a `PeerRecord` to the indexer → DHT |
| `05_indexer_get.mmd` | Resolving a peer through the indexer (`GetPeerRecord` + `handleNodeGet` + DHT) |
| `06_native_registration.mmd` | Registering an indexer with a Native Indexer + PubSub gossip |
| `07_native_get_consensus.mmd` | `ConnectToNatives`: indexer pool + consensus protocol (majority vote) |
| `08_nats_create_resource.mmd` | NATS `CREATE_RESOURCE` handler: connecting/disconnecting a partner |
| `09_nats_propagation.mmd` | NATS `PROPALGATION_EVENT` handler: delete, considers, planner, search |
| `10_pubsub_search.mmd` | Global gossip search (type `"all"`) via GossipSub |
| `11_stream_search.mmd` | Direct stream search (type `"known"` or `"partner"`) |
| `12_partner_heartbeat.mmd` | Partner heartbeat + CRUD propagation to partners |
| `13_planner_flow.mmd` | Planner session (open, exchange, close) |
| `14_native_offload_gc.mmd` | Native Indexer background loops (offload, DHT refresh, GC) |
## libp2p protocols used
| Protocol | Description |
|-----------|-------------|
| `/opencloud/heartbeat/1.0` | Node → indexer heartbeat (long-lived) |
| `/opencloud/heartbeat/indexer/1.0` | Indexer → native heartbeat (long-lived) |
| `/opencloud/resource/heartbeat/partner/1.0` | Node ↔ partner heartbeat (long-lived) |
| `/opencloud/record/publish/1.0` | `PeerRecord` publication to an indexer |
| `/opencloud/record/get/1.0` | `GetPeerRecord` request to an indexer |
| `/opencloud/native/subscribe/1.0` | Indexer registration with the native |
| `/opencloud/native/indexers/1.0` | Indexer-pool request to the native |
| `/opencloud/native/consensus/1.0` | Indexer-pool validation (consensus) |
| `/opencloud/resource/search/1.0` | Resource search between peers |
| `/opencloud/resource/create/1.0` | Resource-creation propagation to a partner |
| `/opencloud/resource/update/1.0` | Resource-update propagation to a partner |
| `/opencloud/resource/delete/1.0` | Resource-deletion propagation to a partner |
| `/opencloud/resource/planner/1.0` | Planner session (booking) |
| `/opencloud/resource/verify/1.0` | Resource signature verification |
| `/opencloud/resource/considers/1.0` | Forwarding an execution "considers" |
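As a rough illustration of how protocol IDs like these key the stream handlers (the real service registers them through libp2p's `SetStreamHandler`; the registry and handler signature below are hypothetical stand-ins):

```go
package main

import (
	"errors"
	"fmt"
)

// Handler is a stand-in for a libp2p stream handler; in the real service
// each protocol ID from the table above is bound via host.SetStreamHandler.
type Handler func(payload string) string

// registry maps protocol IDs to their hypothetical handlers.
var registry = map[string]Handler{}

func register(protoID string, h Handler) { registry[protoID] = h }

// dispatch routes an incoming stream's protocol ID to its handler.
func dispatch(protoID, payload string) (string, error) {
	h, ok := registry[protoID]
	if !ok {
		return "", errors.New("no handler for " + protoID)
	}
	return h(payload), nil
}

func main() {
	register("/opencloud/resource/search/1.0", func(p string) string {
		return "searching for " + p
	})
	out, err := dispatch("/opencloud/resource/search/1.0", "gpu")
	fmt.Println(out, err)
	_, err = dispatch("/opencloud/record/get/1.0", "")
	fmt.Println(err != nil) // unregistered protocols are rejected
}
```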