merge #515
Closed
ErfanDL wants to merge 13 commits into ruvnet:dependabot/cargo/v2/ruvector-temporal-tensor-2.0.6
Conversation
- Move Latest Additions, Key Features, and everything from Installation through Changelog (1855 lines) into docs/readme-details.md.
- Keep README focused on overview, capability table, How It Works, Use Cases, Documentation, License, and Support.
- Add per-row emojis to the top capability table.
- Add a 3D point cloud row noting optional camera + WiFi CSI + mmWave fusion, with a link to the live viewer demo.
- Move the Documentation table closer to the bottom (just above License).
- Collapse Edge Intelligence (ADR-041) into a <details> block matching the sibling Use Case sections.

Co-Authored-By: claude-flow <ruv@ruv.net>
…94) (#495)

Publishes the live 3D point cloud viewer to gh-pages/pointcloud/ so it can be linked from the README alongside the Observatory and Dual-Modal Pose Fusion demos.

The viewer auto-selects its transport from URL parameters:
- default / ?backend=auto — try /api/splats, fall back to the synthetic demo
- ?backend=demo — synthetic in-browser only, no network
- ?backend=<url> — fetch from a CORS-permitting host running ruview-pointcloud serve
- ?live=1 — strict mode, show the offline panel instead of the demo fallback

The synthetic frame matches the live API JSON shape (splats, count, frame, live, pipeline.{skeleton,vitals}) so a single render path drives both modes.

The new workflow uses keep_files: true to preserve the existing observatory/, pose-fusion/, and nvsim/ deployments on gh-pages.

See docs/adr/ADR-094-pointcloud-github-pages-deployment.md for the full decision record and six acceptance gates.
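The four-way transport selection above can be sketched as a small pure function. The function and field names here are illustrative assumptions, not the viewer's actual identifiers:

```javascript
// Sketch of the URL-parameter transport selection (assumed helper name).
// Returns a descriptor the fetch loop can act on.
function selectTransport(search) {
  const params = new URLSearchParams(search);
  // ?live=1 — strict mode: never fall back to the synthetic demo.
  if (params.get("live") === "1") {
    return { mode: "live", url: "/api/splats", fallback: "offline-panel" };
  }
  const backend = params.get("backend") || "auto";
  // ?backend=demo — synthetic in-browser only, no network at all.
  if (backend === "demo") return { mode: "demo" };
  // default / ?backend=auto — try the same-origin API, demo on failure.
  if (backend === "auto") return { mode: "auto", url: "/api/splats", fallback: "demo" };
  // ?backend=<url> — remote CORS-permitting host running ruview-pointcloud serve.
  return { mode: "remote", url: backend.replace(/\/+$/, "") + "/api/splats" };
}
```

A single descriptor like this keeps the render loop ignorant of where frames come from, which is what lets one render path drive both live and demo modes.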
Now that ADR-094 is deployed, point the README's demo link at https://ruvnet.github.io/RuView/pointcloud/ instead of the docs/readme-details.md anchor. Matches the pattern of the sibling Observatory and Pose Fusion demo links. Co-Authored-By: claude-flow <ruv@ruv.net>
The previous synthetic procedural demo did not represent what the local fusion pipeline produces — a real depth-backprojected point cloud of the user's face and surroundings. This commit ports the closest browser equivalent: MediaPipe Face Mesh runs in-browser at ~30 fps and emits 478 3D landmarks per frame. Each visitor now sees the outline of their own face rendered as a point cloud, with a small floor + back wall for spatial context.

- Adds MediaPipe Face Mesh + Camera Utils via the jsDelivr CDN.
- Adds a "▶ Enable camera" CTA so getUserMedia is gated on a user gesture (required by some browsers, and good UX regardless).
- The new face-mesh frame generator uses the same splat shape as the live /api/splats payload, so a single render path drives both modes.
- Mirrors x to match the selfie convention; maps lm.z (relative depth) to the world-coordinate range used by the live pipeline.
- Falls back automatically to the procedural floor + walls + figure when the camera is denied, dismissed, or unavailable.
- The badge surfaces the new state: '● DEMO Your Face (MediaPipe)'.
- Bumps poll cadence to 4 Hz so face-mesh updates feel live.
- ADR-094 updated to reflect the new default behavior.

Co-Authored-By: claude-flow <ruv@ruv.net>
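The landmark-to-world mapping described above (mirror x, pass y through for a single renderer-side flip, amplify relative depth) can be sketched as below. The exact width and depth-scale constants are assumptions for illustration; the ×8 depth factor is the one a later commit in this PR settles on:

```javascript
// Hedged sketch: map one MediaPipe Face Mesh landmark (normalized image
// coords: x,y in [0,1] with y image-down; z is relative depth) into the
// viewer's world frame. Constants are illustrative, not the PR's exact values.
function landmarkToWorld(lm, { width = 1.0, depthScale = 8 } = {}) {
  return {
    x: (0.5 - lm.x) * width,  // mirror x for the selfie convention
    y: lm.y,                  // keep image-down y; the renderer negates exactly once
    z: lm.z * depthScale,     // amplify relative depth so sockets/nose/chin read as 3D
  };
}
```

Doing the y-flip in exactly one place (the renderer) is what the later "face was upside down" fix in this PR restores: pre-flipping here and flipping again in updateSplats() cancels out.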
…aesthetic
Three fixes in one pass to address visitor feedback:
1. Face was rendering upside down — MediaPipe's lm.y is image-down (0=top
of frame, 1=bottom) and the existing updateSplats() already does a
y-negate to convert to Three.js Y-up. Pre-flipping in lmToCenter was a
double flip. Use lm.y directly so the renderer's single flip lands the
head at the top of the screen.
2. Density and fidelity — interpolate 6 splats per FACEMESH_TESSELATION
edge (~1300 edges → ~8000 face splats vs 478 vertex-only). Amplify
lm.z mapping (×8 vs ×4) so eye sockets, nose, and chin show real 3D
depth. Smaller splat scale (0.006 surface, 0.010 vertices) for finer
point appearance.
3. Foundation-inspired aesthetic — the demo now renders the subject
(face mesh OR procedural fallback) inside a Hari Seldon time-vault:
* Holographic surveyor grid in amber, breathing brightness pattern.
* Slow-rotating two-arm galactic spiral receding behind the subject
(~640 stars, warm core to cool edges, Trantor-evocation).
* 800-star deterministic distant starfield on a spherical shell
(fixed LCG seed so visitors don't see noise flicker).
* 60-particle holographic halo orbiting the subject plane.
Shared pushFoundationContext() drives both face-mesh and synthetic
paths. Synthetic procedural figure densified 4x (240 vs 60 points)
and re-oriented (head→top, feet→bottom) so the y-down convention is
internally consistent.
Camera pulled back to (0, 0.2, -3.5) to frame the galactic context.
Poll cadence 4 Hz → 10 Hz so the spiral animates smoothly. Info panel
gets a Seldon quote and "Seldon Vault" branding. CTA copy reframed to
"Project Subject — render your face into the Vault".
ADR-094 already documents the dual-transport intent; the aesthetic
choices here are content, not architecture, so no ADR update needed.
Co-Authored-By: claude-flow <ruv@ruv.net>
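The "fixed LCG seed" trick behind the deterministic 800-star shell above can be sketched as follows. The LCG constants are the common Numerical Recipes pair and an assumption here, not necessarily what the demo uses:

```javascript
// Sketch of a fixed-seed linear congruential generator: same seed in,
// identical star positions out on every page load, so the background
// never flickers between reloads. Constants are illustrative.
function makeLcg(seed = 1234) {
  let state = seed >>> 0;
  return function next() {
    state = (Math.imul(state, 1664525) + 1013904223) >>> 0;
    return state / 4294967296; // uniform in [0, 1)
  };
}

const rand = makeLcg(42);
// Uniform directions on a spherical shell via (theta, acos-distributed phi).
const stars = Array.from({ length: 800 }, () => ({
  theta: rand() * Math.PI * 2,
  phi: Math.acos(2 * rand() - 1),
}));
```

Using a seeded generator instead of Math.random() is the whole point: Math.random() cannot be seeded, so any layout built from it would differ on every visit.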
… line

Adds optional cinematic effects to the face-mesh demo, all toggleable via a new ?fx= URL param. Default is 'all' (texture + mesh + scan + halo). Lightweight modes available: ?fx=clean (texture only) or ?fx=points (original solid amber).

- Texture: per-frame webcam → hidden 2D canvas → getImageData lookup at each landmark (and each interpolated edge sample). Splats now carry the visitor's actual skin tone, not solid amber. Sampling is mirrored on x to match the selfie convention used by the face-mesh vertex placement. All on-device — no frames leave the browser.
- Mesh: persistent THREE.LineSegments overlay drawn from FACEMESH_TESSELATION (~1300 edges). Translucent (opacity 0.35), amber, additive blending, depthWrite off — gives a holographic wireframe wrapping the point cloud. Geometry is updated in place each frame; only positions get re-uploaded.
- Scan: a vertical bright slab sweeps top→bottom every 4 seconds, amplifying splat color up to 2.6× within ±0.08 world units of the line. Westworld-style scanning.
- Halo: the existing 60-particle ring around the face is now opt-in via FX_HALO. Cleaner default for the texture + mesh combination.

The info panel surfaces the active fx list in face-mesh mode. The synthetic fallback hides the wireframe overlay so it doesn't render against an empty figure. Workflow README updated with the new ?fx= options.

Co-Authored-By: claude-flow <ruv@ruv.net>
…me, scan line" This reverts commit 347ad4b.
When the viewer is hosted on a static origin (GitHub Pages, S3) it has no backend at /api/splats. The default ?backend=auto path was issuing a fetch every 100 ms, getting a 404, falling back to the demo, and flooding the console with one 404 per tick. Cosmetic on the surface, but real network/CPU waste over time.

After the first 404 in auto mode, set networkDisabled=true and skip the fetch on subsequent ticks — the interval still fires but goes straight to pickDemoFrame(), so the face mesh / synthetic render path keeps animating. Remote (?backend=<url>) and live (?live=1) modes keep retrying so a transient outage doesn't permanently downgrade them.

Co-Authored-By: claude-flow <ruv@ruv.net>
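The mode-dependent give-up rule above can be sketched as a small gate. The names here are illustrative; in the real viewer this state lives at module level rather than in a factory:

```javascript
// Sketch of the "disable network after the first 404, but only in auto
// mode" guard. Remote and live modes keep retrying so a transient outage
// doesn't permanently downgrade them to the demo.
function makeFetchGate(mode) {
  let networkDisabled = false;
  return {
    shouldFetch() { return !networkDisabled; },
    onResult(status) {
      // A 404 on a static host is permanent: there is no backend to appear.
      if (mode === "auto" && status === 404) networkDisabled = true;
    },
  };
}
```

The key asymmetry: a 404 from your own origin means the route does not exist, while a refused connection to an explicit backend may just mean the server is not up yet.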
Browsers auto-request /favicon.ico when none is declared in <head>. On a static GitHub Pages host that's a guaranteed 404 in the console. Inline a 32x32 SVG amber dot via a data: URL so the browser is satisfied without an extra network round-trip.

Co-Authored-By: claude-flow <ruv@ruv.net>
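A minimal sketch of the inline-favicon trick; the SVG markup and colors here are illustrative, not the PR's exact icon:

```javascript
// Build a tiny SVG favicon as a data: URL — no /favicon.ico request,
// no extra network round-trip, no console 404.
const svg =
  '<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32">' +
  '<circle cx="16" cy="16" r="10" fill="#ffb300"/></svg>';
const href = "data:image/svg+xml," + encodeURIComponent(svg);

// In the page, this would be wired up roughly as:
//   const link = document.createElement("link");
//   link.rel = "icon";
//   link.href = href;
//   document.head.appendChild(link);
```

Equivalently, the `<link rel="icon" href="data:image/svg+xml,…">` tag can be written directly in the HTML, which is what a static page would normally do.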
…sted viewer

The hosted GitHub Pages viewer can now act as a thin client for a locally running ruview-pointcloud serve instance — flip a button, and the ESP32's CSI fusion (camera depth + WiFi CSI + mmWave) renders inside the same Three.js scene that previously only showed the face mesh demo. No clone, no rebuild, no toolchain on the visitor's side.

Server (stream.rs):
- Add tower_http::cors::CorsLayer with a deliberate allowlist: https://ruvnet.github.io, http://localhost:*, http://127.0.0.1:*, and 'null' (for file:// origins). Anything else is denied — not wildcard CORS. Modern browsers (Chrome 94+, Firefox 116+, Safari 16.4+) treat 127.0.0.1 as a "potentially trustworthy" origin, so HTTPS Pages → HTTP loopback is permitted. The new layer wraps the existing /api/cloud, /api/splats, /api/status, and /health routes.
- Cargo.toml: pull in workspace tower-http (cors feature already on).

Viewer:
- New "📡 Connect ESP32…" CTA bottom-right. Clicking prompts for a ruview-pointcloud serve URL (default http://127.0.0.1:9880), persists the last-used value in localStorage, and reloads with ?backend=<url> so the existing remote-mode fetch path takes over. When already connected, the button toggles to "disconnect" and reloads back to the demo.
- Reuses the existing transport selector — no new code path to maintain. The face mesh / synthetic demo render path is unaffected; this is purely an additive UI affordance over the ?backend= query.

Docs:
- ADR-094 §2.3 expanded with the local-ESP32 workflow and the CORS posture rationale.
- Workflow README documents ?backend=http://127.0.0.1:9880 as the intended local-ESP32 path.

Tests: cargo test -p wifi-densepose-pointcloud → 15/15 passed.

Co-Authored-By: claude-flow <ruv@ruv.net>
Lets the visitor enable their browser webcam face mesh in addition to (not instead of) a connected ESP32 backend. Both render in the same Three.js scene — the live ESP32-driven splats from /api/splats plus the visitor's own face as a 478-vertex MediaPipe point cloud.

Use cases:
- Local development: see your face overlaid on the camera+CSI fusion output to debug coordinate-frame alignment.
- Demos: show 'this is the room as the ESP32 sees it, and this is me as MediaPipe sees me' side by side in one scene.

Implementation:
- Extract pushFaceSplats(splats) — pushes the 478 face vertices plus ~8000 edge-interpolated samples into the array, with no Foundation context. Reused by faceMeshFrame (demo path) and handleData (overlay path) so there is one source of truth for face-splat geometry.
- handleData now appends pushFaceSplats output to data.splats when the source is not 'face-mesh' AND the user has clicked the camera CTA. Sets data._faceOverlay so the badge can show '+ face overlay'.
- The camera CTA is no longer hidden in remote/live modes — it relabels to '▶ Add face overlay' so the affordance is clear. Strict-live mode (?live=1) still hides it because the offline panel takes over.
- The splat count in the info panel reflects the rendered total (backend + overlay) when the overlay is active.

Co-Authored-By: claude-flow <ruv@ruv.net>
…banner

When ?backend=<url> pointed at a server that wasn't running (e.g. the user forgot to start ruview-pointcloud serve before clicking Connect ESP32), the viewer was retrying at 10 Hz forever — flooding the console with ERR_CONNECTION_REFUSED and offering no guidance about what was wrong.

Two fixes:
1. Replace setInterval(fetchCloud, 100) with a self-rescheduling setTimeout. On success: 250 ms steady cadence. On failure for an explicit backend: 250 ms → 500 ms → 1 s → 2 s → 4 s → 8 s → 16 s → capped at 30 s. Resets to 250 ms the moment the backend comes back. Auto mode (Pages with no backend) still disables network entirely after the first 404. Strict-live mode (?live=1) also backs off so it doesn't spam.
2. Show an actionable status banner in the info panel when the chosen backend is unreachable: the URL, the actual error string, the next retry time, and the exact `cargo run` command to start the server. The visitor sees the diagnosis instead of staring at a 'demo' badge wondering why their ESP32 feed isn't visible.

The scene keeps animating (face mesh / synthetic) while the viewer waits, so the tab never goes blank.

Co-Authored-By: claude-flow <ruv@ruv.net>
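The backoff schedule in fix (1) can be sketched as a pure function plus a self-rescheduling loop. Helper names are assumptions; only the delay sequence comes from the commit message:

```javascript
// Delay before the next poll, given how many consecutive failures the
// explicit backend has produced. 0 failures → steady 250 ms cadence;
// each failure doubles the wait, capped at 30 s.
function nextDelayMs(consecutiveFailures) {
  return Math.min(250 * 2 ** consecutiveFailures, 30000);
}

// Illustrative self-rescheduling loop replacing setInterval(fetchCloud, 100):
// let failures = 0;
// async function tick() {
//   try {
//     await fetchCloud();
//     failures = 0;                       // backend is back: reset to 250 ms
//   } catch (err) {
//     failures += 1;                      // grow the wait: 500 ms, 1 s, … 30 s
//     showBackendBanner(err, nextDelayMs(failures)); // assumed banner helper
//   }
//   setTimeout(tick, nextDelayMs(failures));
// }
```

Unlike setInterval, a self-rescheduling setTimeout computes each gap after the previous attempt finishes, so slow failures never pile up overlapping requests.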
Added Spatial Intelligence to the README, since that seems to be a common description.
No description provided.