Data tracks support#647

Open
ladvoc wants to merge 8 commits into main from ladvoc/data-tracks
Conversation

@ladvoc
Contributor

@ladvoc ladvoc commented Apr 8, 2026

Resolves BOT-281

@ladvoc ladvoc requested a review from lukasIO April 8, 2026 01:23
@changeset-bot

changeset-bot bot commented Apr 8, 2026

⚠️ No Changeset found

Latest commit: 6994cc9

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


/**
 * @throws {@link DataTrackPushFrameError} If the push fails.
 */
tryPush(frame: DataTrackFrame): void {
Contributor


thought (non-blocking): I left a similar comment on the JS implementation; it seems a bit counter-intuitive that a try* method can throw.
I don't know if there's a good alternative here, though. If users aren't expected to gain additional insight from the error's message, maybe returning a "success" boolean from the method would be sufficient?
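For illustration, a minimal sketch of the two API shapes under discussion, assuming a bounded internal queue as the failure condition. `DataTrackFrame`, the capacity, and both classes are simplified stand-ins, not the SDK's actual implementation:

```typescript
// Simplified stand-in for the SDK frame type; illustration only.
type DataTrackFrame = { payload: Uint8Array };

class DataTrackPushFrameError extends Error {}

// Current shape: a try* method that throws a typed error when the push fails.
class ThrowingTrack {
  private queue: DataTrackFrame[] = [];
  constructor(private capacity: number) {}

  tryPush(frame: DataTrackFrame): void {
    if (this.queue.length >= this.capacity) {
      throw new DataTrackPushFrameError('internal channel is full');
    }
    this.queue.push(frame);
  }
}

// Alternative shape: report failure with a boolean, exposing no error details.
class BooleanTrack {
  private queue: DataTrackFrame[] = [];
  constructor(private capacity: number) {}

  tryPush(frame: DataTrackFrame): boolean {
    if (this.queue.length >= this.capacity) return false;
    this.queue.push(frame);
    return true;
  }
}
```

The boolean variant matches the usual try* convention but discards the failure reason, which is the trade-off being weighed here.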

Contributor Author

@ladvoc ladvoc Apr 9, 2026


In contrast to the web implementation, the error reasons here might be more useful and worth exposing. Under the hood, this sends the frame to a dedicated task (one for each track) for processing (E2EE, compression in a future release, etc.) and packetization before sending. If the channel used for sending frames to this task fills up (i.e., in the case the user is pushing too fast), they would get an error here.
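The push path described here can be sketched as a bounded channel feeding a dedicated packetizer task. Everything below (names, capacity, MTU, the single-frame task) is illustrative, not the SDK's actual internals; in the real pipeline the task would also apply E2EE and, later, compression:

```typescript
type Frame = Uint8Array;

// Minimal bounded channel: trySend fails when the consumer can't keep up,
// which is the error surfaced to the user by tryPush.
class BoundedChannel<T> {
  private items: T[] = [];
  private waiters: ((v: T) => void)[] = [];
  constructor(private capacity: number) {}

  trySend(item: T): boolean {
    const waiter = this.waiters.shift();
    if (waiter) {
      waiter(item); // a consumer is already waiting; hand it over directly
      return true;
    }
    if (this.items.length >= this.capacity) return false; // channel full
    this.items.push(item);
    return true;
  }

  recv(): Promise<T> {
    const item = this.items.shift();
    if (item !== undefined) return Promise.resolve(item);
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}

// Dedicated per-track task: consume one frame and split it into
// MTU-sized packets (E2EE/compression would happen here too).
async function packetizerTask(
  ch: BoundedChannel<Frame>,
  mtu: number,
  out: Frame[],
): Promise<void> {
  const frame = await ch.recv();
  for (let i = 0; i < frame.length; i += mtu) {
    out.push(frame.subarray(i, i + mtu));
  }
}
```

The key point for the API discussion: the only failure a pusher can observe locally is the full-channel case, which is why the error (or boolean) exists at all.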

Contributor


Makes sense; my slight reservation about it being called try* in that case remains.

case 'eos': {
this.dispose();
if (event.detail.value.error) {
controller.error(new Error(event.detail.value.error));
Contributor


suggestion(non-blocking): maybe nice to have a more specific (tagged) error here

Contributor Author


That's definitely something we can do here: the Protobuf now has enums that map to the corresponding error cases from Rust. However, I wasn't sure of the nicest way to expose this in Node, or whether we want to adopt the new error-handling patterns from the JS SDK.
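One possible shape for such a tagged error, assuming the enum variants below; the names (`DataTrackStreamErrorReason`, `DataTrackStreamError`) are hypothetical, not the actual protobuf enum or SDK API:

```typescript
// Hypothetical mapping of the protobuf error enum; variant names are made up.
enum DataTrackStreamErrorReason {
  Unknown = 0,
  TrackClosed = 1,
  DecryptionFailed = 2,
}

// Tagged error: callers can switch on `reason` instead of parsing messages.
class DataTrackStreamError extends Error {
  constructor(
    readonly reason: DataTrackStreamErrorReason,
    message: string,
  ) {
    super(message);
    this.name = 'DataTrackStreamError';
  }
}

// A consumer can branch on the tag rather than the message string:
function describe(err: unknown): string {
  if (err instanceof DataTrackStreamError) {
    return `stream failed: ${DataTrackStreamErrorReason[err.reason]}`;
  }
  return 'unknown error';
}
```

The `eos` handler above would then call `controller.error(new DataTrackStreamError(reason, message))` instead of a plain `Error`.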

},
});

return new ReadableStream<DataTrackFrame>(new DataTrackStreamSource(res.stream!), {
Contributor


question: should we be worried about the fact that each subscribe call pushes the same frames across the FFI boundary every time?
Not sure if there's a good way to re-use an existing source.
We could also think about teeing the stream, but that comes with its own pitfalls when reading at different rates in different places.
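The different-rates pitfall is visible with plain web streams (runnable on Node 18+; this demonstrates the hazard, it is not a proposed fix): both branches of `tee()` share one source, so a fast reader keeps the source producing and chunks accumulate in memory for the slower branch.

```typescript
// A toy source that emits 0..n-1 on demand.
function makeSource(n: number): ReadableStream<number> {
  let i = 0;
  return new ReadableStream<number>({
    pull(controller) {
      if (i < n) controller.enqueue(i++);
      else controller.close();
    },
  });
}

// Read a stream to completion, collecting its chunks.
async function drain(stream: ReadableStream<number>): Promise<number[]> {
  const out: number[] = [];
  const reader = stream.getReader();
  for (;;) {
    const result = await reader.read();
    if (result.done) return out;
    out.push(result.value);
  }
}

// Two branches over one source: draining `fast` first forces every chunk
// to be buffered for `slow`, which hasn't read anything yet.
const [fast, slow] = makeSource(3).tee();
```

With unbounded frame rates this buffering is exactly the memory concern raised above.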

Contributor Author


Definitely something to consider, and something that also applies to Python, C++, and Unity. If there are multiple subscriptions, you are right that the same frames will need to be serialized/deserialized more than once. I wonder if the performance overhead from this is enough to justify implementing a teeing solution for each FFI client—this would be more performant at the cost of adding additional complexity to each FFI client implementation. Currently, teeing is implemented on the Rust side so each FFI client gets that behavior automatically.
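If client-side reuse were ever pursued, one sketch of the teeing approach (hypothetical; `openFfiStream` stands in for whatever call actually crosses the FFI boundary) is to open the underlying stream once and hand each subscriber a tee branch, so frames are deserialized a single time:

```typescript
// Hypothetical shared source: the first subscriber opens the single FFI
// stream; later subscribers get a tee branch of the same stream.
class SharedTrackSource<T> {
  private current: ReadableStream<T> | null = null;
  constructor(private openFfiStream: () => ReadableStream<T>) {}

  subscribe(): ReadableStream<T> {
    if (this.current === null) {
      this.current = this.openFfiStream(); // one FFI crossing, ever
    }
    const [keep, hand] = this.current.tee();
    this.current = keep; // retain one branch for future subscribers
    return hand;
  }
}
```

This trades the repeated serialization cost for tee's buffering behavior (a slow subscriber accumulates frames), which is the complexity-vs-performance question raised here.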

@ladvoc ladvoc marked this pull request as ready for review April 9, 2026 06:06

@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 4 additional findings.

