
stream_file: make appending:// URLs more useful #15998

Open
wants to merge 1 commit into master
Conversation

@MoSal (Contributor) commented Mar 3, 2025

By exposing the two parameters that control the timeout as options:

--appending-max-retries replaces the MAX_RETRIES constant, and
--appending-retry-timeout replaces RETRY_TIMEOUT.

The options have the same default values as the removed constants, so
nothing should change behavior-wise.


github-actions bot commented Mar 3, 2025

Download the artifacts for this pull request:

Windows
macOS

@MoSal (Contributor Author) commented Mar 4, 2025

Addressed all suggestions.

@kasper93 (Contributor) commented Mar 4, 2025

Also needs DOCS/interface-changes entry.

```c
#define OPT_BASE_STRUCT struct appending_opts
const struct m_sub_options stream_appending_conf = {
    .opts = (const struct m_option[]) {
        {"retry-timeout", OPT_DOUBLE(retry_timeout), M_RANGE(0.1, 1.0)},
```
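For readers unfamiliar with mpv's option macros: the `M_RANGE(0.1, 1.0)` above restricts the accepted values of `retry-timeout` to [0.1, 1.0]. A self-contained analog of that range check (not mpv code; all names here are made up for illustration):

```c
/* Hypothetical stand-in for a ranged double option: a named value whose
 * setter rejects anything outside [min, max], like M_RANGE does in mpv. */
struct double_opt {
    const char *name;
    double min, max;   /* analog of M_RANGE(min, max) */
    double value;      /* current value */
};

/* Returns 0 on success, -1 if v falls outside [min, max]. */
int set_double_opt(struct double_opt *o, double v)
{
    if (v < o->min || v > o->max)
        return -1;
    o->value = v;
    return 0;
}
```

With min=0.1 and max=1.0, setting 0.5 succeeds while setting 5.0 is rejected, mirroring how mpv refuses an out-of-range option value at parse time.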
Contributor:

This range seems a bit tight? Any reason for that?

Member:

Well, because there are multiple retries.

However, having both of these options seems needlessly confusing. It would work perfectly fine to have just retry-timeout, with the code automatically re-checking every 0.2s.

Contributor:

It goes against mpv core principles, where everything is customizable. Though given how small the customizable range for this variable is, it makes little difference in practice, no?

> it would work perfectly fine to have just retry-timeout and then the code automatically re-checks every 0.2s

You meant max-retries?

Member:

> You meant max-retries?

Well, no. Then the unit of the option would be 0.2s, which is confusing. The user should specify what total timeout they want, and mpv then retries as it sees fit (could even be linear backoff).
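A minimal, self-contained sketch of that suggestion (not mpv code; the names and the linear backoff schedule are assumptions): the user specifies a single total timeout, and the code picks the retry interval internally:

```c
#include <stdbool.h>

/* Sketch: keep calling ready() until it succeeds or the total time budget
 * is exhausted. The wait between attempts grows linearly (base, 2*base, ...),
 * so the caller only ever specifies one timeout. */
bool retry_with_timeout(double total_timeout, double base_wait,
                        bool (*ready)(void *ctx), void *ctx)
{
    double elapsed = 0;
    for (int attempt = 1; ; attempt++) {
        if (ready(ctx))
            return true;
        double wait = base_wait * attempt;
        if (elapsed + wait > total_timeout)
            return false;   /* next wait would exceed the budget */
        /* real code would sleep(wait) here, then re-check the file
         * for newly appended data */
        elapsed += wait;
    }
}

/* Demo callback: reports "ready" after a fixed number of polls. */
bool ready_after(void *ctx)
{
    int *calls_left = ctx;
    return --*calls_left <= 0;
}
```

With base_wait = 0.2, the attempts are spaced 0.2s, 0.4s, 0.6s, ... apart, and the loop gives up as soon as the next wait would overrun the user's total timeout.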

Contributor:

I see.

Contributor Author:

Regarding exposing retry_timeout and its range:

My thought was that a value lower than 0.2 may help keep playback uninterrupted in cases where a stream is appended to very often. Think of an unsegmented live stream being saved to a file, or a live stream with very short segments.

On the upper bound, my thought was to allow for lower resource usage if the user knows appending doesn't happen frequently and doesn't mind the delay between retries. That's maybe less convincing, but still a nice-to-have IMHO.

I think both options should be exposed in any case.

Contributor:

I will let @sfan5 decide on this one. For me it is fine to have those options exposed.

Member:

If the user is interested in micro-optimizing resource usage, he should use a pipe, which does not require polling.
I stand by my suggestion to only have one option.

@MoSal MoSal force-pushed the appending_opts branch 2 times, most recently from 4d53658 to 330f7b8 Compare March 4, 2025 14:15
@MoSal (Contributor Author) commented Mar 4, 2025

Made opts live-update, which theoretically adds synchronization overhead. But I think it's fine.

@kasper93 (Contributor) commented Mar 4, 2025

> Made opts live-update, which theoretically adds synchronization overhead. But I think it's fine.

I would rather avoid this in main file reading function.

Signed-off-by: Mohammad AlSaleh <CE.Mohammad.AlSaleh@gmail.com>
@MoSal (Contributor Author) commented Mar 4, 2025

> I would rather avoid this in main file reading function.

Reverted.

As for not exposing retry_timeout, I wouldn't use this PR without it, and I'm not going to push for an upstream change I won't be using as-is.

@sfan5 (Member) commented Mar 4, 2025

What does your use case look like?

@MoSal (Contributor Author) commented Mar 4, 2025

> What does your use case look like?

Just an example: playing a live MPEG-TS stream that is being saved to disk, with the fastest update / lowest delay possible, but with seek and slice:// support too. I need and use generated slice:// URLs in my own mpv scripts a lot (based on properties like stream-pos), so piping wouldn't work.

@sfan5 (Member) commented Mar 7, 2025

I see why a pipe wouldn't work for your use case. In that case adding these options should be fine.
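Once merged, usage might look like the following (the option names come from this PR; the file name and the specific values are made up for illustration):

```shell
# Play a file that another process keeps appending to, re-checking for
# new data every 0.1s, for up to 50 retries before giving up.
mpv --appending-retry-timeout=0.1 --appending-max-retries=50 \
    'appending://live_capture.ts'
```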
