feat: introduce App::run_return
#12668
base: dev
Conversation
Package Changes Through ba70a4f

There are 9 changes, which include tauri-cli with minor, tauri-runtime with minor, tauri-runtime-wry with minor, tauri-utils with minor, tauri with minor, @tauri-apps/api with minor, @tauri-apps/cli with minor, tauri-bundler with patch, and tauri-macos-sign with minor.

Planned Package Versions

The following package releases are planned based on the context of changes in this pull request.
Add another change file through the GitHub UI by following this link. Read about change files or the docs at github.com/jbolda/covector
Do we want to deprecate run_iteration?
Force-pushed from 5b41a90 to e48e8c3
crates/tauri-runtime-wry/src/lib.rs (outdated)

```diff
@@ -2840,13 +2843,13 @@ impl<T: UserEvent> Runtime<T> for Wry<T> {
     let active_tracing_spans = self.context.main_thread.active_tracing_spans.clone();
     let proxy = self.event_loop.create_proxy();

-    self.event_loop.run(move |event, event_loop, control_flow| {
+    self.event_loop.run_return(move |e, event_loop, cf| {
```
I renamed these variables here so that this stays on one line. That way, the git diff shows nicely that run_return is effectively what run used to be.
crates/tauri/src/app.rs (outdated)

```rust
pub fn run_return<F: FnMut(&AppHandle<R>, RunEvent) + 'static>(
  mut self,
  mut callback: F,
) -> std::result::Result<i32, Box<dyn std::error::Error>> {
```
I've just used the same error as from the setup callback. It is missing Send + Sync + 'static bounds unfortunately, but that is already the case and can't be changed now, I think.
I just tried it locally, and crate::Result<i32> works for me.
imo yes
iirc run_return also works on android (though probably good to test that first), only iOS should be a problem. I'm wondering whether we should really go for the cfg flag you used or just try to do what winit for example does and just document that it doesn't return ever on iOS (using tao's run instead of run_return for iOS internally). didn't look too much into the code yet so idk how feasible that is.
Removing the cfg is a semver-compatible change so we can always do that later? I'd prefer an incremental approach if possible! :)
Deciding on what the mobile story is here seems like a different problem to me that I'd rather not tackle, also because I don't know the internals of tao and Tauri well enough :)
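For illustration, a minimal sketch of the cfg-gating idea discussed above; the stub function is a hypothetical stand-in, not the actual Tauri code. The point is that an item gated this way can be exposed on more targets later without breaking semver, because that only adds API surface.

```rust
// Hypothetical stand-in for a desktop-only API, gated behind a cfg alias like the
// one Tauri defines in its build script. On targets where the cfg is not set,
// the item simply does not exist.
#[cfg(desktop)]
fn run_return_stub() -> i32 {
  0
}

fn main() {
  #[cfg(desktop)]
  println!("exit code: {}", run_return_stub());

  // Exposing the API here later is purely additive, hence semver-compatible.
  #[cfg(not(desktop))]
  println!("run_return is not available on this target");
}
```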
What can be done to push this forward? I really want to see this feature in
Unfortunately, the repository settings prevent me from effectively iterating on this because CI has to be approved constantly and running
Wasn't as much work as I thought it would be (wuhu AI!) so iterating locally now.
Should be compiling now. Some of the tests also fail on
Yey all green!
@FabianLars @WSH032 Ready for another review round. I've reverted the change of returning Result. Refactoring this can be left to a future PR.
> I've reverted the change of returning Result from run_return

I have no opinion on this. I just think that if run_return does return a Result, I would prefer it to be tauri::Result<T> (tauri::Error::Setup) rather than std::result::Result<T, Box<dyn Error>>.

Regarding the lifecycle, we should look at @FabianLars's opinion.
I agree. Unfortunately, returning a
I think this is possible but haven't looked at the code of all platforms. It is more involved than what I'd like to do at the moment.
It is more correct* and i'd consider it a bug in run_iteration that it didn't do that as well.
(take this with a grain of salt, i'm on my phone so basically blind) What has the callback to do with that? As far as i can see you'd just return from run_return where right now there's a panic, completely ignoring the callback. The event loop should get stopped simply by dropping it (which we do by returning).
I don't really care about the error type we return, i'd just like to get rid of the panic here tbh. If this is not possible, then the function's documentation should mention the panic just in case.
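To make that suggestion concrete, here is a minimal sketch with hypothetical stand-ins for the Tauri and tao internals (not the real run_return code): when setup fails, the function returns an exit code instead of panicking, and the event loop is stopped simply because returning drops it.

```rust
// Stand-in for tao's event loop, only here to make the ownership visible.
struct EventLoop;

fn setup() -> Result<(), String> {
  Err("something went wrong during setup".into())
}

// Sketch: return an exit code instead of panicking on a setup error.
// Returning leaves the function, so `event_loop` is dropped, which is
// what actually shuts the loop down.
fn run_return(event_loop: EventLoop) -> i32 {
  if let Err(e) = setup() {
    // previously this spot would panic, ignoring the callback entirely
    eprintln!("error while setting up the app: {e}");
    return 1;
  }
  // ... otherwise drive the event loop until the user exits ...
  drop(event_loop);
  0
}

fn main() {
  std::process::exit(run_return(EventLoop));
}
```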
Force-pushed from 3db3088 to d521c2e
I opened #12815 because I think there are a few more things to discuss here in trying to get rid of that panic.
@FabianLars I rebased on dev.
i just deleted the draft i wrote 2 hours ago that i forgot to send 👏 tldr: looks good, any improvements in follow up prs will probably result in breaking changes but that's fine 🤷
I thought about this. I think we could still make a backwards-compatible change by returning an exit-code of -1 in case there is an error in the setup and just log it on ERROR or WARN level.
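A minimal sketch of that idea with hypothetical names (not the actual Tauri internals): keep the plain i32 return type and translate a setup failure into exit code -1 plus a log line.

```rust
use std::fmt::Display;

// Hypothetical helper: map a setup result to an optional exit code without
// changing an i32-returning signature. A real implementation would go through
// the `log` crate instead of eprintln!.
fn exit_code_for_setup<E: Display>(setup_result: Result<(), E>) -> Option<i32> {
  match setup_result {
    Ok(()) => None, // setup succeeded, keep running the event loop
    Err(e) => {
      eprintln!("ERROR: app setup failed: {e}");
      Some(-1) // report the failure through the exit code, not a panic or Result
    }
  }
}

fn main() {
  // Example: a failing setup becomes exit code -1.
  let code = exit_code_for_setup(Err("could not create the main window")).unwrap_or(0);
  std::process::exit(code);
}
```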
I've been testing this integration in our application and it works quite well: firezone/firezone#7959. I think it might be worth showing this in a more complex example. The basic idea is:
In our case, we don't actually use the return value from
just a fyi, i'll have to keep this pr open/unmerged for a few more days. I wanted to include it in 2.3.0 but that was just released (i only was told once ci was running 🤷) and we'll need to delay minor changes a bit to be able to quickly release new patches if something went wrong in 2.3.0
Okay no worries :) Thanks for the reviews! Looking forward to it being released!
Just wanted to chime in and say this PR fixes the busy main loop problem for me on Linux. Thanks! I'm looking forward to seeing it (maybe) in the 2.4 release. Though there's a side-effect of not being able to use
Ok, after using the new run_return, and after trying a bunch of things to poke Tauri into closing the windows before exiting the event loop, I may have found why the cleanup isn't happening. My best guess is that cleaning up the windows requires sending events, and if we exit the event loop too early, those events never get processed?

Here's some code I've figured out that successfully manages to close the windows before the event loop exits, but it requires giving the event loop some extra time to do ... something before allowing it to stop.

```rust
fn try_main(args: Args) -> Result<Code, Error> {
  // start the tokio runtime
  tokio::runtime::Builder::new_multi_thread()
    .enable_time()
    .build()
    .wrap_err()
    .context(ctx!("Failed to start tokio runtime"))?
    .block_on(async {
      tauri::async_runtime::set(tokio::runtime::Handle::current());
      let state = AppStateContainer::new(args);
      debug!("Starting tauri ...");
      // run the tauri app
      let exit = tauri::Builder::default()
        .manage(state.clone())
        .invoke_handler(tauri::generate_handler![
          status
        ])
        .setup(|app| setup(app).map_err(Into::into))
        .build(tauri::generate_context!())
        .wrap_err()
        .context(ctx!("Failed to build tauri app"))?
        .run_return({
          let mut exiting = false;
          move |app, event| match event {
            tauri::RunEvent::ExitRequested { api, .. } => {
              // if we've already been here before, just let the event loop exit
              if exiting {
                return;
              }
              exiting = true;
              // otherwise, prevent the event loop from exiting
              api.prevent_exit();
              // then hide all the windows
              for win in app.webview_windows().values() {
                match win.hide() {
                  Ok(()) => info!("Hid window {}!", win.label()),
                  Err(e) => warn!("Failed to hide window {}: {}", win.label(), e)
                }
              }
              // and set a task to call exit again later,
              // to give the event loop time to actually hide the windows
              tokio::task::spawn({
                let app = app.clone();
                async move {
                  debug!("Waiting for event loop to hide things ...");
                  tokio::time::sleep(std::time::Duration::from_millis(200))
                    .await;
                  debug!("Wait complete: burn it down!");
                  app.exit(0)
                  // DANGER WILL ROBINSON: if app.exit() fails internally, it will call std::process::exit() !!!
                }
              });
            }
            _ => () // ignore other events
          }
        });
      debug!("Tauri finished, cleaning up state ...");
      // clean up the state
      state.close()
        .await?;
      debug!("Finished");
      Ok(Code::new(exit))
    })
}
```

This code seems to function as a workaround, but it's not ideal. I'm not sure how to use this info to propose a change to the way Tauri works, but the solution here seems to be allowing the event loop to process the event queue before actually exiting. Maybe that can help someone who knows Tauri better to come up with a fix?
In my use case, I am calling
I guess we could extend the internal "cleanup before exit" handler perhaps?
In my case, I'm triggering "exit" by clicking the close icon on the last window. I'm not sure how to make that happen earlier or later. But after the window closes, and before the process can exit, I need to make sure other resources get cleaned up. Like the
I tried poking around the Tauri code a bit to see if I could find a solution. Maybe I have something? This re-implementation of run_return:

```rust
#[cfg(desktop)]
fn run_return<F: FnMut(RunEvent<T>) + 'static>(mut self, callback: F) -> i32 {
  use tao::platform::run_return::EventLoopExtRunReturn;
  let event_handler = make_event_handler(&self, callback);
  let exit = self.event_loop.run_return(event_handler);
  // after the event loop is ready to exit, run one more iteration to dispatch any cleanup messages
  // otherwise, the windows will become orphaned from the Rust code and remain visible until the process exits
  self.run_iteration(|_| {});
  exit
}
```

This seems to work in my case. Is this an ok thing to do in general? I don't really know why it works, but maybe someone else knows a better way to get the cleanup to finish out.
We want to deprecate run_iteration. I am happy for other people to jump in here; I am not gonna get to work on this for a few days.
I finally found a way to do async state cleanup while Tauri's event loop is still running, so it's no longer necessary to rely on Tauri's internal cleanup to finish successfully. I'll share the workaround here for posterity, in case it might help anyone else:

```rust
fn main() {
  // start the tokio runtime
  tokio::runtime::Builder::new_multi_thread()
    .build()
    .unwrap()
    .block_on(async {
      tauri::async_runtime::set(tokio::runtime::Handle::current());
      // create our state
      let state = MyState::new();
      // run the tauri app
      tauri::Builder::default()
        .manage(state)
        .invoke_handler(...)
        .setup(...)
        .build(...)
        .unwrap()
        .run({
          let mut exiting = false;
          move |app, event| match event {
            tauri::RunEvent::ExitRequested { api, .. } => {
              // if we've already been here before, just let the event loop exit
              if exiting {
                return;
              }
              exiting = true;
              // don't exit just yet: still need to cleanup
              api.prevent_exit();
              // start a task to cleanup the state, since we need async
              tokio::task::spawn({
                let app = app.clone();
                async move {
                  // get a reference to our state from Tauri
                  let state = app.state::<MyState>()
                    .deref()
                    .clone();
                  // clean up the state
                  state.close()
                    .await
                    .unwrap();
                  // ask tauri event loop to exit again
                  app.exit(0)
                }
              });
            }
            _ => ()
          }
        });
    })
}
```

Error handling and Tauri configuration were omitted for brevity. This pattern should work on the release version of Tauri too.
In our case, the equivalent of your
I am almost certain we can achieve the same thing with some modifications to the internals of
The current App::run_iteration function is buggy because it busy-loops. To address the use-case of cleaning up resources in the host program, we introduce App::run_return, which builds on top of tao's EventLoop::run_return.

Related: #8631.
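For context, a minimal usage sketch of the new API as it stands in this PR (assuming a standard Tauri v2 project; error handling omitted): unlike run, run_return hands control back to the caller once the event loop exits, so the host program can clean up before the process actually terminates.

```rust
fn main() {
  let app = tauri::Builder::default()
    .build(tauri::generate_context!())
    .expect("failed to build tauri app");

  // run_return drives the event loop like run(), but instead of never returning
  // it hands back an exit code once the loop has finished.
  let exit_code = app.run_return(|_app_handle, _event| {
    // react to RunEvents here, exactly like the callback passed to run()
  });

  // host-program cleanup can happen here, before the process exits
  println!("tauri event loop exited with code {exit_code}");
  std::process::exit(exit_code);
}
```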