[nexus] Add service to DB, add rack initialization API #1169

Conversation
@@ -72,6 +74,35 @@ CREATE TABLE omicron.public.sled (
    last_used_address INET NOT NULL
);

/*
 * Services
In this PR, this table isn't actually being read, but it can be written.
I expect this will be the future format Nexus uses to identify "what services are running where, and where should I boot new services".
    // This is a no-op, since we conflicted on the ID.
    .set(dsl::id.eq(excluded(dsl::id)))
See #1168 for an explanation of why I'm not calling `.do_nothing()` here. Doing so would cause the test I added to fail.
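A minimal, self-contained sketch of the upsert pattern shown in the diff above, using a hypothetical `service` table (the real schema in this PR has more columns) and assuming diesel 2.x with the "postgres" and "uuid" features. The point it illustrates: a no-op `DO UPDATE SET id = excluded.id` still returns the existing row on conflict, whereas `.do_nothing()` would return zero rows and break "insert or fetch existing" callers.

```rust
use diesel::prelude::*;
use diesel::upsert::excluded;

diesel::table! {
    service (id) {
        id -> Uuid,
        sled_id -> Uuid,
        kind -> Text,
    }
}

#[derive(Queryable, Insertable)]
#[diesel(table_name = service)]
pub struct Service {
    pub id: uuid::Uuid,
    pub sled_id: uuid::Uuid,
    pub kind: String,
}

pub fn upsert_service(
    conn: &mut PgConnection,
    svc: Service,
) -> QueryResult<Service> {
    use self::service::dsl;

    diesel::insert_into(dsl::service)
        .values(&svc)
        .on_conflict(dsl::id)
        .do_update()
        // No-op update: conflicting on the ID means the row already exists,
        // but unlike `.do_nothing()` this still returns that row to the caller.
        .set(dsl::id.eq(excluded(dsl::id)))
        .get_result(conn)
}
```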
    method = PUT,
    path = "/racks/{rack_id}/initialization_complete",
}]
async fn rack_initialization_complete(
This is the API I expect RSS to invoke.
I'll submit the call to it in a follow-up PR, but wanted this one to be focused on the Nexus side of things.
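A hedged sketch of what this endpoint might look like in full. The context type, path-parameter struct, and handler body are illustrative rather than the PR's actual code (the real handler presumably also takes a request body describing the services RSS launched), and it assumes an older dropshot where handlers take `Arc<RequestContext<C>>` (newer releases take it by value) plus schemars built with uuid support.

```rust
use dropshot::{
    endpoint, HttpError, HttpResponseUpdatedNoContent, Path, RequestContext,
};
use schemars::JsonSchema;
use serde::Deserialize;
use std::sync::Arc;

// Placeholder server context for the sketch; Nexus has its own.
struct ServerCtx;

#[derive(Deserialize, JsonSchema)]
struct RackPathParam {
    rack_id: uuid::Uuid,
}

/// Called by RSS to report that rack initialization has completed.
#[endpoint {
    method = PUT,
    path = "/racks/{rack_id}/initialization_complete",
}]
async fn rack_initialization_complete(
    _rqctx: Arc<RequestContext<ServerCtx>>,
    path_params: Path<RackPathParam>,
) -> Result<HttpResponseUpdatedNoContent, HttpError> {
    let rack_id = path_params.into_inner().rack_id;
    // The real handler would record the services and mark the rack as
    // initialized in the database; here we just acknowledge the request.
    let _ = rack_id;
    Ok(HttpResponseUpdatedNoContent())
}
```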
Let me make sure I understand the sequencing. RSS configures a bunch of services to run on various sleds, among which is `nexus`. This represents the handshake between RSS and `nexus`, at which point `nexus` takes over control of the rack, is that right?
Relatedly, what happens if this fails? It occurs to me this line of questioning is more appropriate on the follow-up PR you mention, so feel free to defer :)
There's a lot of detail on this handshake in RFD 278, if you want more context. But yeah, that description is accurate.
In the future, I expect Nexus to refuse to serve the external interface until this method succeeds - so if it fails, the internal API would remain accessible, but the external API would stay offline. I think this scenario would require operator intervention, but would be very obvious as it would happen during rack setup.
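A minimal sketch of that gating idea, with hypothetical names (Nexus's real external-API wiring will differ): external handlers consult an "initialized" flag and refuse service until the rack-initialization API has succeeded, while internal handlers skip the check.

```rust
use std::sync::atomic::{AtomicBool, Ordering};

pub struct ExternalApiGate {
    rack_initialized: AtomicBool,
}

impl ExternalApiGate {
    pub fn new() -> Self {
        Self { rack_initialized: AtomicBool::new(false) }
    }

    // Called by the internal rack-initialization handler once it succeeds.
    pub fn mark_initialized(&self) {
        self.rack_initialized.store(true, Ordering::SeqCst);
    }

    // External handlers would call this first and fail (e.g. with 503
    // Service Unavailable) until initialization has completed.
    pub fn check(&self) -> Result<(), &'static str> {
        if self.rack_initialized.load(Ordering::SeqCst) {
            Ok(())
        } else {
            Err("rack not yet initialized; external API unavailable")
        }
    }
}
```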
I need to spend some time with diesel's docs, but this all LGTM
@@ -21,6 +23,7 @@ impl super::Nexus {
    pub(crate) fn as_rack(&self) -> db::model::Rack {
        db::model::Rack {
            identity: self.api_rack_identity.clone(),
            initialized: true,
Out of curiosity, why is this `true`? (Not suggesting it's wrong, I just don't have context for this method.)
The answer is kinda handwavey, but TL;DR: "At the moment, it doesn't actually matter".

- This `as_rack` method is being used because all references to the `db::model::Rack` object are synthesized - it's pulling it out of memory, not the DB.
- In a follow-up PR, I would like Nexus to read a UUID for the rack from a config file, and insert it if it does not exist (with the `initialized` value set to `false`).
- Then, I'd like Nexus to read the value for `db::model::Rack` from the DB, and to remove this method altogether.
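A minimal sketch of the follow-up flow described in the list above, with stubbed-out types; `RackConfig`, `Datastore::rack_upsert`, and `Datastore::rack_fetch` are hypothetical names, not the PR's actual API.

```rust
use uuid::Uuid;

struct RackConfig {
    rack_id: Uuid,
}

struct Rack {
    id: Uuid,
    initialized: bool,
}

struct Datastore;

impl Datastore {
    // Insert the rack with `initialized = false` if no row with this ID
    // exists yet; a conflict on the ID is a no-op.
    async fn rack_upsert(&self, _rack: Rack) -> Result<(), String> {
        Ok(())
    }

    // Read the rack back from the database.
    async fn rack_fetch(&self, id: Uuid) -> Result<Rack, String> {
        Ok(Rack { id, initialized: false })
    }
}

// Replaces the hard-coded `as_rack()`: the rack comes from the DB, so
// `initialized` reflects whether the rack-initialization API has run.
async fn load_rack(config: &RackConfig, db: &Datastore) -> Result<Rack, String> {
    db.rack_upsert(Rack { id: config.rack_id, initialized: false }).await?;
    db.rack_fetch(config.rack_id).await
}
```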
nexus/src/app/rack.rs
Outdated
    //
    // 1. There should only be a single RSS process executing at once.
    // 2. Once the RSS propagates information about services to Nexus,
    //    they should be recorded to a local plan that does not change.
RSS records its plan locally before propagating it to nexus, right? If it happened after, I'd have a question about whether a sequence like this were possible:
1. RSS determines a plan
2. RSS sends the plan to nexus
3. Something goes wrong partway through
4. RSS starts over and determines a different plan
where step 3 left bogus services from the initial plan in the db. Is there a downside to putting this in a transaction now?
As discussed on the RFD (https://github.com/oxidecomputer/rfd/pull/434#discussion_r888200485), if the call here failed, I think we'd re-try using the same plan.
That being said, I went ahead and moved all these calls into a transaction - this is a low-frequency call; it's probably worth the cost to not worry about safety issues.
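A hedged sketch of what moving those calls into a single transaction can look like at the diesel level, reusing the illustrative `Service` / `upsert_service` definitions from the earlier upsert sketch; the PR itself presumably goes through Nexus's async datastore layer rather than a bare `PgConnection`.

```rust
use diesel::prelude::*;
use diesel::result::Error as DieselError;

pub fn record_service_plan(
    conn: &mut PgConnection,
    services: Vec<Service>,
) -> Result<(), DieselError> {
    // Either every service row from the plan is recorded, or none are: a
    // failure partway through rolls back and leaves no partial plan behind.
    conn.transaction::<_, DieselError, _>(|conn| {
        for svc in services {
            upsert_service(conn, svc)?;
        }
        Ok(())
    })
}
```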
Nice work Sean, as usual! A few questions, and some small fixups I think, but nothing structural.
Thanks for addressing those questions. Looks good to me, once those fixes are pushed!
This PR implements the "internal API" from RFD 278, which will be invoked by RSS to initialize the rack. Currently, this API has no callers, and nothing is blocked on the value of `rack.initialized` being set.

Fixes #1149