Allow boundary nodes to move in the Laplace Mesh Smoother #3399
base: devel
Conversation
Force-pushed from 56bc7a3 to 1e1c25e
Job Test MOOSE mac on 1e1c25e: invalidated by @roystgnr. Kicking a mac CI; intermittent Civet problem there.
Force-pushed from 1e1c25e to bdf5bc0
Force-pushed from bdf5bc0 to b55cd94
Needs some more testing.
Force-pushed from 3a0c628 to 2711276
Hm, there is a small diff on one node in parallel. That happens for both movable and non-movable boundary nodes. It seems to me that the Laplace smoother wasn't tested before either (maybe in MOOSE). I'm trying to debug this. Is there a way to run a single unit test? The libMesh tests take quite a long time.
In other infuriating news, the new unit test isn't even run when I build and make check locally, yet it is run on Civet. That makes debugging pretty tedious.
You can run the unit_tests-opt executable directly and pass --re Foo on the command line to run only tests with names matching a regular expression.
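For example, something like this (the regular expression here is just a placeholder; match it against whatever your new test class is called):

```sh
# run only the unit tests whose names match the given regular expression
./unit_tests-opt --re MeshSmoother
```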
I kind of like …
Probably you've got a build without CppUnit? When you …
Force-pushed from 2711276 to cf06771
The unit test failed in parallel until I skipped partitioning. In MOOSE, the same test (calling the smoother through the MOOSE API) never failed in parallel. I cannot see the difference, though. MOOSE does not skip partitioning. Is there a communication step I'm missing?
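For anyone following along, here is a minimal standalone sketch of the workaround being described (the mesh size, element type, and iteration count are arbitrary; the real unit test uses the CppUnit harness instead of its own main()):

```cpp
#include "libmesh/libmesh.h"
#include "libmesh/replicated_mesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh_smoother_laplace.h"
#include "libmesh/enum_elem_type.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init(argc, argv);

  ReplicatedMesh mesh(init.comm());

  // The workaround in question: ask libMesh not to partition the mesh, so
  // every rank keeps the same serial view of it before the smoother runs.
  mesh.skip_partitioning(true);

  // Small 4x4 QUAD4 grid on the unit square.
  MeshTools::Generation::build_square(mesh, 4, 4, 0., 1., 0., 1., QUAD4);

  LaplaceMeshSmoother smoother(mesh);
  smoother.smooth(1); // one smoothing iteration

  return 0;
}
```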
MOOSE module failure is unrelated.
Does it fail repeatedly in parallel in the unit test, or intermittently? Does it fail in every parallel CI recipe, or just some subset (e.g. DistributedMesh) of them?
Job Test MOOSE mac on cf06771: invalidated by @roystgnr. Stochastic Tools failed stochastically; let's see if we can get the whole board green.
Looks like all your unit test invocations are hardcoding … I'd be very surprised if the mesh smoother classes worked on a truly distributed mesh; that's all older code, and I don't recall anyone ever updating it to work distributed. We might need to throw a …
Yeah, same deal with ReplicatedMesh. That's what I had in there at first. I can easily double those tests up to check both; that's why it's templated. I'm assuming that without the partitioning both mesh types should behave the same?
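For reference, the doubling-up can stay very thin. A sketch (names made up, assuming the includes and the TestCommWorld communicator from the existing unit test files):

```cpp
// Shared test body, parameterized on the mesh type so identical checks run
// for both ReplicatedMesh and DistributedMesh.
template <typename MeshType>
void runSmootherTest ()
{
  MeshType mesh(*TestCommWorld);
  mesh.skip_partitioning(true);
  // ... build the mesh, smooth, and assert on node positions as above ...
}

// Two wrappers the CppUnit suite can register, one per mesh type.
void testReplicatedSmooth ()  { runSmootherTest<ReplicatedMesh>();  }
void testDistributedSmooth () { runSmootherTest<DistributedMesh>(); }
```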
All parallel unit tests fail consistently. The stochastic tools test failure is also, ironically, deterministic. There was a PR to fix that failure (not sure if it has percolated through the branches yet). Update: it seems like it has.
It's possible to see differences even without partitioning, depending on how the meshes are generated. For any mesh generation code that adds new unpartitioned objects but doesn't impose an explicit numbering, the implicitly assigned numbering ends up being sequential on ReplicatedMesh (because every rank knows what every other rank is doing, so that's safe) but takes strides of length (Nproc+1) on DistributedMesh (because even if my rank is only adding unpartitioned-hence-serialized objects, I have to leave some space available for my neighbors to also add new local objects that I can't see, if they wish).

I haven't looked in detail at your code or at the code you're modifying, but mistakenly assuming that the maximum elem id == the number of elements is a common bug that tends to get tripped in DistributedMesh runs more easily than in ReplicatedMesh runs, for that reason.

But if it's not the replicated/distributed difference making the distinction, then I don't know what's going on that could have MOOSE runs working and unit tests failing.
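To make that concrete, here is a sketch of the pitfall being described (not a claim about what this PR or the smoother code actually does), using the MeshBase interface:

```cpp
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

void visit_elements (MeshBase & mesh)
{
  // Buggy pattern: assumes element ids are contiguous in [0, n_elem()).
  // With unpartitioned additions on a DistributedMesh the implicit numbering
  // can leave gaps, so some of these ids won't exist.
  for (dof_id_type i = 0; i < mesh.n_elem(); ++i)
    {
      Elem * elem = mesh.elem_ptr(i); // may be asked for a nonexistent id
      libmesh_ignore(elem);
    }

  // Safe pattern: iterate over the elements that actually exist.
  for (auto & elem : mesh.element_ptr_range())
    libmesh_ignore(elem);

  // If ids must be looped over directly, go up to max_elem_id() and use the
  // query variant, which returns nullptr for ids that don't exist here.
  for (dof_id_type i = 0; i < mesh.max_elem_id(); ++i)
    if (Elem * elem = mesh.query_elem_ptr(i))
      libmesh_ignore(elem);
}
```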
Is anything else needed here?
Any pictures of the 3D test cases?
I also think it would be good if you could add some documentation about the checks you are doing in the has_zero_curvature() function.
Those are harder to visualize, but check out the Exodus gold files.
Oh dear, hang on... I think something went horribly wrong...
Can you elaborate? I guess you will let us know when this is ready again by dropping the Draft status on the PR...
I made some screenshots of the 3D case. Since the input mesh is a cube with all flat sides, I would not expect the smoother (with boundary node movement allowed) to be "allowed" to move any boundary nodes that lie on the edges/corners of the cube, only those that live on the "faces" of the boundary, and then only within the planes defined by those faces. I guess the problem is that the "curvature" here is determined to be zero at the edge/corner nodes, so they are allowed to move? So I would say this is not really working in the 3D case yet... what are your thoughts?
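For illustration only, and not a claim about what has_zero_curvature() actually does: one way to tell face-interior nodes from edge/corner nodes on a polyhedral boundary is to compare the unit normals of the boundary sides touching the node. If they are all (anti)parallel the node sits on a flat patch and can slide within that plane; otherwise it sits on an edge or corner and should stay fixed. A hypothetical helper along those lines, assuming 3D planar side elements:

```cpp
#include "libmesh/elem.h"
#include "libmesh/point.h"

#include <memory>
#include <vector>

using namespace libMesh;

// Hypothetical check: do all boundary sides adjacent to a node lie in a
// common plane?  Assumes 3D planar side elements (e.g. TRI3/QUAD4 faces).
bool node_is_on_flat_patch (const std::vector<std::unique_ptr<const Elem>> & adjacent_sides,
                            const Real tol = 1e-10)
{
  std::vector<Point> normals;
  for (const auto & side : adjacent_sides)
    {
      // Planar-side normal from two edge vectors of the side.
      const Point a = side->point(1) - side->point(0);
      const Point b = side->point(2) - side->point(0);
      Point n = a.cross(b);
      normals.push_back(n / n.norm());
    }

  // Parallel or antiparallel normals have a (near-)zero cross product.
  for (std::size_t i = 1; i < normals.size(); ++i)
    if (normals[0].cross(normals[i]).norm() > tol)
      return false; // normals disagree: edge or corner node

  return true; // all sides coplanar: node may slide within the boundary plane
}
```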
Sorry to vanish on you last week. My one concern is the failures on partitioned meshes. Running on a distributed mesh is kind of a special use case, but running on a partitioned mesh is a requirement for most users, no? And if we don't know why it's failing there, then can we even be confident that it's just a bug with the new feature, not a regression of previous smoother behavior too?
The "oh dear" was my reaction to visualizing the 3D result, which is obviously broken, but worked intermediately. Then I changed something and mindlessly regolded :-(. I was on leave last week and will get back to this. |
Hm, I think I'd better scale back the scope of this PR to 2D only, if that is acceptable. The 3D case is a bit more complicated than I anticipated. To do this right there has to be an additional condition: nodes sliding along edges when all connecting elements are "flat".
This is work in progress, but I want to check if I'm already breaking stuff before continuing.
Closes #3385