
Weak form of a PDE #174

Open
LoveFrootLoops opened this issue Aug 3, 2023 · 14 comments
Labels
enhancement (New feature or request) · low priority (Low priority fix) · tutorials (Improvements or additions to tutorials)

Comments

@LoveFrootLoops

LoveFrootLoops commented Aug 3, 2023

Hi again :)

I'd like to bring up a topic that's been keeping me busy lately in the field of Physics-Informed Neural Networks (PINNs). During my research I've come across a number of papers that discuss how accurate and effective PINNs are, particularly when dealing with the strong form of a Partial Differential Equation (PDE). Interestingly, these papers reveal that in most cases PINNs trained on the strong form don't perform well and might even give incorrect results. However, they also highlight that PINNs work really well with weak forms of PDEs.

Just as an example, take a look at page 6 of Paper1.

Another example can be seen in Paper2, where they had to add an integral formulation of the PDE to the strong form to preserve global consistency.

In light of these observations, I would like to propose incorporating an integration method. Such an addition would enable formulating the loss function in weak form, with integration across the entire domain (effectively a sum over collocation points). By doing this, you could really enhance the performance of PINNs, bringing the library up to date with the latest techniques for accurately solving PDEs with neural networks.
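For concreteness, the kind of loss I have in mind replaces the pointwise strong residual with integrals of the residual against a set of test functions (a VPINN-style formulation; the notation below is only a sketch):

$$\mathcal{L}_{\text{weak}} \;=\; \sum_{k=1}^{K}\left(\int_{\Omega} r(x)\,v_k(x)\,\mathrm{d}x\right)^{2} \;\approx\; \sum_{k=1}^{K}\left(\frac{|\Omega|}{N}\sum_{i=1}^{N} r(x_i)\,v_k(x_i)\right)^{2}$$

where $r$ is the strong-form PDE residual, the $v_k$ are test functions, and the $x_i$ are the collocation points.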

@LoveFrootLoops LoveFrootLoops added the enhancement New feature or request label Aug 3, 2023
@dario-coscia dario-coscia added the v0.2 implementation in v0.2 label Aug 4, 2023
@dario-coscia
Collaborator

Hey @LoveFrootLoops, thank you for the suggestion. Together with the maintainers, we are thinking of including variational losses (weak formulation), but we are still not sure what the best way to do it is.

In the meantime, consider that we made a LossInterface class so that users can create a loss by inheriting from it. By using torch integration schemes (have a look at the nice torchquad library) and defining some basis test functions for your problem, I think you should already be able to train with a weak formulation.
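For a concrete (and hedged) illustration, here is what the weak residual could look like with torchquad for a 1D Poisson problem -u'' = f on [0, 1], using sin(kπx) test functions. The `model` and `f` names are illustrative, and the LossInterface wiring is left out; this is only the integral computation:

```python
import torch
from torchquad import Simpson, set_up_backend

set_up_backend("torch", data_type="float32")
simpson = Simpson()

def weak_loss(model, f, n_test=5):
    """Weak residual of -u'' = f on [0, 1]: for each test function
    v_k(x) = sin(k*pi*x), integrate u'(x) v_k'(x) - f(x) v_k(x) dx
    (one integration by parts; v_k vanishes on the boundary)."""
    loss = 0.0
    for k in range(1, n_test + 1):
        def integrand(x):
            # x arrives as an (N, 1) grid from torchquad
            x = x.detach().requires_grad_(True)
            u = model(x)
            du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
            v = torch.sin(k * torch.pi * x)
            dv = k * torch.pi * torch.cos(k * torch.pi * x)
            return (du * dv - f(x) * v).squeeze(-1)
        r_k = simpson.integrate(integrand, dim=1, N=101,
                                integration_domain=[[0.0, 1.0]])
        loss = loss + r_k ** 2
    return loss
```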

@dario-coscia
Collaborator

Hey @LoveFrootLoops did you manage to work it out?😊

@dario-coscia dario-coscia added the low priority Low priority fix label Sep 21, 2023
@dario-coscia
Collaborator

Hello again @LoveFrootLoops, in the end, did you manage to work it out? It would be fantastic to have a tutorial on it! We can chat about it, let me know :)

@LoveFrootLoops
Author

Hi @dario-coscia,

I haven’t progressed with the project since I found it challenging to keep up with the constant updates to the packages. Each time I attempted to update to a newer version, I encountered issues that prevented anything from working properly. Consequently, I’ve put a hold on using it until it stabilizes. I would appreciate an update from you regarding the version you’d recommend and whether it’s currently stable.

Regarding the implementation, I have been considering a potential solution. Given that the training process is batch-oriented, one possible approach could be to modify the on_epoch_end function to calculate the integral. This would align the calculation with the completion of an epoch rather than on a per-batch basis, which could be a more feasible integration point for the functionality I need.
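If it helps, here is a minimal sketch of that idea as a Lightning callback. The `residual_fn(pl_module, x)` callable is hypothetical (it should return the pointwise PDE residual as an (N,)-shaped tensor), and note that in recent PyTorch Lightning versions the hook is named on_train_epoch_end:

```python
import pytorch_lightning as pl
from torchquad import Simpson, set_up_backend

set_up_backend("torch", data_type="float32")


class IntegralMonitor(pl.Callback):
    """Compute (and log) a domain integral of the PDE residual once per
    training epoch instead of per batch."""

    def __init__(self, residual_fn, domain=None):
        self.residual_fn = residual_fn  # hypothetical: (pl_module, x) -> (N,)
        self.domain = domain or [[0.0, 1.0]]
        self.quad = Simpson()

    def on_train_epoch_end(self, trainer, pl_module):
        integral = self.quad.integrate(
            lambda x: self.residual_fn(pl_module, x),
            dim=len(self.domain),
            N=101,
            integration_domain=self.domain,
        )
        pl_module.log("residual_integral", integral)
```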

@dario-coscia
Collaborator

dario-coscia commented Nov 5, 2023

Hi @LoveFrootLoops, we plan to release v0.1 this November as a stable version; I will update you when the release is ready. We made new documentation with more details, updated all tutorials and examples, and introduced the Neural Operator learning framework (MIONet, DeepONet, Fourier Neural Operator, and more to come...). Also, we plan to set up a mailing list for university users, students, and developers, where we will post all the (minor) updates to the package.

After the release I will work on providing a tutorial on weak PINNs. After looking in more detail at VPINN and WPINN, the best approach is to define a new solver inheriting from PINN, where in the __init__ constructor you choose the basis functions you want. Then you just need to override the loss_phys function of PINN and everything should work (see #195 for more details; it will be merged soon). loss_phys is the method for computing the physical loss, in our case the weak loss (the integration can be done by quadrature rules using torchquad, or by a simple sum).
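A hedged sketch of what such a solver could look like, assuming a loss_phys(self, samples, equation) signature and an equation object exposing a residual(input, output) method (both are assumptions about the post-#195 API, not verified against it); the sine test functions are an illustrative choice:

```python
import torch
from pina.solvers import PINN  # import path is an assumption


class WeakPINN(PINN):
    """Sketch of a weak-form PINN: sine test functions are chosen in the
    constructor, and loss_phys is overridden to compute the weak residual."""

    def __init__(self, problem, model, n_test=5, **kwargs):
        super().__init__(problem=problem, model=model, **kwargs)
        self.n_test = n_test  # number of sin(k*pi*x) test functions on [0, 1]

    def loss_phys(self, samples, equation):
        # strong residual r(x) at the batch of collocation points
        residual = equation.residual(samples, self.forward(samples))
        loss = 0.0
        for k in range(1, self.n_test + 1):
            v = torch.sin(k * torch.pi * samples)  # test function v_k
            # weak residual ~ mean of r(x) v_k(x): a simple sum in place of
            # a quadrature rule; torchquad would be more accurate
            loss = loss + (residual * v).mean() ** 2
        return loss
```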

@LoveFrootLoops
Author

@dario-coscia, that sounds impressive. Well done!

I do have a query about the loss_phys function. Is it assessed solely at a batch of collocation points? If so, how would you evaluate an integral over the complete domain?

@dario-coscia
Collaborator

dario-coscia commented Nov 5, 2023

> @dario-coscia, that sounds impressive. Well done!
>
> I do have a query about the loss_phys function. Is it assessed solely at a batch of collocation points? If so, how would you evaluate an integral over the complete domain?

It depends on the batch_size you use. If you use batch_size=None in the SamplePointLoader, you will evaluate the integral over all the points used during training. Otherwise, points are batched, and loss_phys is assessed only on a batch of collocation points (as you correctly said).

@LoveFrootLoops
Author

@dario-coscia, I believe there might be an issue because you're looking to evaluate the local losses at batches of collocation points, whereas the integral losses need to be evaluated over the entire domain.

Just as a recommendation, I would distinguish between local and global losses. Global losses can be evaluated at each epoch end (on_epoch_end), while local losses should be trained per batch, i.e. at the training steps within the epoch (training_step). Something along these lines:
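A hedged sketch of that split in plain PyTorch Lightning. Manual optimization is needed here because, with automatic optimization, a loss computed at epoch end would never be stepped; pde_residual and global_integral_loss are hypothetical callables, and the hook is on_train_epoch_end in recent Lightning versions:

```python
import torch
import pytorch_lightning as pl


class LocalGlobalPINN(pl.LightningModule):
    def __init__(self, net, pde_residual, global_integral_loss, lr=1e-3):
        super().__init__()
        self.automatic_optimization = False  # step the optimizer manually
        self.net = net
        self.pde_residual = pde_residual  # hypothetical: (net, x) -> residuals
        self.global_integral_loss = global_integral_loss  # hypothetical: net -> scalar
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # local loss: strong residual on this batch of collocation points
        opt = self.optimizers()
        local = self.pde_residual(self.net, batch).pow(2).mean()
        opt.zero_grad()
        self.manual_backward(local)
        opt.step()

    def on_train_epoch_end(self):
        # global loss: integral over the whole domain, once per epoch
        opt = self.optimizers()
        glob = self.global_integral_loss(self.net)
        opt.zero_grad()
        self.manual_backward(glob)
        opt.step()

    def configure_optimizers(self):
        return torch.optim.Adam(self.net.parameters(), lr=self.lr)
```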

@dario-coscia
Collaborator

Hi @LoveFrootLoops, we merged the new version, along with completely new documentation.

I do not understand what you are saying. If the batch size is None, all points sampled for training are used to evaluate the loss; in that case, 1 batch iteration = 1 epoch iteration.

Anyway, would you like to start a draft PR in which we can collaborate on a simple tutorial using VPINN or WPINN? This way I can help you with the problem directly in the code. Let me know!

@dario-coscia
Collaborator

Hi @LoveFrootLoops 👋🏻 and happy new year! Just checking if you managed to implement the integral loss function using torchquad and the on_epoch_end method of the callback. If you need extra help let us know; also, in case you want something to be implemented, let's post it here, since we plan to add integral losses soon.

@LoveFrootLoops
Author

> Hi @LoveFrootLoops 👋🏻 and happy new year! Just checking if you managed to implement the integral loss function using torchquad and the on_epoch_end method of the callback. If you need extra help let us know; also, in case you want something to be implemented, let's post it here, since we plan to add integral losses soon.

Hi @dario-coscia,

Happy New Year! Just wanted to let you know that I got the integral working with torchquad. I'm thinking of setting up a benchmark problem, but things are a bit hectic right now. Should have some time around mid-February to sort that out.

About the PyTorch Lightning thing – yeah, it's all about batches and training steps. We'll need to figure out how to make all the collocation points available in each batch, with something like a domain_points variable. There's also a bit of juggling to make sure the loss sizes match up, since some residuals will be sized by the batch and others by all the domain points; a rough sketch of what I mean is below.
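Roughly, with everything here (domain_points, residual_fn, test_fn) being hypothetical names, the idea is to reduce each term to a scalar before combining, so the batch-sized term and the full-domain term can live in one loss:

```python
def total_loss(model, batch_points, domain_points, residual_fn, test_fn):
    # local term: mean squared strong residual on the current batch
    local = residual_fn(model, batch_points).pow(2).mean()
    # global term: weak residual over all domain points
    # (a plain mean as the quadrature rule; torchquad would be more accurate)
    r = residual_fn(model, domain_points)
    global_term = (r * test_fn(domain_points)).mean().pow(2)
    return local + global_term
```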

@dario-coscia dario-coscia added tutorials Improvements or additions to tutorials and removed v0.2 implementation in v0.2 labels Feb 6, 2024
@dario-coscia
Collaborator

Wow! That's super that you managed to do it 🚀🚀

Yeah, it would be great to do a tutorial for the integral loss using torchquad on a simple benchmark problem (I added the label to the PR). Maybe a simple Helmholtz equation with PBC (periodic boundary conditions), using trigonometric coordinates? (This way we just minimize the residual and do not need to care about the boundaries; see the sketch below.) Let me know what you think!
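For the trigonometric coordinates, a minimal sketch of the idea (my own illustrative module, not PINA API): embed the input through cos/sin features so that any network applied on top is exactly periodic, and no boundary loss is needed.

```python
import torch


class PeriodicEmbedding(torch.nn.Module):
    """Map x on a periodic domain of length L to (cos, sin) features, so any
    network applied on top is exactly periodic in x."""

    def __init__(self, L=1.0):
        super().__init__()
        self.L = L

    def forward(self, x):
        w = 2.0 * torch.pi / self.L
        return torch.cat([torch.cos(w * x), torch.sin(w * x)], dim=-1)
```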

@AleDinve
Collaborator

AleDinve commented Apr 5, 2024

Hi! I'm planning to implement VPINNs in #263 by introducing a variational loss class. Could it be useful?

@dario-coscia
Collaborator

Hi! I think @LoveFrootLoops already implemented a sort of variational PINN using a variational loss. Maybe we can all work together to introduce variational losses in PINA?
