In a different direction from certain communities that are wondering whether outreach to applied fields is a good thing, Bernard Beauzamy, a mathematician by trade and owner of SCM, hosted a small workshop last week on the limits of modeling ("Les limites de la modélisation"). The workshop featured speakers who are specialists in their fields, each presenting their domain expertise in light of how mathematical modeling helped, or failed to help, answer their complex questions. We are not talking about just some optimization function with several objectives, but rather a deeper questioning of how the modeling of reality and reality itself clash with each other. While the presentations were in French, some of the slides need little translation if you are coming from English. Here is the list of talks with links to the presentations:
9:00 – 10:00 am: Dr. Riadh Zorgati, EdF R&D: Energy management; modeling attempts: successes and failures.
11:00 am – 12:00 pm: Dr. France Wallet, health and environmental risk assessment, EdF Health Service: Modeling in environmental health: pitfalls and limits.
2:00 – 3:00 pm: Mr. Giovanni Bruna, Deputy Director, Reactor Safety Division, Institut de Radioprotection et de Sûreté Nucléaire: Simulation versus experiment: which is right? The MOX fuel experience.
4:00 – 5:00 pm: Mr. Xavier Roederer, Inspector, Mission Contrôle Audit Inspection, Agence Nationale de l'Habitat: Can one predict without modeling?
I could only attend two of the talks: the first and the third. In the first talk, Riadh Zorgati spoke about modeling as applied to electricity production. He did a great job of laying out the different timescales and the attendant need for algorithmic simplification when it comes to planning and scheduling electricity production in France. Every power plant and hydraulic resource owned by EDF (the main utility in France) has different operating procedures and capabilities with respect to how it can deliver power to the grid. Since electricity production and consumption must be in continuous equilibrium, one aspect of the scheduling involves computing the country's needs for the next day from various inputs available the day before. As it turns out, the modeling could be made very detailed, but that would lead to a prohibitive computational time to get an answer for the next day's planning (more than a day's worth). The model is therefore simplified to a certain extent, by resorting to greedy algorithms if I recall correctly, to enable quicker predictions. The presentation had much more in it, but it was interesting to see that a set of good algorithms was clearly a money maker for the utility.
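To make that greedy simplification concrete, here is a minimal merit-order dispatch sketch in Python. It only illustrates the general idea and is in no way EDF's actual method: all plant names, capacities, and costs below are invented, and a real unit-commitment model would also handle ramp rates, start-up costs, hydro reservoir levels, and network constraints.

```python
# Minimal merit-order (greedy) dispatch sketch: plants are sorted by
# marginal cost and loaded up to capacity until the forecast demand
# for each hour is met. All plant data are invented for illustration.

from dataclasses import dataclass

@dataclass
class Plant:
    name: str
    capacity_mw: float    # maximum output in MW
    marginal_cost: float  # EUR per MWh (illustrative)

def greedy_dispatch(plants, demand_mw):
    """Return a {plant name: output in MW} schedule for one hour of demand."""
    schedule = {}
    remaining = demand_mw
    # Greedy step: load the cheapest plants first (the "merit order").
    for plant in sorted(plants, key=lambda p: p.marginal_cost):
        output = min(plant.capacity_mw, remaining)
        schedule[plant.name] = output
        remaining -= output
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError(f"demand exceeds total capacity by {remaining} MW")
    return schedule

# Hypothetical fleet and a short demand forecast (MW), both invented.
fleet = [
    Plant("nuclear-1", 1500, 10.0),
    Plant("hydro-1", 800, 5.0),
    Plant("gas-1", 600, 60.0),
]
forecast = [2100, 1900, 2500]  # three sample hours of a day-ahead forecast

for hour, demand in enumerate(forecast):
    print(hour, greedy_dispatch(fleet, demand))
```

The appeal of a greedy scheme in this setting is exactly the trade-off mentioned in the talk: it gives up optimality guarantees in exchange for an answer that arrives well before the day it is meant to schedule.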
The third presentation was by Giovanni Bruna, who talked about the problem of extracting meaningful information from a set of experiments and computations in the case of plutonium use in nuclear reactors. He spent the better part of the presentation going through a nuclear engineering 101 class that featured a good introduction to the subject. Plutonium is a by-product of the consumption of uranium in a nuclear reactor. In fact, after an 18-month cycle, more than 30 percent of the power of an original uranium rod is produced by the plutonium created over that period. After some time in the core, the rod is retrieved so that it can be reprocessed, which raises the issue of how plutonium can be reused in a material called MOX (at least in France; in the U.S., a policy of no reprocessing is the law of the land). It turns out that plutonium differs from uranium because of its high epithermal cross section, which yields a harder neutron spectrum than the one found with uranium. The conundrum faced by the safety folks resides in figuring out how the current measurements, and the attendant extrapolation to power levels, can be carried out with confidence when replacing uranium with plutonium. The methods used with uranium have more than 40 years of history behind them; with plutonium, not so much. It turns out to be a difficult endeavor that can only be managed through a constant back-and-forth between well-designed experiments, revisions of the calculation processes, and a heavy use of margins. This example is also fascinating because this type of exercise reveals all the assumptions built into the computational chain, starting from Monte Carlo runs on cold subcritical assemblies all the way to the expected power level found in actual nuclear reactor cores. It is a computational chain because the data from the experiment says nothing directly about the actual variable of interest (here, the power level). As opposed to Riadh's talk, the focus here is on making sure that the mathematical modeling is robust to changes in assumptions about the physics of the system.
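To give a feel for what a computational chain with margins looks like in the abstract, here is a toy Python sketch under heavy assumptions: the two "models" below are arbitrary stand-ins for what are, in reality, Monte Carlo transport and core codes, and every number, including the 5 percent input uncertainty and the 3000 MW nominal power, is invented. The point is only to show how an uncertain physics input propagates through a chain to the variable of interest, and how a margin can be set from the resulting spread.

```python
# Toy "computational chain" with uncertainty propagation: an uncertain
# physics input (a fictitious cross-section correction) is sampled,
# pushed through a chain of stand-in models, and the spread of the end
# quantity (power level) is used to set a margin. Every model and
# number here is invented for illustration only.

import random
import statistics

def subcritical_assembly_model(cross_section):
    # Stand-in for interpreting a cold subcritical-assembly measurement.
    return 1.0 + 0.5 * (cross_section - 1.0)

def core_model(reactivity_factor):
    # Stand-in for extrapolating to hot full-power core conditions.
    return 3000.0 * reactivity_factor  # invented nominal power, MW(th)

random.seed(0)
samples = []
for _ in range(10_000):
    # Assumption: the uncertain input is 1.0 +/- 5% (1 sigma), Gaussian.
    xs = random.gauss(1.0, 0.05)
    samples.append(core_model(subcritical_assembly_model(xs)))

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
# A conservative limit in the spirit of "heavy use of margins":
# take the nominal prediction minus a 3-sigma allowance.
print(f"predicted power: {mean:.0f} +/- {sigma:.0f} MW(th)")
print(f"3-sigma lower margin: {mean - 3 * sigma:.0f} MW(th)")
```

Swapping in a different assumed input distribution and watching how much the margin moves is the toy analogue of the robustness question Bruna described: how sensitive the end of the chain is to assumptions made at its start.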
Thanks, Bernard, for hosting the workshop. It was enlightening.