*Large Projects
*Cloud-Based Resources
In this article, we will compute the bi-static radar cross-section (RCS) of a Dassault Mirage III type fighter aircraft at 850 MHz with [[EM.Tempo]]. Throughout the article, we will discuss a few challenges involved in working with electrically-large models.
{{Note| For an in-depth tutorial related to computing RCS in [[EM.Tempo]], please review [[EM.Tempo Tutorial Lesson 2: Analyzing Scattering From A Sphere]]}}
== Computational Environment ==
The Mirage III has approximate dimensions (length x wingspan x height) of 15 m x 8 m x 4.5 m, or, measured in free-space wavelengths at 850 MHz, about 42.5λ x 22.66λ x 12.75λ. Thus, for the purposes of [[EM.Tempo]], we need to solve a region of about 12,279 cubic wavelengths. For problems of this size, a great deal of CPU memory is needed, and a high-performance, multi-core CPU is desirable to reduce simulation time.
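These figures are easy to verify. The short Python sketch below (illustrative only; it uses the rounded value c ≈ 3×10<sup>8</sup> m/s implied by the article's numbers) computes the free-space wavelength and the electrical size of the domain:

```python
# Electrical size of the simulation domain at 850 MHz.
# Uses c ~ 3e8 m/s, matching the article's rounded figures.

C0 = 3e8                    # approximate speed of light (m/s)
FREQ = 850e6                # operating frequency (Hz)

wavelength = C0 / FREQ      # free-space wavelength (m)

dims_m = (15.0, 8.0, 4.5)   # length, wingspan, height (m), from the article
dims_wl = tuple(d / wavelength for d in dims_m)

volume_wl3 = dims_wl[0] * dims_wl[1] * dims_wl[2]

print(f"lambda = {wavelength:.4f} m")                       # lambda = 0.3529 m
print(f"dims = {dims_wl[0]:.2f} x {dims_wl[1]:.2f} x {dims_wl[2]:.2f} wavelengths")
print(f"domain volume ~ {volume_wl3:.0f} cubic wavelengths")
```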
[https://aws.amazon.com/ Amazon Web Services] allows one to acquire high-performance compute instances on demand and pay on a per-use basis. To be able to log into an Amazon instance via Remote Desktop Protocol, the [[EM.Cube]] license must allow terminal services (for more information, see [http://www.emagtech.com/content/emcube-2016-licensing-purchasing-options EM.Cube Pricing]). For this project, we used a c4.4xlarge instance running Windows Server 2012. This instance has 30 GiB of memory and 16 virtual CPU cores; its CPU is an Intel Xeon E5-2666 v3 (Haswell) processor.
== CAD Model ==
The CAD model used for this simulation was found on [https://grabcad.com/ GrabCAD], an online repository of user-contributed CAD files and models. [[EM.Cube]]'s IGES import was then used to import the model. Once imported, the Mirage is moved to a new PEC material group in [[EM.Tempo]].
<div><ul> <li style="display: inline-block;"> [[Image:glass.png|thumb|left|200px|Selecting glass as cockpit material for the Mirage model.]]</li><li style="display: inline-block;"> [[Image:Mirage image.png|thumb|left|200px|Complete model of Mirage aircraft.]]</li></ul></div>
For the present simulation, we model the entirety of the aircraft, except for the cockpit, as PEC. For the cockpit, we use [[EM.Cube]]'s material database to select a glass of our choosing.
[[Image:ff settings.png|thumb|left|150px|Adding an RCS observable for the Mirage project]]
===Observables===
First, we create an RCS observable with one-degree increments in both the phi and theta directions. Although increasing the angular resolution of the farfield significantly increases simulation time, the RCS of electrically large structures tends to have very narrow peaks and nulls, so this fine resolution is required.
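To get a sense of why fine angular resolution is costly, consider how many far-field directions a one-degree grid implies. The sketch below assumes theta spans 0–180° and phi spans 0–359° (a common full-sphere sampling; EM.Tempo's exact convention may differ):

```python
# Count of far-field directions at 1-degree angular resolution,
# assuming a full-sphere sweep (theta: 0..180 deg, phi: 0..359 deg).

theta_samples = 181           # 0, 1, ..., 180 degrees inclusive
phi_samples = 360             # 0, 1, ..., 359 degrees

directions = theta_samples * phi_samples
print(f"far-field directions to evaluate: {directions}")   # 65160
```

Each of these directions requires a near-to-far-field transformation, which is why the far-field step consumes a significant share of the total run time.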
We also create two field sensors: one with a z-normal underneath the aircraft, and another with an x-normal along the length of the aircraft. The nearfields are not the primary observable for this project, but they may add insight into the simulation and do not add much overhead.
===Planewave Source===
Since we are computing a radar cross-section, we also need to add a planewave source. For this example, we will specify a TMz planewave arriving from θ = 135 degrees, φ = 0 degrees, or:
<math> \hat{k} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} </math>
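The unit vector above follows directly from the standard spherical-to-Cartesian conversion, as this small sketch confirms (illustrative only; the function name is hypothetical):

```python
import math

# Convert plane-wave arrival angles (theta, phi) to a Cartesian unit
# vector using the standard spherical convention:
#   k = (sin(theta)cos(phi), sin(theta)sin(phi), cos(theta))

def khat(theta_deg: float, phi_deg: float) -> tuple:
    t = math.radians(theta_deg)
    p = math.radians(phi_deg)
    return (math.sin(t) * math.cos(p),
            math.sin(t) * math.sin(p),
            math.cos(t))

kx, ky, kz = khat(135.0, 0.0)
print(f"k = ({kx:+.4f}, {ky:+.4f}, {kz:+.4f})")   # k = (+0.7071, +0.0000, -0.7071)
```

For θ = 135°, φ = 0° this reproduces k = (√2/2) x̂ − (√2/2) ẑ, i.e. a wave traveling forward and downward onto the aircraft.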
===Mesh Settings===
For the mesh, we use the "Fast Run/Low Memory Settings" preset. This sets the minimum mesh rate at 15 cells per λ, and permits grid adaptation only where necessary. This preset provides slightly less accuracy than the "High Precision Mesh Settings" preset, but results in smaller meshes, and therefore shorter run times.

At 850 MHz, the resulting FDTD mesh is about 270 million cells. With mesh mode on in [[EM.Cube]], we can visually inspect the mesh.

<div><ul> <li style="display: inline-block;"> [[Image:Large struct article mesh settings.png|thumb|left|300px|Mesh settings used for the Mirage project.]] </li><li style="display: inline-block;"> [[Image:Large struct article mesh detail.png|thumb|left|300px|Mesh detail near the cockpit region of the aircraft.]] </li></ul></div>
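As a rough sanity check on the mesh size, a uniform 15-cells-per-wavelength grid over just the aircraft's bounding box gives a naive lower bound (the actual mesh of roughly 270 million cells is substantially larger, since the grid must also cover padding around the structure and is locally refined by adaptation near curved geometry):

```python
# Naive lower bound on the FDTD cell count: a uniform 15 cells/lambda
# grid over the aircraft's bounding box only.  The real mesh (~270 MCells)
# is much larger due to boundary padding and local grid adaptation.

CELLS_PER_WAVELENGTH = 15
dims_wl = (42.5, 22.66, 12.75)   # bounding box in wavelengths (from the article)

nx, ny, nz = (round(d * CELLS_PER_WAVELENGTH) for d in dims_wl)
lower_bound = nx * ny * nz

print(f"grid: {nx} x {ny} x {nz} -> {lower_bound / 1e6:.1f} MCells (lower bound)")
```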
<br clear="all" />

===Engine Settings===
For the engine settings, we use the default settings, except for "Thread Factor". The "Thread Factor" setting essentially tells the FDTD engine how many CPU threads to use during the time-marching loop.
[[Image:Engine settings.png|thumb|left|300px|Engine settings used for the Mirage project.]]For a given system, some experimentation may be needed to determine the best number of threads to use. In many cases, using half of the available hardware concurrency works well. This comes as a result of there often being two cores per memory port on many modern processors. In other words, for many problems, the FDTD solver cannot load and store data from CPU memory quickly enough to keep all available threads busy. The extra threads sit idle waiting for data, and a performance hit is incurred due to increased thread context switching.
[[EM.Cube]] will attempt to use a version of the FDTD engine optimized for Intel's AVX instruction set, which provides a significant performance boost. If AVX is unavailable, a less optimized version of the engine will be used.
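The "half of hardware concurrency" heuristic described above can be expressed in a few lines of Python (a sketch only; the function name is hypothetical, and the best value on a given machine should still be found by experiment):

```python
import os

# Heuristic from the article: start with roughly half of the available
# hardware concurrency as the FDTD "Thread Factor", since memory bandwidth
# (not core count) usually limits the time-marching loop.

def suggested_thread_factor() -> int:
    logical_cpus = os.cpu_count() or 1   # os.cpu_count() may return None
    return max(1, logical_cpus // 2)

# On the c4.4xlarge instance used here (16 vCPUs), this would suggest 8.
print(f"suggested thread factor: {suggested_thread_factor()}")
```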
After the sources, observables, and mesh are set up, the simulation is ready to be run.
<br clear="all" />
== Simulation ==
The complete simulation, including meshing, time-stepping, and farfield calculation, took 5 hours, 50 minutes on the above-mentioned Amazon instance. The average performance of the timeloop was about 330 MCells/s. The farfield computation requires a significant portion of the total simulation time; it could have been reduced with larger theta and phi increments, but, as mentioned previously, for electrically large structures, resolutions of 1 degree or less are required.
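A quick back-of-the-envelope calculation puts these throughput numbers in perspective: each FDTD time step updates every cell in the grid once, so the time per step follows directly from the mesh size and the sustained update rate (the total number of steps depends on the solver's convergence criteria, which we do not estimate here):

```python
# Back-of-the-envelope: time per FDTD time step, given the mesh size
# and sustained update rate reported for this project.

cells = 270e6            # total mesh cells
rate = 330e6             # sustained cell updates per second (330 MCells/s)

seconds_per_step = cells / rate
print(f"~{seconds_per_step:.2f} s per time step")   # ~0.82 s per time step
```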
After the simulation is complete, we can see the RCS pattern as shown below. We can also plot 2D Cartesian and polar cuts from the Data Manager.
<div><ul> <li style="display: inline-block;"> [[Image:Large struct article ScreenCapture3.png|thumb|left|300px|RCS pattern of the Mirage model at 850 MHz in dBsm.]] </li></ul></div>
<div><ul>
<li style="display: inline-block;"> [[Image:RCS XY.png|thumb|left|300px|XY cut of RCS]]</li>
<li style="display: inline-block;"> [[Image:RCS ZX.png|thumb|left|300px|ZX cut of RCS]]</li>
<li style="display: inline-block;"> [[Image:Large struct article RCS YZ.png|thumb|left|300px|YZ cut of RCS]]</li>
</ul></div>
The nearfield visualizations are also available, as seen below:
<div><ul>
<li style="display: inline-block;"> [[Image:Large struct article ScreenCapture1.png|thumb|left|300px]]</li>
<li style="display: inline-block;"> [[Image:Large struct article ScreenCapture2.png|thumb|left|300px]]</li>
</ul></div>
<div><ul> <li style="display: inline-block;"> [[Image:RCS XY Polar.png|thumb|left|300px|XY cut of RCS in dBsm]]</li><li style="display: inline-block;"> [[Image:RCS ZX Polar.png|thumb|left|300px|ZX cut of RCS in dBsm]] </li><li style="display: inline-block;"> [[Image:RCS YZ Polar.png|thumb|left|300px|YZ cut of RCS in dBsm]]</li></ul></div>