{{projectinfo|Application|Radar Cross-Section of Aircraft|Large struct article ScreenCapture3.png|In this article, we explore computing RCS of electrically large structures, like aircraft.|
*[[EM.Tempo]]
*Radar Cross Section
*Large Projects
}}
== Introduction ==
In this article, we will compute the bi-static radar cross-section (RCS) of a Dassault Mirage III type fighter aircraft at 850 MHz with [[EM.Tempo]]. At 850 MHz, considerable computational resources are required to compute a full-wave result. Throughout the article, we will discuss a few challenges involved in working with electrically-large models. {{Note| For an in-depth tutorial related to computing RCS in [[EM.Tempo]]'s FDTD solver, please review [[EM.Tempo Tutorial Lesson 2: Analyzing Scattering From A Sphere]]}}
== Computational Environment ==
The Mirage III has approximate dimensions (length, wingspan, height) of 15 m x 8 m x 4.5 m, or, measured in terms of the free-space wavelength at 850 MHz, 42.5λ x 22.66λ x 12.75λ. Thus, for the purposes of [[EM.Tempo]], we need to solve a region of about 12,279 cubic wavelengths. For problems of this size, a great deal of CPU memory is needed, and a high-performance, multi-core CPU is desirable to reduce simulation time.
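These electrical dimensions are easy to reproduce with a quick back-of-the-envelope calculation. The Python sketch below is a rough check only, using the approximate bounding-box dimensions quoted above:
<pre>
# Back-of-the-envelope electrical size of the Mirage III bounding box at 850 MHz.
C0 = 299_792_458.0          # speed of light in free space (m/s)
FREQ = 850e6                # simulation frequency (Hz)

length, wingspan, height = 15.0, 8.0, 4.5    # approximate aircraft bounding box (m)

wavelength = C0 / FREQ                        # ~0.353 m at 850 MHz
dims_in_lambda = [d / wavelength for d in (length, wingspan, height)]
volume_in_lambda3 = dims_in_lambda[0] * dims_in_lambda[1] * dims_in_lambda[2]

print(f"Free-space wavelength: {wavelength:.3f} m")
print("Dimensions in wavelengths: " + " x ".join(f"{d:.2f}" for d in dims_in_lambda))
print(f"Bounding-box volume: {volume_in_lambda3:,.0f} cubic wavelengths")
</pre>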
[https://aws.amazon.com/ Amazon Web Services] allows one to acquire high-performance compute instances on demand and pay on a per-use basis. To be able to log into an Amazon instance via Remote Desktop Protocol, the [[EM.Cube]] license must allow terminal services (for more information, see [http://www.emagtech.com/content/emcube-2016-licensing-purchasing-options EM.Cube Pricing]). For this project, we used a c4.4xlarge instance running Windows Server 2012. This instance has 16 virtual CPU cores and 30 GiB of memory. The CPU for this instance is an Intel Xeon E5-2666 v3 (Haswell) processor.
== CAD Model ==
The CAD model used for this simulation was found on [https://grabcad.com/ GrabCAD], an online repository of user-contributed CAD files and models. The model was brought into [[EM.Cube]] using its IGES import. Once imported, the Mirage geometry is moved to a new PEC material group in [[EM.Tempo]].
<div><ul>
<li style="display: inline-block;"> [[Image:glass.png |thumb|left|200px|Selecting glass as cockpit material for the Mirage model.]]</li>
<li style="display: inline-block;"> [[Image:Mirage image.png |thumb|left|200px|Complete model of Mirage aircraft.]]</li>
</ul></div>
For the present simulation, we model the entirety of the aircraft, except for the cockpit, as PEC. For the cockpit, we use [[EM.Cube]]'s material database to select a glass of our choosing.
Since [[EM.Tempo]]'s mesher is very robust with regard to small model inaccuracies or errors, we don't need to perform any additional healing or welding of the model.
<br clear="all" />
== Project Setup ==
===Observables===
[[Image:ff settings.png|thumb|left|150px|Adding an RCS observable for the Mirage project]]
First, we create an RCS observable with one-degree increments in both the theta and phi directions. Although increasing the angular resolution of the farfield significantly increases simulation time, the RCS of electrically large structures tends to have very narrow peaks and nulls, so this resolution is required. We also create two field sensors: one with a z-normal underneath the aircraft, and another with an x-normal along the length of the aircraft. The nearfields are not the primary observable for this project, but they may add insight into the simulation and add little overhead.
===Planewave Source===
Since we are computing a radar cross-section, we also need to add a plane wave source. For this example, we specify a TMz plane wave with θ = 135 degrees and φ = 0 degrees, i.e.:
:<math> \hat{k} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} </math>
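For reference, this unit vector follows from the standard spherical-coordinate relation, assuming (as the value above implies) that θ and φ directly specify the propagation direction:
:<math> \hat{k} = \sin\theta\cos\phi\,\hat{x} + \sin\theta\sin\phi\,\hat{y} + \cos\theta\,\hat{z} = \sin 135^\circ \,\hat{x} + \cos 135^\circ \,\hat{z} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} </math>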
===Mesh Settings===
For the mesh, we use the "Fast Run/Low Memory Settings" preset. This preset sets the minimum mesh density to 15 cells per λ and permits grid adaptation only where necessary. It provides slightly less accuracy than the "High Precision Mesh Settings" preset, but results in smaller meshes and therefore shorter run times.
At 850 MHz, the resulting FDTD mesh is about 270 million cells. With mesh-mode on in [[EM.Cube]], we can visually inspect the mesh.
<div><ul>
<li style="display: inline-block;">
[[Image:Large struct article mesh settings.png |thumb|left|300px|Mesh settings used for the Mirage project.]]</li>
<li style="display: inline-block;"> [[Image:Large struct article mesh detail.png|thumb|left|300px|Mesh detail near the cockpit region of the aircraft.]] </li>
</ul></div>
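As a rough sanity check on the reported mesh size, the sketch below computes the cell count that a uniform grid at the minimum density would give over the aircraft bounding box alone. This is only a floor: grid adaptation around fine geometry and the padding of the computational domain push the actual count up to the roughly 270 million cells quoted above.
<pre>
# Rough lower bound on the FDTD cell count: a uniform grid at the minimum
# mesh density of 15 cells per wavelength over the aircraft bounding box only.
# The actual mesh (~270 million cells) is larger because of grid adaptation
# and the padding added around the structure.
CELLS_PER_LAMBDA = 15
dims_in_lambda = (42.5, 22.66, 12.75)    # bounding box from the earlier estimate

cells = 1.0
for d in dims_in_lambda:
    cells *= d * CELLS_PER_LAMBDA

print(f"Uniform-grid lower bound: {cells / 1e6:.0f} million cells")
</pre>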
===Engine Settings===
For the engine settings, we use the default settings, except for "Thread Factor". The "Thread Factor" setting essentially tells the FDTD engine how many CPU threads to use during the time-marching loop.
[[Image:Engine settings.png|thumb|left|300px|Engine settings used for Mirage project.]]
For a given system, some experimentation may be needed to determine the best number of threads to use. In many cases, using half of the available hardware concurrency works well. This comes as a result of there often being two cores per memory port on many modern processors. In other words, for many problems, the FDTD solver cannot load and store data from CPU memory quickly enough to keep all available threads busy. The extra threads sit idle waiting for data, and a performance hit is incurred due to increased thread context switching.
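As a hypothetical illustration of this rule of thumb (the Thread Factor itself is entered in the engine settings dialog; the snippet only shows how one might compute a starting value from the machine's hardware concurrency):
<pre>
import os

# Rule of thumb from above: start with half of the available hardware
# concurrency, then experiment up or down for the specific system.
hardware_threads = os.cpu_count() or 1        # e.g. 16 on the c4.4xlarge instance
suggested_thread_factor = max(1, hardware_threads // 2)

print(f"Hardware concurrency            : {hardware_threads}")
print(f"Suggested starting Thread Factor: {suggested_thread_factor}")
</pre>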
[[EM.Cube]] will attempt to use a version of the FDTD engine optimized for Intel's AVX instruction set, which provides a significant performance boost. If AVX is unavailable, a less optimized version of the engine will be used.
After the sources, observables, and mesh are set up, the simulation is ready to be run.
<br clear="all" />
== Simulation ==
The complete simulation, including meshing, time-stepping, and farfield calculation, took 5 hours, 50 minutes on the above-mentioned Amazon instance. The average performance of the time loop was about 330 MCells/s. The farfield computation required a significant portion of the total simulation time. It could have been reduced with larger theta and phi increments, but, as mentioned previously, electrically large structures require angular resolutions of 1 degree or less.
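The throughput figure above also implies roughly how long each time step takes. The sketch below works this out; the time-step count used for the total is a hypothetical placeholder, since the actual number of steps is not quoted here:
<pre>
# Time-marching cost implied by the reported throughput.
CELLS = 270e6            # mesh size reported above (cells)
THROUGHPUT = 330e6       # average time-loop performance (cells/s)

seconds_per_step = CELLS / THROUGHPUT
print(f"Approximate wall time per time step: {seconds_per_step:.2f} s")

# Hypothetical example only: the actual step count is not quoted in this article.
N_STEPS = 10_000
print(f"Time loop for {N_STEPS} steps: {seconds_per_step * N_STEPS / 3600:.1f} h")
</pre>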
== Results ==
After the simulation is complete, we can view the RCS pattern, as shown below. We can also plot 2D Cartesian and polar cuts from the Data Manager.
<div><ul>
<li style="display: inline-block;">
[[Image:Large struct article ScreenCapture3.png|thumb|left|300px|RCS pattern of the Mirage model at 850 MHz in dBsm.]]
</li>
</ul></div>
<div><ul> <li style="display: inline-block;"> [[Image:RCS XY.png|thumb|left|300px|XY cut of RCS]]</li><li style="display: inline-block;"> [[Image:RCS ZX.png|thumb|left|300px|ZX cut of RCS]]</li><li style="display: inline-block;"> [[Image:RCS YZ.png|thumb|left|300px|YZ cut of RCS]]</li></ul></div>
The nearfield visualizations are also available, as seen below:
<div><ul> <li style="display: inline-block;"> [[Image:Large struct article ScreenCapture1.png|thumb|left|300px]]</li><li style="display: inline-block;"> [[Image:Large struct article ScreenCapture2.png|thumb|left|300px]]</li></ul></div>
<div><ul> <li style="display: inline-block;"> [[Image:RCS XY Polar.png|thumb|left|300px|XY cut of RCS in dBsm]]</li><li style="display: inline-block;"> [[Image:RCS ZX Polar.png|thumb|left|300px|ZX cut of RCS in dBsm]]</li><li style="display: inline-block;"> [[Image:RCS YZ Polar.png|thumb|left|300px|YZ cut of RCS in dBsm]]</li></ul></div>