{{projectinfo|Application|Modeling Radar Signature of Real-Sized Aircraft Using EM.Tempo|ART AIR title.png|In this article, we explore computing the radar cross section (RCS) of electrically large structures, like real-sized aircraft.|*[[Building Geometrical Constructions in CubeCAD | CubeCAD]]
*[[EM.Tempo]]
*Radar Cross Section
*Large Projects
*High Performance Computing
*CAD Model Import
*Plane Wave Source
*Cloud-Based Resources
}}
== Computational Environment ==
The Mirage III CAD model has an approximate length of 15m, a wingspan of 8m, and an approximate height of 4.5m. Expressed in free-space wavelengths at 850 MHz, the approximate dimensions of the aircraft model are 42.5 &#955;<sub>0</sub> x 22.66 &#955;<sub>0</sub> x 12.75 &#955;<sub>0</sub>. Thus, for the purposes of [[EM.Tempo]], we need to solve a region of about 12,279 cubic wavelengths. For problems of this size, a very large CPU memory is needed, and a high-performance, multi-core CPU is desirable to reduce the simulation time.
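These electrical dimensions follow directly from the free-space wavelength at the operating frequency of 850 MHz:

:<math> \lambda_0 = \frac{c}{f} = \frac{3\times 10^8 \ \text{m/s}}{850 \ \text{MHz}} \approx 0.353 \ \text{m}, \qquad \frac{15\ \text{m}}{0.353\ \text{m}} \approx 42.5, \quad \frac{8\ \text{m}}{0.353\ \text{m}} \approx 22.66, \quad \frac{4.5\ \text{m}}{0.353\ \text{m}} \approx 12.75 </math>

and the product of the three electrical dimensions gives the quoted volume of about 12,279 cubic wavelengths.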
[https://aws.amazon.com/ Amazon Web Services] allows one to acquire high-performance compute instances on demand and pay on a per-use basis. To be able to log into an Amazon instance via the Remote Desktop Protocol (RDP), the [[EM.Cube]] license must allow terminal services. For this project, we used a c4.4xlarge instance running Windows Server 2012. This instance has 30 GB of RAM and 16 virtual CPU cores. The CPU for this instance is an Intel Xeon E5-2666 v3 (Haswell) processor.
For the present simulation, we model the entirety of the aircraft, except for the cockpit, as PEC. For the cockpit, we use [[EM.Cube]]'s material database to select one of the several types of glass, with &epsilon;<sub>r</sub> = 6.3 and &sigma; = 0.017 S/m.
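At 850 MHz, these material parameters correspond to a loss tangent of approximately

:<math> \tan\delta = \frac{\sigma}{\omega \epsilon_0 \epsilon_r} = \frac{0.017}{2\pi \times 850\times 10^6 \times 8.854\times 10^{-12} \times 6.3} \approx 0.057 </math>

so the cockpit glass behaves as a penetrable, slightly lossy dielectric rather than a conductor.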
Imported large CAD models oftentimes require additional healing or welding of the geometric structure. For example, there might be small cracks or gaps between different surfaces. [[EM.Tempo]]'s mesh generator is very robust with regard to small model inaccuracies or errors and can resolve and cure most invisible or hardly visible structural defects using a control parameter called the absolute minimum grid spacing. As a result, no additional healing or welding of the imported model is needed for this project.
Since we are computing the radar cross section of a target, we need to introduce a plane wave source. For this example, we will specify an obliquely incident TMz plane wave source with &#952; = 135&deg; and &#966; = 0&deg;, corresponding to the propagation unit vector:
:<math> \hat{k} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} </math>
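This unit vector follows from the incidence angles, assuming the usual spherical-angle convention for the propagation direction:

:<math> \hat{k} = \sin\theta\cos\phi\,\hat{x} + \sin\theta\sin\phi\,\hat{y} + \cos\theta\,\hat{z} \Big|_{\theta = 135^\circ,\ \phi = 0^\circ} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} </math>

so the incident wave travels in the +x direction with a downward (&minus;z) component, illuminating the aircraft obliquely from above.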
[[Image:ff settings.png|thumb|left|150px|Adding an RCS observable for the Mirage project]]
We also introduce an RCS observable with a very fine angular resolution along both the elevation and azimuth directions: &Delta;&theta; = &Delta;&phi; = 1&deg;. Although increasing the angular resolution of the far fields significantly increases the simulation time, it is certainly needed for this project, as the RCS of electrically large structures tends to have very narrow peaks and nulls.
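For reference, a 1&deg; grid over the full sphere amounts to roughly

:<math> \frac{180^\circ}{\Delta\theta} \times \frac{360^\circ}{\Delta\phi} = 180 \times 360 = 64{,}800 </math>

far-field directions, each of which requires its own near-to-far-field transformation, which is why the fine angular resolution noticeably lengthens the far-field computation.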
We also define two field sensors: one with a z-normal horizontal plane underneath the aircraft, and another with a vertical plane along the length of the aircraft, passing through its center line. The near fields are not the prime observable for this project, but they may add useful insight into the simulation without adding too much overhead.
== Mesh Generation & Setting the FDTD Solver Parameters ==
To generate the FDTD Yee mesh of this structure, we use the "Fast Run/Low Memory Settings" preset. This preset sets the minimum mesh density at 15 cells per &#955;<sub>eff</sub> and permits grid adaptation only where necessary. It provides slightly less accuracy than the "High Precision Mesh Settings" preset, but it results in a smaller mesh size, and therefore a shorter run time. At 850 MHz, the resulting FDTD mesh contains about <b><u>270 million</u></b> cells. Using [[EM.Cube]]'s mesh view mode, we can visually inspect the generated mesh, as shown in the figures below.
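As a rough picture of what this mesh density implies, the largest allowed cell dimension is about &#955;<sub>eff</sub>/15, which at 850 MHz corresponds to

:<math> \Delta_{\max} \approx \frac{353\ \text{mm}}{15} \approx 23.5\ \text{mm in free space,} \qquad \Delta_{\max} \approx \frac{353\ \text{mm}}{15\sqrt{6.3}} \approx 9.4\ \text{mm inside the cockpit glass.} </math>

Storing the six field components of roughly 270 million cells in single precision alone requires about 270&times;10<sup>6</sup> &times; 6 &times; 4 bytes &asymp; 6.5 GB, before any auxiliary arrays are counted, which is why a large-memory instance such as the one described above is needed.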
<div><ul> <li style="display: inline-block;"> [[Image:Large struct article mesh settings.png|thumb|left|300px|Mesh settings used for the Mirage project.]]</li><li style="display: inline-block;"> [[Image:Large struct article mesh detail.png|thumb|left|720px|The details of the FDTD mesh near the cockpit region of the aircraft.]] </li></ul></div>
For this simulation, we use most of the default simulation engine settings except for the "Thread Factor". The thread factor setting essentially tells the FDTD engine how many CPU threads to use during [[EM.Tempo]]'s time-marching loop. For a given system, some experimentation may be needed to determine the best number of threads to use. In many cases, using half of the available hardware concurrency works well. This comes from the fact that many modern processors have two cores per memory port. In other words, for many problems, the FDTD solver cannot load and store data from CPU memory quickly enough to keep all the available threads busy. The extra threads remain idle waiting for data, and a performance hit is incurred due to the increased thread context switching. [[EM.Cube]] will attempt to use a version of the FDTD engine optimized for Intel's AVX instruction set, which provides a significant performance boost. If AVX is unavailable, a less optimal version of the engine will be used instead.
[[Image:Engine settings.png|thumb|left|300px|Engine settings used for the Mirage project.]]
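The Thread Factor itself is entered in [[EM.Tempo]]'s simulation engine settings dialog; the snippet below is only an illustrative sketch of the rule of thumb described above and is not part of [[EM.Cube]]:

<pre>
import os

def suggested_fdtd_threads():
    """Heuristic starting point for a memory-bandwidth-bound FDTD time loop:
    use about half of the available hardware threads (roughly one thread per
    physical core), then fine-tune experimentally for the system at hand."""
    hw_threads = os.cpu_count() or 1   # e.g. 16 vCPUs on a c4.4xlarge instance
    return max(1, hw_threads // 2)     # e.g. 8 threads as a first guess

print(suggested_fdtd_threads())
</pre>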
After the sources, observables, and mesh are set up, the simulation is ready to be run. The complete simulation, including mesh generation, time stepping, and far-field calculations, took 5 hours and 50 minutes on the above-mentioned Amazon instance. The average performance of the time-marching loop was about 330 MCells/s. The far-field computation requires a significant portion of the total simulation time. It could have been reduced with larger theta and phi increments, but, as mentioned previously, for electrically large structures, angular resolutions of 1 degree or less are required.
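To put the time-loop throughput in perspective, one complete update of a mesh of about 270 million cells at 330 MCells/s takes roughly

:<math> \frac{270\times 10^{6}\ \text{cells}}{330\times 10^{6}\ \text{cells/s}} \approx 0.8\ \text{s per time step,} </math>

so the thousands of time steps needed for the fields to decay, together with the far-field post-processing, account for the multi-hour run time.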
== Examining the Simulation Results ==
After the simulation is complete, the 3D simulation data associated with the project observables can be visualized from [[EM.Tempo]]'s navigation tree. The near-field distribution maps are shown in the figures below. The standing wave field patterns are clearly visible around the aircraft.
<table><tr><td>[[Image:Large struct article ScreenCapture1.png|thumb|left|500px|Electric field distribution in the horizontal sensor plane underneath the aircraft.]]</td></tr><tr><td>[[Image:Large struct article ScreenCapture2.png|thumb|left|500px|Electric field distribution in the vertical sensor plane passing through the center line of the aircraft.]]</td></tr></table>
The figure below shows the total 3D bistatic RCS pattern of the aircraft:
<table><tr><td>[[Image:Large struct article ScreenCapture3.png|thumb|left|500px|The 3D total RCS pattern of the Mirage model at 850 MHz in dBsm. The aircraft structure is shown in the freeze state.]]</td></tr></table>
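The RCS values in this plot and in the polar graphs below are expressed in dBsm, that is, in decibels relative to a radar cross section of one square meter:

:<math> \sigma_{\text{dBsm}} = 10 \log_{10}\left(\frac{\sigma}{1\ \text{m}^2}\right) </math>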
The figures below show the Cartesian graphs of the bistatic RCS pattern of the aircraft in the three principal coordinate planes:
<table><tr><td>[[Image:RCS XY.png‎|thumb|left|500px|The Cartesian graph of the XY-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr><tr><td>[[Image:Large struct article RCS YZ.png‎|thumb|left|500px|The Cartesian graph of the YZ-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr><tr><td>[[Image:RCS ZX.png‎|thumb|left|500px|The Cartesian graph of the ZX-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr></table>
The figures below show the polar graphs of the bistatic RCS pattern of the aircraft in the three principal coordinate planes:
<table><tr><td>[[Image:RCS XY Polar.png|thumb|left|500px|The polar graph of the XY-cut RCS of the aircraft in dBsm.]]</td></tr><tr><td>[[Image:RCS YZ Polar.png|thumb|left|500px|The polar graph of the YZ-cut RCS of the aircraft in dBsm.]]</td></tr><tr><td>[[Image:RCS ZX Polar.png|thumb|left|500px|The polar graph of the ZX-cut RCS of the aircraft in dBsm.]]</td></tr></table>
<hr>
[[Image:Top_icon.png|30px]] '''[[#Introduction | Back to the Top of the Page]]'''
[[Image:Back_icon.png|30px]] '''[[EM.Cube#EM.Cube Articles & Notes | Check out more Articles & Notes]]'''
[[Image:Back_icon.png|30px]] '''[[EM.Cube | Back to EM.Cube Main Page]]'''