{{projectinfo|Application|Modeling Radar Signature of Real-Sized Aircraft Using EM.Tempo|ART AIR title.png|In this article, we explore computing the radar cross section (RCS) of electrically large structures, like real-sized aircraft.|*[[Building Geometrical Constructions in CubeCAD | CubeCAD]]
*[[EM.Tempo]]
*Radar Cross Section
*High Performance Computing
*CAD Model Import
*Plane Wave Source
*Cloud-Based Resources
}}
== Computational Environment ==
The Mirage III CAD model has an approximate length of 15m, a wingspan of 8m, and an approximate height of 4.5m. Expressed in free-space wavelengths at 850 MHz, the approximate dimensions of the aircraft model are 42.5 &#955;<sub>0</sub> x 22.66 &#955;<sub>0</sub> x 12.75 &#955;<sub>0</sub>. Thus, for the purposes of [[EM.Tempo]], we need to solve a region of about 12,279 cubic wavelengths. For problems of this size, a large amount of CPU memory is needed, and a high-performance, multi-core CPU is desirable to reduce the simulation time.
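The electrical size quoted above can be verified with a quick back-of-the-envelope calculation. The following Python sketch is our own illustration, not part of [[EM.Cube]]; it simply converts the physical dimensions of the CAD model to free-space wavelengths at 850 MHz:

<pre>
# Electrical size of the Mirage III model at 850 MHz.
# Physical dimensions (15 m x 8 m x 4.5 m) are taken from the article.
c0 = 3.0e8                    # speed of light in free space (m/s)
freq = 850.0e6                # simulation frequency (Hz)
lam0 = c0 / freq              # free-space wavelength, about 0.353 m

length, wingspan, height = 15.0, 8.0, 4.5        # bounding box (m)
dims_wl = [d / lam0 for d in (length, wingspan, height)]
print(dims_wl)                # ~ [42.5, 22.66, 12.75] wavelengths

volume_wl = dims_wl[0] * dims_wl[1] * dims_wl[2]
print(round(volume_wl))       # ~ 12,280 cubic wavelengths
</pre>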
[https://aws.amazon.com/ Amazon Web Services] allows one to acquire high-performance compute instances on demand and pay on a per-use basis. To be able to log into an Amazon instance via Remote Desktop Protocol (RDP), the [[EM.Cube]] license must allow terminal services. For this project, we used a c4.4xlarge instance running Windows Server 2012. This instance has 30 GB of RAM and 16 virtual CPU cores. The CPU for this instance is an Intel Xeon E5-2666 v3 (Haswell) processor.
We also define two field sensors: one with a horizontal plane underneath the aircraft, and another with a vertical plane along the length of the aircraft passing through its center line. The near fields are not the prime observable for this project, but they may add useful insight without adding too much overhead to the simulation.
== Mesh Generation & Setting the FDTD Simulation Solver Parameters ==
To generate the FDTD Yee mesh of this structure, we use the "Fast Run/Low Memory Settings" preset. This preset sets the minimum mesh density at 15 cells per &#955;<sub>eff</sub> and permits grid adaptation only where necessary. It provides slightly less accuracy than the "High Precision Mesh Settings" preset, but it results in a smaller mesh size, and therefore a shorter run time. At 850 MHz, the resulting FDTD mesh contains about <b><u>270 million</u></b> cells. With the mesh mode on in [[EM.Cube]], we can visually inspect the mesh.
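To see why the mesh grows to this size, note that at the minimum density of 15 cells per wavelength, the aircraft's bounding box alone accounts for roughly 41 million cells; the padded computational domain around the aircraft and the local grid adaptation account for the rest. A minimal sketch of this estimate (our own illustration, using the electrical dimensions computed earlier):

<pre>
# Lower bound on the FDTD cell count at 15 cells per wavelength.
# Only the aircraft's bounding box is counted here; the actual mesh
# also spans the padded computational domain and is locally refined
# by the adaptive gridder, which raises the total to ~270 million.
cells_per_wl = 15
dims_wl = (42.5, 22.66, 12.75)     # electrical size from above
cells = 1.0
for d in dims_wl:
    cells *= d * cells_per_wl
print(f"{cells / 1e6:.0f} million cells minimum")   # ~ 41 million
</pre>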
<div><ul>
<li style="display: inline-block;"> [[Image:Large struct article mesh detail.png‎|thumb|left|720px|The details of the FDTD mesh near the cockpit region of the aircraft.]] </li>
</ul></div>
For this simulation, we use most of the default simulation engine settings, except for "Thread Factor". The thread factor setting essentially tells the FDTD engine how many CPU threads to use during [[EM.Tempo]]'s time-marching loop. For a given system, some experimentation may be needed to determine the best number of threads to use. In many cases, using half of the available hardware concurrency works well. This comes from the fact that many modern processors have two cores per memory port. In other words, for many problems, the FDTD solver cannot load and store data from CPU memory quickly enough to keep all the available threads busy. The extra threads remain idle waiting for the data, and a performance hit is incurred due to the increased thread context switching. [[EM.Cube]] will attempt to use a version of the FDTD engine optimized for Intel's AVX instruction set, which provides a significant performance boost. If AVX is unavailable, a less optimized version of the engine is used instead.
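As a concrete illustration of the half-concurrency heuristic described above, the short Python sketch below picks a starting thread count on the host machine. The thread factor itself is set in [[EM.Tempo]]'s engine settings dialog; this code is only our own way of computing a reasonable initial value:

<pre>
import os

# Start with half of the available hardware concurrency, since a
# memory-bandwidth-bound FDTD time loop often cannot keep every
# logical core busy.
hw_threads = os.cpu_count() or 1          # e.g. 16 on the c4.4xlarge instance
thread_factor = max(1, hw_threads // 2)   # e.g. 8; refine experimentally
print(f"Suggested starting thread count: {thread_factor}")
</pre>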
[[Image:Engine settings.png|thumb|left|300px|Engine settings used for the Mirage project.]]After the sources, observables, and mesh are set up, the simulation is ready to be run. The complete simulation, including mesh generation, time-stepping, and far field calculations, took 350 minutes on the above-mentioned Amazon instance. The far field computation requires a significant portion of the total simulation time.
== Examining the Simulation Results ==
After the simulation is complete, the 3D data associated with the project observables can be visualized from [[EM.Tempo]]'s navigation tree. The near-field distribution maps are shown in the figures below. The standing wave field patterns are clearly visible around the aircraft.<br clear="all" />
<table><tr><td>[[Image:Large struct article ScreenCapture1.png|thumb|left|500px|Electric field distribution in the horizontal sensor plane underneath the aircraft.]]</td></tr><tr><td>[[Image:Large struct article ScreenCapture2.png|thumb|left|500px|Electric field distribution in the vertical sensor plane passing through the center line of the aircraft.]]</td></tr></table>
The figure below shows the total 3D bistatic RCS pattern of the aircraft:
<table><tr><td>[[Image:Large struct article ScreenCapture3.png|thumb|left|500px|The 3D total RCS pattern of the Mirage model at 850 MHz in dBsm. The aircraft structure is shown in the freeze state.]]</td></tr></table>
The figures below show the Cartesian graphs of the bistatic RCS pattern of the aircraft in the three principal coordinate planes:
<table><tr><td>[[Image:RCS XY.png‎|thumb|left|500px|The Cartesian graph of the XY-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr> <tr><td>[[Image:RCS YZ.png‎|thumb|left|500px|The Cartesian graph of the YZ-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr> <tr><td>[[Image:RCS ZX.png‎|thumb|left|500px|The Cartesian graph of the ZX-cut RCS of the aircraft in m<sup>2</sup>.]]</td></tr></table>
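Note that the Cartesian graphs above are expressed in m<sup>2</sup>, while the polar graphs below are expressed in dBsm, i.e. decibels relative to 1 m<sup>2</sup>: &#963;<sub>dBsm</sub> = 10 log<sub>10</sub>(&#963; / 1 m<sup>2</sup>). A small conversion sketch of our own:

<pre>
import math

def rcs_to_dbsm(sigma_m2):
    """Convert an RCS value in square meters to dBsm (dB relative to 1 m^2)."""
    return 10.0 * math.log10(sigma_m2)

print(rcs_to_dbsm(1.0))     #  0.0 dBsm
print(rcs_to_dbsm(100.0))   # 20.0 dBsm
</pre>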
The figures below show the polar graphs of the bistatic RCS pattern of the aircraft in the three principal coordinate planes:
<table><tr><td>[[Image:RCS XY Polar.png|thumb|left|500px|The polar graph of the XY-cut RCS of the aircraft in dBsm.]]</td></tr><tr><td>[[Image:RCS YZ Polar.png|thumb|left|500px|The polar graph of the YZ-cut RCS of the aircraft in dBsm.]]</td></tr><tr><td>[[Image:RCS ZX Polar.png|thumb|left|500px|The polar graph of the ZX-cut RCS of the aircraft in dBsm.]]</td></tr></table>
<hr>
<div><ul> <li style="display: inline-block;"> [[Image:Top_icon.png|30px]] '''[[#Introduction | Back to the Top of the Page]]''' </li> <li style="display: inline-block;"> [[Image:Back_icon.png|30px]] '''[[EM.Cube#EM.Cube Articles & Notes | Check out more Articles & Notes]]''' </li> <li style="display: inline-block;"> [[Image:Back_icon.png|30px]] '''[[EM.Cube | Back to EM.Cube Main Page]]''' </li></ul></div>