Horizon: The Simulation Framework Overview
Transcript of Horizon: The Simulation Framework Overview
Prof. Eric A. Mehiel
and the Horizon Team
The System Simulation Problem
• There are several existing ways to approach the problem of system-level requirements verification via system simulation:
– MDD/MDO: Varying system design parameters to reach a satisfactory (or optimal) design point
– Process Integration for Design Exploration: Products like Model Center network the various custom design data sources together with output visualization
– Visualization Simulation: STK, FreeFlyer and SOAP are excellent for visualizing the behavior of systems and determining geometric access to targets
What is the Horizon Simulation Framework?
• The Framework is a library of integrated software tools supporting rapid system-of-systems modeling, simulation and utility analysis
• The Framework provides an extensible and well-defined modeling capability concurrent with a scheduling engine to generate operational schedules of Systems-of-Systems with corresponding state data
• As a complement to the Multi-Disciplinary Optimization (MDO) approach, the Framework answers the following question: Does the current design meet system-level requirements that are based on a Use Case and cannot be verified by direct analysis?
Why is Horizon Useful?
• Fills the niche between generalized integration tools and specialized geometric-access and visualization tools
• Can implement subsystem models, CONOPS requirements and Use Case scenarios while producing valid simulation output data
• All subsystem and asset modules created within Horizon are modular
• Allows system modeling at any level of fidelity to support the design process from Conceptual Design through CDR
• Helps find any design bottleneck or leverage points hidden within the system design
The Horizon Team
• California Polytechnic State University, San Luis Obispo (Cal Poly)
– Prof. Eric A. Mehiel – Aerospace Engineering Department
– Prof. John Clements – Computer Science Department
• Current Students
– Derek Seibel
– Brian Butler
– Seth Silva
• Past Students
– Cory O'Connor
– Daniel Treachler
– Travis Lockyer
– Einar Phersen
• Cutting Edge Communications, LLC
– Dave Burnett
– Derek Wilis
The Horizon Design Philosophy
• Simply put, Horizon was designed to be useful and reusable
• In software architecture design, interfaces are key!
• Three guiding principles:
– Modularity
– Flexibility
– Utility
[Diagram: Horizon Simulation Framework – the Main Scheduling Algorithm takes System Parameters and Simulation Parameters as input and produces the Final Schedule and State Data as output; it drives the System Model (a set of Subsystems linked through the Interface between Subsystems) via the Scheduler/System Interface]
The Horizon Design Philosophy: Modularity
• Modularity increases simulation component value and simplifies extension
• Two degrees of Horizon modularity:
– Modularity between the scheduler and the system model
– Modularity between subsystems inside the system model
[Diagram: the framework architecture again, with labeled callouts highlighting the two modular interfaces – the Scheduler/System Interface and the Interface between Subsystems]
The Horizon Design Philosophy: Flexibility
• Enables comprehensive modeling and simulation capability
• Two main degrees of flexibility:
– Flexibility of fidelity
• Capable of simulating systems as simple or complex as the user desires
– Flexibility of system
• Capable of simulating any system (satellites, aircraft, ground vehicles, troops, etc.)
• No preset vehicle or subsystem “types”
[Diagram: the framework architecture again, showing the System Model populated with an arbitrary number of Subsystems]
The Horizon Design Philosophy: Utility
[Diagram: the framework architecture again, with utility elements – dt, F, svd(), eig(), Matrix, Quaternion – supporting the System Model]
• Utility Libraries promote rapid system modeling
• Current Utilities Include:
– Matrix class
– Quaternion class
– Coordinate rotations and transformations
– Singular Value Decomposition
– Eigenvalue/Eigenvector algorithms
– Runge-Kutta 45 integrator
The Horizon Software Architecture Version 1.2
Architecture: The Fundamental Scheduling Elements
• Four fundamental scheduling elements
– Task – The “objective” of each simulation time step. It consists of a target (location), and performance characteristics such as the number of times it is allowed to be done during the simulation, and the type of action required in performing that task.
– State – The state vector storage mechanism of the simulation. The state contains all the information about system state over time and contains a link to its chronological predecessor.
– Event – The basic scheduling element, which consists of a task that is to be performed, the state that data is saved to when performing the task, and information on when the event begins and ends.
– Schedule – Contains an initial state, and the list of subsequent events. The primary output of the framework is a list of final schedules that are possible given the system.
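Since Horizon is a C++ framework, the four elements above can be sketched roughly as the structures below. The field names and the helper function are illustrative guesses at the shape of the data, not the actual Horizon classes:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Minimal sketch of the four scheduling elements; names are illustrative.
struct Task {
    std::string target;      // location the task points at
    int maxTimesToPerform;   // how often it may be scheduled
    std::string actionType;  // e.g. "IMAGING" or "COMM"
};

struct State {
    std::shared_ptr<State> previous;  // link to chronological predecessor
    // ... time-tagged state variables would live here ...
};

struct Event {
    std::shared_ptr<Task> task;    // what is performed
    std::shared_ptr<State> state;  // where generated data is saved
    double start, end;             // when the event begins and ends
};

struct Schedule {
    std::shared_ptr<State> initialState;
    std::vector<Event> events;     // subsequent events, in time order
};

// Count how many times a task already appears in a schedule, as the
// scheduler must do before re-adding it.
inline int timesPerformed(const Schedule& s, const std::shared_ptr<Task>& t) {
    int count = 0;
    for (const auto& e : s.events)
        if (e.task == t) ++count;
    return count;
}
```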
Architecture: The Main Algorithm
[Flowchart: the main scheduling algorithm – Start at the Simulation Begin Time; Are there too many Schedules? (yes: Crop Schedules); Find Next Old Schedule to Add New Events To; Is the System in the Schedule Currently Performing a Task?; Find Next Task to be Completed; Has that Task been Performed in this Schedule too many Times Already?; Can the System Perform this Task? (yes: Add New Schedule to List); Have all Tasks been Attempted to be Added?; Has Each Old Schedule Been Attempted?; Increment Simulation Current Time; Has the Simulation End Been Reached? (yes: Output Resulting List of Schedules)]
Architecture: The Main Algorithm
• Contains the interface between the main scheduling module and the main system simulation module
• Guides the exhaustive search in discrete time steps and keeps track of the results
• Essentially a call to the main system simulation routine inside a series of nested code loops, with checks to ensure that the schedules that are created meet certain criteria from the simulation parameters
– The outermost loop is a forward-time progression stepping through each simulation time step
• Avoids recursion, where subsystems “reconsider” their previous actions
– Then, it checks to see if it needs to crop the master list of schedules (more on that next slide)
– The inner-most loop attempts to add new tasks onto each current schedule
• Checks that the schedule is finished with its previous event at the current time step
• Checks whether the task can be performed again
• Checks whether the system can perform this combination of schedule and new task
– The “system simulation” step
– Adds state data to the state
• If successful, creates a new event with the new task and state, and adds it to the end of a new schedule copied from the current one
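The nested-loop structure described above can be condensed into the C++ skeleton below. It is a deliberately simplified sketch: the types, the trivial value function used for cropping, and the stand-in `canPerform` callback are illustrative, not the actual Horizon scheduler:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Simplified skeleton of the main loop: forward time steps, a crop check,
// then an attempt to extend every old schedule with every task.
struct SimpleTask { int id; int maxTimes; };
struct SimpleSchedule {
    std::vector<int> taskIds;   // tasks performed so far, in order
    double busyUntil = 0.0;     // end time of the current event
};

using CanPerformFn = std::function<bool(const SimpleSchedule&, const SimpleTask&, double)>;

std::vector<SimpleSchedule> runScheduler(
    const std::vector<SimpleTask>& tasks,
    double tStart, double tEnd, double dt,
    std::size_t maxSched, std::size_t cropTo,
    const CanPerformFn& canPerform)
{
    std::vector<SimpleSchedule> schedules(1);  // start with one empty schedule
    for (double t = tStart; t < tEnd; t += dt) {            // outermost: forward time
        if (schedules.size() > maxSched) {                  // crop check
            // a real value function would rate schedules; here: more tasks = better
            std::sort(schedules.begin(), schedules.end(),
                      [](const SimpleSchedule& a, const SimpleSchedule& b) {
                          return a.taskIds.size() > b.taskIds.size();
                      });
            schedules.resize(cropTo);
        }
        std::vector<SimpleSchedule> newSchedules;
        for (const auto& old : schedules) {                 // each old schedule
            if (old.busyUntil > t) continue;                // still performing a task
            for (const auto& task : tasks) {                // inner-most: each task
                int done = (int)std::count(old.taskIds.begin(),
                                           old.taskIds.end(), task.id);
                if (done >= task.maxTimes) continue;        // performed too often
                if (!canPerform(old, task, t)) continue;    // "system simulation" step
                SimpleSchedule grown = old;                 // copy, then append event
                grown.taskIds.push_back(task.id);
                grown.busyUntil = t + dt;
                newSchedules.push_back(grown);
            }
        }
        schedules.insert(schedules.end(), newSchedules.begin(), newSchedules.end());
    }
    return schedules;
}
```

Note how new schedules are copies of old ones with one event appended, so the empty schedule always survives as a fallback.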
Architecture: The Fundamental Modeling Elements
• Four fundamental modeling elements
– Constraint – A restriction placed on values within the state, together with the list of subsystems that must execute prior to the Boolean evaluation of whether the constraint is satisfied. It is also the main functional system-simulation block: to check whether a task can be completed, the scheduler checks that each constraint is satisfied, indirectly checking the subsystems.
– Subsystem – The basic simulation element of the framework. A subsystem is a simulation element that creates state data and affects either directly or indirectly the ability to perform tasks.
– Dependency – The limited interface allowed between subsystems. In order to keep modularity, subsystems are only allowed to interact with each other through given interfaces. The dependencies specify what data is passed through, and how it is organized. Dependencies collect similar data types from the passing subsystems, convert them to a data type the receiving subsystem is interested in, and then provide access to that data.
– System – A collection of subsystems, constraints, and dependencies that define the thing or things to be simulated, and the environment in which they operate.
Architecture: The Constraint-Checking Cascade
• Primary algorithm when checking whether a system can perform a task
• Internal constraint process:
– Subsystems which contribute state data to the Qualifier are evaluated
– The Qualifier evaluates the validity of the state
– The Constraint fails if a subsystem or the qualifier fails
[Diagram: constraint-checking cascade – a New Task drives Subsystem 1 through Subsystem N in sequence; each Pass feeds the next stage and posts State Data, ending at the Qualifier; any Fail means the Constraint Fails, while passing every check means the Constraint Passes]
Architecture: The Constraint-Checking Cascade
• Constraint-Checking Cascade:
– Constraints are checked in user-specified order, contributing subsystem data to the state while they execute
– The remaining subsystems not needed to evaluate a constraint are then checked
– If any of the checks fail, no event is added to the schedule and the state is discarded
– If all of the checks succeed, the task and state are used in the creation of a new event, which is added to the end of the schedule
• “Fail-fast” constraint methodology
[Diagram: the Scheduler sends a New Task through Constraint 1 … Constraint N and then the Remaining Subsystems; each stage posts State Data, and a Pass through every stage appends a new EVENT to a possible schedule]
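The fail-fast cascade can be sketched in C++ as below. Representing subsystems and qualifiers as plain `bool`-returning callables is a simplification for illustration; the real Horizon objects carry state, tasks and environment data:

```cpp
#include <cassert>
#include <functional>
#include <vector>

// Minimal sketch of the fail-fast cascade: each constraint first runs the
// subsystems that feed its qualifier, then the qualifier itself; the first
// failure aborts the whole check.
struct Cascade {
    struct Constraint {
        std::vector<std::function<bool()>> subsystems;  // must execute first
        std::function<bool()> qualifier;                // evaluates state validity
    };
    std::vector<Constraint> constraints;                // user-specified order
    std::vector<std::function<bool()>> remainingSubsystems;

    bool canPerform() const {
        for (const auto& c : constraints) {
            for (const auto& sub : c.subsystems)
                if (!sub()) return false;     // subsystem failed: fail fast
            if (!c.qualifier()) return false; // qualifier failed: fail fast
        }
        for (const auto& sub : remainingSubsystems)
            if (!sub()) return false;         // subsystems under no constraint
        return true;                          // all checks pass: add the event
    }
};
```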
Architecture: Subsystems
• Subsystems are state transition objects
– Describe how the subsystem goes from its old state to its new state in performing the task
• Inputs
– Old state of the subsystem
– Task to be performed
– Environment to perform it in
– Position of their asset
• CanPerform() is the main execution method
– Code describes the governing equations of the subsystem
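A subsystem's state-transition role can be shown with a toy battery model. The class, its governing equation and all numbers here are illustrative assumptions, not a real Horizon subsystem:

```cpp
#include <cassert>

// Sketch of a subsystem as a state-transition object: canPerform() maps the
// old state to a new state while performing the task.
struct BatteryState { double charge; };   // watt-hours remaining

struct BatterySubsystem {
    double drawWatts;                     // load while performing a task
    // governing equation: charge decreases with power draw over the step
    bool canPerform(const BatteryState& oldState, BatteryState& newState,
                    double stepHours) const {
        newState.charge = oldState.charge - drawWatts * stepHours;
        return newState.charge > 0.0;     // fail if the battery would deplete
    }
};
```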
Architecture: Dependencies
• Dependencies are the interpreters between subsystems
• Example: the Power subsystem depends on the ADCS subsystem, because solar panel power input varies with the panels' incidence angle to the sun vector
– ADCS is only interested in orientation
– Power is only interested in how much power the other subsystems generated
– The dependency function translates the orientation of the spacecraft into how much power the solar panels generate
• Dependencies are structured as they are to avoid “subsystem creep”
– Information about and functions from each subsystem slowly migrate into the other subsystems
– An evolutionary dead-end in simulation frameworks
– Against the tenets of object-oriented programming
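The ADCS-to-Power example above can be sketched as a single translation function. The cosine-law panel model and the 1 kW rating are illustrative assumptions, not the actual Horizon dependency code:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a dependency as an interpreter between subsystems: ADCS exposes
// orientation, Power wants watts, and the dependency converts one into the
// other so neither subsystem learns the other's internals.
double solarPowerFromAttitude(double incidenceAngleRad) {
    const double panelMaxWatts = 1000.0;       // illustrative panel rating
    double c = std::cos(incidenceAngleRad);    // cosine-law incidence loss
    return c > 0.0 ? panelMaxWatts * c : 0.0;  // no power when facing away
}
```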
Architecture: The System State
• State is unique to each event
• All the data generated over the course of the event is stored in its corresponding state
• Storage works like a bulletin board:
– Only changes from previously recorded values are posted
– The most recent saved value of a variable is also its current value
• Many objects have access to the state, including subsystems, constraints, dependencies, data output classes and schedule evaluation functions
[Figure: Power subsystem output over one event – posted values of 300 W at 0.5 s, 270 W at 0.85 s and 500 W at 1.5 s between Event Start and Event End, shown both as a plot of Watts vs. Time and as the corresponding (Time, Watts) table in the State, alongside Other Subsystems]
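The bulletin-board idea can be sketched with an ordered map from time to posted value; the last posting at or before a query time is the value in effect. The class below is illustrative, not the actual Horizon State:

```cpp
#include <cassert>
#include <iterator>
#include <map>

// Sketch of the "bulletin board" state: only changes are posted, each tagged
// with its time, and the most recent posting doubles as the current value.
class SimpleState {
    std::map<double, double> history;   // time -> posted value
public:
    void post(double time, double value) { history[time] = value; }
    // value in effect at `time`: the last posting at or before it
    double valueAt(double time) const {
        auto it = history.upper_bound(time);
        return it == history.begin() ? 0.0 : std::prev(it)->second;
    }
    double current() const { return history.rbegin()->second; }
};
```

A usage matching the figure's Power subsystem values (300 W at 0.5 s, 270 W at 0.85 s, 500 W at 1.5 s) is shown in the assertions below.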
Architecture: Schedule Evaluation and Cropping
• Scheduler attempts to create new schedules by adding each task (in the form of an event) to the end of each schedule in the master list from the previous simulation time step
• Number of possible schedules grows too quickly during a simulation to keep every possible schedule
• When the number of schedules exceeds a simulation parameter (maxSched), the scheduler rates them based on a user-defined “value function” and then keeps only a user-defined number (schedCropTo) of schedules
• Changes the basic scheduler from exhaustive search to a “semi-greedy” algorithm
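The maxSched/schedCropTo pruning step can be sketched as below. The `RatedSchedule` struct stands in for a schedule already rated by the user's value function; names are illustrative:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of schedule cropping: when the master list outgrows maxSched, sort
// the schedules by their user-defined value and keep only the best
// schedCropTo of them -- the "semi-greedy" pruning step.
struct RatedSchedule { double value; };

void cropSchedules(std::vector<RatedSchedule>& schedules,
                   std::size_t maxSched, std::size_t schedCropTo) {
    if (schedules.size() <= maxSched) return;   // still within budget
    std::sort(schedules.begin(), schedules.end(),
              [](const RatedSchedule& a, const RatedSchedule& b) {
                  return a.value > b.value;     // best schedules first
              });
    schedules.resize(schedCropTo);              // discard the rest
}
```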
Horizon 1.2 Runtime Analysis
Horizon 1.2 Parametric Runtime Analysis
• Runtime order: O(D · S²max · n · Tsys)
• D – Target deck size
• Smax – Maximum number of schedules allowed before cropping
• n – Number of time steps in simulation
• Tsys – Mean system execution time
Aeolus: The Horizon Framework Baseline Test Case
Horizon Version 1.2
Aeolus Mission Concept
• Aeolus: the Greek god of wind
• Extreme-weather imaging satellite
• Circular, 1000 km, 35-degree inclined orbit
• Simulation date: August 1st, 2008, for 3 revolutions
• Targets clustered into high-risk areas, including Southeast Asia and the Gulf of Mexico
• Sensor has the ability to generate data while in eclipse
[Figure: world map of the Aeolus ground track, spanning 180 W to 180 E and 75 S to 75 N, marking the Asset Start Pos., Asset End Pos., Ground Stations and Imaging Targets]
Aeolus System Model
• Subsystems
– Access – Generates access windows for different types of tasks
– Attitude Dynamics and Control System – Orients the spacecraft for imaging
– Electro-Optical Sensor – Captures and compresses images when it has access to an imaging target and sends data to the Solid-State Data Recorder
– Solid-State Data Recorder – Stores imagery data until it is sent down to a ground station
– Communications System – Transmits imagery data when it has access to a ground station
– Power – Collects power usage information from the other subsystems, calculates solar panel power generation and depth of discharge of the batteries
• Constraints
– During imaging, no information can be sent to ground stations
– The data recorder cannot store more than 70% of its capacity
– The depth of discharge of the batteries cannot be more than 25%
Aeolus Simulation Results: Power/Data
[Figure: four plots vs. Simulation Time (s), 0–15000 s, with Event Start markers – Buffer Usage (%), Downlink Data Rate (Mb/s), Battery DOD (%) and Generated Solar Panel Power (W)]
Other Test Cases
• Developed
– LongView – a micro-class space-based telescope for K-12 and college educational use
• Proposed
– PolySat 3 and 4 post-launch simulation for received telemetry data trending and analysis
The Horizon Simulation Framework Version 2.0
The Horizon Simulation Framework Version 2.0 Drivers
• Several functional and architectural problems with Version 1.2:
– Not a natively multi-asset simulation framework
– Subsystems had no hierarchical information
– StateVar objects subverted C++ type-checking and opened the user to data-corrupting errors
– Reading and writing to state was difficult, with single-value input and output
Version 2.0 Architecture Changes: SubsystemNodes
• SubsystemNodes solve the problem of subsystems having no hierarchical information
• Adjacency-list hierarchies like those from the Boost library specify chronological predecessors
• Version 2.0 implements this network structure to confirm that previous subsystems have already run
• SubsystemNodes point to the SubsystemNodes they are dependent on, as well as the Subsystem they represent
• There is an added benefit in that multiple SubsystemNodes can point to the same Subsystem
• No circular dependencies allowed!
[Diagram: a dependency network of SubsystemNodes 1–12, with SubsystemNode 10 as the subsystem of interest. Order of execution:
- Sub 10 is called
- Sub 10 recurses to Sub 8
- Sub 8 recurses to Sub 6
- Sub 6 recurses to Sub 1
- Sub 1 executes
- Sub 6 recurses to Sub 2
- Sub 2 executes
- Sub 6 executes
- Sub 8 executes
- Sub 10 recurses to Sub 9
- Sub 9 recurses to Sub 3
- Sub 3 executes
- Sub 9 executes
- Sub 10 executes]
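The recursive execution order in the diagram can be sketched as a depth-first walk over the predecessor adjacency list. This toy `Node` is illustrative (not the actual SubsystemNode class) and assumes an acyclic graph, since Horizon forbids circular dependencies:

```cpp
#include <cassert>
#include <vector>

// Sketch of SubsystemNode execution: each node recurses into the nodes it
// depends on before executing itself, so every chronological predecessor
// has run first.
struct Node {
    int id;
    std::vector<Node*> dependsOn;   // adjacency list of predecessors
    bool hasRun = false;

    void run(std::vector<int>& order) {
        if (hasRun) return;           // shared predecessors run only once
        for (Node* pre : dependsOn)   // recurse to predecessors first
            pre->run(order);
        order.push_back(id);          // then execute this subsystem
        hasRun = true;
    }
};
```

The assertions below reproduce the diagram's subgraph (Sub 10 depends on 8 and 9; 8 on 6; 6 on 1 and 2; 9 on 3) and its execution order.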
Version 2.0 Architecture Changes: Assets
• In order to task multiple assets, Asset objects were created
• An Asset is defined as any actor that has subsystems as members and knowable motion
• Functionally, an asset is an attribute of a SubsystemNode, since it just specifies involvement in a Task and a Position
• Assets can depend on one another by having their involved SubsystemNodes have dependencies on one another
[Diagram: SubsystemNodes 1–12 grouped into Asset 1, Asset 2 and Asset 3, with dependency links crossing the asset boundaries]
Version 2.0 Architecture Changes: SystemSchedules and AssetSchedules
• In Version 1.2, a schedule was a series of events where the system performed a task
• In 2.0, having multiple assets requires the ability to task each asset independently
• Each asset must then have its own schedule (called an AssetSchedule)
– AssetSchedules hold a list of events and an initial state
• The whole system must have a unique schedule (now called a SystemSchedule)
– SystemSchedules hold a list of AssetSchedules
[Diagram: a systemSchedule composed of assetSchedule 1 through assetSchedule n, each holding an Initial State followed by its own Events along the Simulation Time axis]
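The schedule layering can be sketched as two nested containers. The structs and the `lastEventEnd()` helper are illustrative, not the actual Horizon 2.0 classes:

```cpp
#include <cassert>
#include <vector>

// Sketch of the 2.0 schedule layering: each asset keeps its own event list,
// and a systemSchedule is just the collection of per-asset schedules.
struct AssetEvent { double start, end; };

struct AssetSchedule {
    double initialStateTime = 0.0;
    std::vector<AssetEvent> events;
};

struct SystemSchedule {
    std::vector<AssetSchedule> assetSchedules;   // one per asset
    // the system-wide end of activity is the latest event end over all assets
    double lastEventEnd() const {
        double latest = 0.0;
        for (const auto& as : assetSchedules)
            for (const auto& e : as.events)
                if (e.end > latest) latest = e.end;
        return latest;
    }
};
```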
Version 2.0 Architecture Changes: Scheduling Changes
• In Version 2.0 it does not make sense to require that each asset be able to perform the same task, or that each asset must perform a task at all
• Instead, at each simulation time step, at least one of the assets must be able to start a new task in order to create a new systemSchedule
• All the combinations of assets performing each of the tasks are created for each old schedule
• One serious implication: assets that are not scheduled must extend their previous events further in a “cruise mode”
– Called the canExtend() function
– Simply checks that, given nominal operation of a system in its current environment, it can continue to pass the system's constraints and the subsystems' requirements
Version 2.0 Architecture Changes: States and Profiles
• Previously, retrieving incorrect data was possible given a relatively common user error
• Something type-safe was needed at compile time to let the user know they are asking for the wrong data type
• State now contains vectors of maps of Profiles of different types
– Profile is a templated class
– All access to the State is done by setting and retrieving Profiles
• Two main benefits:
– Profile has mathematical methods for extremely common tasks, reducing modeling time significantly (50-90% in tests)
– Profiles and StateVarKeys (the keys used to store and retrieve values) are both templated
• Specifying a Profile return type when looking up a variable that is incompatible with the StateVarKey type causes a compiler error
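The templated-key idea can be sketched as below: because the key carries the stored type as a template parameter, requesting a Profile of a mismatched type simply fails to compile. The classes are a simplified illustration of the concept, not the actual Horizon 2.0 Profile or StateVarKey:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

// Sketch of type-safe state access via templated keys.
template <typename T>
struct StateVarKey {
    std::string name;   // the key names a variable and carries its type T
};

template <typename T>
class Profile {
    std::map<double, T> data;   // time-ordered values of one variable
public:
    void set(double time, const T& value) { data[time] = value; }
    T lastValue() const { return data.rbegin()->second; }
    std::size_t size() const { return data.size(); }
};

template <typename T>
class TypedState {
    std::map<std::string, Profile<T>> profiles;
public:
    // only a key of matching type T can reach this member; a mismatched
    // Profile request is a compile-time error rather than a runtime surprise
    Profile<T>& get(const StateVarKey<T>& key) { return profiles[key.name]; }
};
```

For example, `TypedState<double>::get` cannot be called with a `StateVarKey<int>`; the compiler rejects it before any data can be corrupted.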
Horizon 2.0 Runtime Analysis
Horizon 2.0 Parametric Runtime Order Equation
• Theoretical, since the algorithm did not change significantly
– The number of possible tasks changes to the number of combinations of assets and tasks
• An access pre-generation algorithm (not included in the Thesis) is currently being added before the version freeze
– Parametric evaluations done before this addition would be invalid
• Runtime order: O(D^A · S²max · n · Tsys)
• D – Target deck size
• A – Number of assets
• Smax – Maximum number of schedules allowed before cropping
• n – Number of time steps in simulation
• Tsys – Mean system execution time
The Aeolus Constellation: The Horizon Framework Multi-Asset Baseline Test Case
Horizon Version 2.0
The Aeolus Constellation: Implementation
• Two assets were created, and the constituent SubsystemNodes were duplicated from the same subsystems found in the previous test case
• Constraints and dependencies were changed to use the accessors from the new Profile class
• At heart, the system that was modeled is identical, albeit with the number of assets doubled
• The first asset kept the original asset's orbit
• The second asset was initialized using the same orbital parameters, except the RAAN was rotated 180 degrees
– The assets get better ground-track coverage of targets in one revolution
The Aeolus Constellation: Results
Horizon Conclusions
Horizon 2.0 Strengths and Weaknesses
• Ultimately, the modeling mantra is still true: “The better the model the better the output”
• It is incumbent on the modeler to create an accurate system to simulate in order to create data that provides value to the analyst
• Horizon is capable of producing useful data given relatively simple subsystem models
• Four main points concerning software design and architecture:
– Highly Variable Degrees of Fidelity (GOOD)
– Easy Access to State Data (GOOD)
– No GUI (BAD)
– Model Creation is Complex (BAD)
Future Plans
• Genetic Algorithm for Schedule Generation
• Dependency Integration into SubsystemNode
• Drag-and-Drop Simulation Creation GUI
• Automatic “Sanity-Checking” for User-Specified Code
• Parallelization
• LVLH Coordinate System Support
• Matrix Templatization
• Error Recording
• Module Library Creation
• “Just Flying Along” Mode