RIT Senior Design Project 10662
D3 Engineering Camera Platform
Design Review November 6, 2009
Time: Friday November 6, 2009 9:00 am to 11:00 am
Location: RIT Campus. Building 9 Room 4435
Project Team
Gregory Hintz
Samuel Skalicky
Jeremy Greene
Jared Burdick
Michelle Bard
Anthony Perrone
Advisors
Bob Kremens (RIT)
Philip Bryan (RIT)
Scott Reardon (D3 Engineering)
Kevin Kearney (D3 Engineering)
Table of Contents
1 INTRODUCTION
1.1 SUMMARY
1.2 SYSTEM MODEL
1.3 DETAILED SYSTEM MODEL
2 CUSTOMER NEEDS
3 ENGINEERING SPECIFICATIONS
3.1 SYSTEM ENGINEERING SPECIFICATIONS
3.2 SUB-SYSTEM ENGINEERING SPECIFICATIONS
4 FPGA BOARD
4.1 HARDWARE DESCRIPTION PLAN (SOFTWARE DESIGN)
4.1.1 GOAL
4.1.2 DESCRIPTION
4.1.3 FPGA
4.1.4 DSP/OEM
4.1.5 ANALYSIS
4.2 FPGA SYSTEM SPEED ANALYSIS
4.2.1 REASON FOR NOT HAVING ENOUGH ANALYSIS IN THIS AREA
4.2.2 ANALYSIS
4.2.3 CORRELATION
4.2.4 DISCUSSION OF CALCULATIONS
4.2.5 CONCLUSION
5 THE CONNECTOR BOARD AND EXTERNAL INTERFACES
5.1 OVERVIEW
5.2 DESIGN DECISIONS
5.3 CURRENT DESIGN
6 INERTIAL NAVIGATION SYSTEM (INS)
6.1 OVERVIEW
6.2 DETAILS
6.3 SOFTWARE
7 CHASSIS INTERFACES
7.1 NEEDS
7.2 SPECIFICATIONS
7.2.1 AIRCRAFT SPECIFICATIONS
7.2.2 ELECTRONICS SPECIFICATIONS
7.3 DESIGN
8 VIBRATION DAMPING
8.1 NEEDS
8.2 CONSIDERATIONS
8.2.1 FREQUENCIES OF AIRCRAFT
8.2.2 ALLOWABLE VIBRATION IN IMAGE
8.2.3 COMPONENT RESONANT FREQUENCIES
8.3 APPROACH
8.4 CHASSIS DESIGN
8.4.1 PHASE 1: INDIVIDUAL COMPARTMENTS
8.4.2 PHASE 2: ASSURE COMPONENT SCALE
8.4.3 PHASE 3: DETAILED DESIGN TO ALLOW FOR REALISTIC THERMAL, VIBRATIONAL, AND SPATIAL ANALYSIS
8.4.4 PHASE 4: FINAL MECHANICAL DESIGN
9 ENVIRONMENTAL MANAGEMENT
9.1 HEAT
9.1.1 MAJOR SOURCES OF HEAT GENERATION INSIDE CHASSIS
9.1.2 HEAT TRANSFER MODELS
9.2 HEAT TRANSFER ANALYSIS, A RADIATION MODEL
9.2.1 ASSUMPTIONS
9.2.2 ANALYSIS
9.2.3 VARIABLES
9.3 HEAT TRANSFER ANALYSIS, A CONDUCTIVE MODEL
9.4 HEAT TRANSFER ANALYSIS, A COMBINED MODE APPROACH
10 OTHER ENVIRONMENTAL CONSIDERATIONS: CONDENSATION
10.1 DEW POINT ANALYSIS
11 MOUNTING
11.1 INTERNAL MOUNTING
11.1.1 ELECTRONICS MOUNTING
11.1.2 OPTICS MOUNTING
11.2 EXTERNAL MOUNTING
12 APPENDIX
12.1 CONNECTOR BOARD SCHEMATIC
12.2 CAMERALINK® TO D3 CHIP SCHEMATIC
1 Introduction
1.1 Summary
The customer, D3 Engineering, asked that we integrate supplied components into an environment-ready, flight-capable package that can record and transmit multi-spectral ground images and associated INS data. This solution should be capable of (if not initially configured for) processing that data in some way, including, but not limited to, compositing images from multiple spectral bands and “stamping” image data with real-time INS data.
1.2 System Model
Figure 1: Black Box model of System
The system will have up to 4 cameras and lenses mounted internally on the bottom side of the enclosure, which feed their data into the enclosure. Processed images with the corresponding INS data will be sent out through the Fast Ethernet port. CameraLink® and Gigabit Ethernet will also act as inputs for external cameras.
1.3 Detailed System Model
The system is divided into main parts for the purposes of design and later reference. The Electronics Enclosure houses all of the electronics. This unit is separate from the camera module to allow for expandability and to help manage the temperature and environmental differences between the two. The Electronics System is made up of the OEM DSP Board, the NovAtel OEM Board, the FPGA Board, the Connector Board, and the external connectors.
The OEM DSP board is a digital signal processing board that the customer has designed. This board
already has the capabilities of basic image processing including compression and resolution
modification, as well as INS integration.
The NovAtel OEM Board is a customer-supplied multi-frequency GNSS receiver. This board processes the incoming GPS signal used to determine the location of the camera module.
The FPGA Board, otherwise referred to as the Processor Board, is where most of the processing and routing will take place. The FPGA will be located on this board along with internal memory; it will manage the signals coming from the cameras and do basic processing on them. The FPGA will act as a switch between all the units of the system and store the necessary data on the SSD SATA hard drive for later access.
Figure 2: System Model of customer supplied parts and the basic configuration of the system.
The connector board acts as a medium between the FPGA board and the external connectors. Major
power regulation will take place on the board along with a conversion between CameraLink® and D3
protocols to allow for easier processing on the FPGA board.
The Camera Enclosure houses up to 4 cameras using the D3 camera protocol. For this project we are only required to test the system with 2, but the customer would like to be able to expand to 4 later.
2 Customer Needs
1. Use Supplied Components
a. 10MP Visual Camera
b. IR Camera
c. 1 of the 2 Inertial Navigation Systems depending on availability
i. NovAtel OEM Board OEMV3
ii. NovAtel OEM Board OEMV2
d. OEM Camera Processing Board
2. Interface to single 10Mpixel Camera through proprietary “D3 Camera” connector.
3. Interface to single Thermal Camera through Camera Link Interface.
4. Capture 10MP data at 1FPS
5. Capture the Thermal Camera data synchronized with the 10Mpixel camera.
6. Capture INS data and store to match corresponding photos.
7. Accept data from auxiliary external cameras and INS units
8. Make data overlay and processing possible on-board
9. Output data from the supplied OEM Board connection for real-time viewing
10. Store data internally during flight using a SSD SATA drive.
11. Package must include mounting and space necessary for four cameras.
12. Package everything (except for the IR camera) to protect it against the environment and to
minimize the size.
“Everything” Includes:
a. (4) visual cameras and their lenses
b. (1) INS sensor
c. (1) OEM Camera Processing Board
d. Any other components necessary for operation
13. Position images for ground observations
14. Make cameras separable from the processing hardware
15. Interface package to a light passenger aircraft.
3 Engineering Specifications
3.1 System Engineering Specifications
1. Constraints
a. The system shall use the supplied 10MP Visual Band Camera at 1 FPS.
b. The system shall use the supplied CameraLink Camera at 30FPS.
c. The system shall use the supplied Inertial Navigation System.
d. The system shall use the supplied OEM Camera Processing Board.
2. Interfaces
a. The system shall interface with the customer’s proprietary software.
b. The system shall be powered from an external source.
c. The system shall position the cameras with unobstructed line of sight in a direction
perpendicular to the direction of flight on the bottom side of the airplane.
d. The system shall connect to a programming interface for hardware reconfiguration.
e. The system shall connect to two external cameras and one external INS module.
3. Physical
a. The system shall not exceed 8” x 6” x 7.5” tall.
b. The system shall weigh no more than 15lbs.
c. The camera enclosure must be removable from the electronics.
4. Environmental
a. The system shall operate in the following environment:

   Temperature           -50°C to 45°C
   Humidity              90% or less
   Altitude              10,000 ft (3048 m)
   Shock and Vibration   Per RTCA DO-160

b. The system shall limit EMI emission per the MIL-STD-810G standard.
5. Configurability
a. The system shall enable configuration of the camera interface.
b. The system shall enable configuration of image compression.
c. The system shall enable configuration of the INS interfaces.
6. Capacity
a. The system shall store up to 20 minutes' worth of image data.
b. The system shall be able to house 4 cameras at a time.
7. Processing
a. The system shall store the raw data from the cameras and the INS data to allow for access before the next mission.
b. The system shall process the images and output the data to the SSD hard drive within 10 seconds of the picture being taken.
c. The system shall transmit the low-resolution images out of the OEM Board's 10/100 connector within 10 seconds of the pictures being taken.
3.2 Sub-System Engineering Specifications
1. Package
a. Given the environmental conditions defined in the System Engineering Specifications, the packaging shall maintain the following internal environment for the electronic components:

   Temperature           0°C to 70°C
   Humidity              < 60%
   Shock and Vibration   Per RTCA DO-160

b. The packaging shall contain EMI per MIL-STD-810G.
c. The packaging shall not degrade the optical performance of the cameras.
d. The packaging shall enable replacement of any component within 10 minutes, given a trained user, without custom tools.
e. The packaging shall have the following connectors available externally:
i. 10/100 Connector
ii. Gigabit Ethernet Connector (x2)
iii. CameraLink Connector (x2)
iv. Power Connector (TBD)
v. DB-9 Connector
vi. RCA Video Out
f. The packaging without electronics installed shall weigh no more than 10 lbs.
g. The external packaging shall not exceed 16” x 6.5” x 5”. (Length x Width x Height)
h. The packaging shall mount fixed to a flat plate.
2. Processor Board
a. The Processor Board must be reconfigurable for multiple different operations by a technical expert. Different operations include, but are not limited to:
i. More than 2 inputs used.
ii. Overlay of the camera inputs with the corresponding INS data.
iii. Change in the rate at which inputs are selected.
b. The processor board shall not exceed 5” x 6” x height TBD.
c. The processor board shall weigh no more than 2 lbs.
d. Interfaces
i. The processor board shall be connected to the connector board by:
1. High-speed header
2. Header for the power
ii. The processor board shall be connected to the following external connectors directly (not from the connector board):
1. Gigabit Ethernet (x2)
iii. The processor board shall be powered by GND, +5V, +12V, and +3V.
iv. The processor board shall be securely mounted to the connector board and the OEM board so as not to allow movement between the boards.
v. The Processor Board must be able to communicate with the supplied OEM Camera Processing Board and the onboard storage device simultaneously through high speed connections.
3. Connector Board
a. The Connector Board shall have the following inputs:
i. High speed to Processor Board
ii. CameraLink from external connection
iii. Power input cable from external connection
iv. Power output to Processor Board
b. The connector board shall provide the necessary power for the system by outputting the following voltages from an input voltage ranging from +9 to +36V DC, in order to supply adequate power for the devices in the system:
i. GND
ii. +5V, TBD W
iii. +12V, TBD W
iv. +3V, TBD W
c. The connector board must fit inside the electronics enclosure and shall not exceed 5” x 6” x height TBD.
d. The connector board must not exceed 2 lbs.
4. Storage Unit
a. The storage unit must be a commercially available solution.
b. The storage unit must be upgradable and able to be removed and replaced within 5 minutes, given a trained user, without custom tools.
c. The storage unit must be 250 GB or greater in order to allow enough storage of the required data. (6.a)
d. The storage unit shall weigh no more than 1 lb.
e. The storage unit shall not exceed 4” x 6” x 3”.
f. The storage unit shall be a solid state drive able to withstand the environment of the electronics enclosure.
g. The storage unit shall be connected to the Processor Board using SATA.
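Spec 4.c ties the 250 GB minimum back to the 20-minute capacity requirement (6.a). As a rough sanity check (ours, not part of the customer spec), the raw data rates implied by the speed-analysis section show that 20 minutes of uncompressed data fits easily; the helper names below are our own framing:

```python
# Sketch: check that 20 minutes of raw image data fits on a 250 GB drive.
# Resolutions, bit depths and frame rates come from the speed-analysis
# section of this document; everything else here is illustrative.

def raw_rate_bytes_per_s(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) data rate of one camera, in bytes per second."""
    return width * height * bits_per_pixel * fps / 8

def mission_storage_gb(minutes, rates_bytes_per_s):
    """Total raw storage for a mission of the given length, in GB."""
    return sum(rates_bytes_per_s) * minutes * 60 / 1e9

visual = raw_rate_bytes_per_s(3664, 2748, 12, 1)   # 10 MP visual @ 1 FPS
ir     = raw_rate_bytes_per_s(640, 480, 8, 30)     # IR camera @ 30 FPS

total_gb = mission_storage_gb(20, [visual, ir])
print(f"20 min of raw data: {total_gb:.1f} GB")    # roughly 29 GB
```

Even with all four visual camera slots populated, the raw total stays well under 250 GB, leaving headroom for INS logs and processed copies of the images.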
4 FPGA Board
4.1 Hardware Description Plan (Software design)
4.1.1 Goal
This system will be able to receive data from multiple sources simultaneously and process that data. It
will then save that data to a storage medium and pass the data along to another processor for
compression and real time viewing.
4.1.2 Description
To accomplish this goal the main system is broken up into two main parts: FPGA and DSP. The FPGA will
accept input from various sources and process it in different ways. The DSP will receive input from the
INS device and also images from the FPGA and do some processing on them.
4.1.3 FPGA
The FPGA will have software to describe
specific components. This software has the
potential to be turned into a physical device
(ASIC) in the future to speed up the design
and reduce costs. The major functions of the
FPGA are to accept input, process this data,
and export the data. It will receive data
from three different types of devices: D3
imagers, Gigabit Ethernet enabled cameras,
and a DSP co-processor. The data from these
devices will be controlled with a single main
component that will function mainly as the
Central Dispatch. Data will be exported two
ways, through the DSP and to the hard drive.
The D3 imagers will each have a control module to receive the data in multiple cycles (as related
to the external device, not the internal clock). It will initiate a collection cycle when instructed from the
Central Dispatch. Upon receipt of this data in full, it will pass the data off to an intermediary storage
location (DDR Memory). It will notify the Central Dispatch of its data deposit, and reset to wait for the
next signal to begin data collection from the Central Dispatch.
The Gigabit Ethernet camera controllers will function similarly to the D3 imager
controllers. However, they will have a complex component to handle the translation to/creation of
TCP/IP packets and send this data out at gigabit speeds (1000 Mbps) using a special Ethernet controller.
Figure 3: Software Model of FPGA
The hard drive will communicate at SATA I speeds. This requires a special controller to
convert to serial communication and use specific hardware to keep the data rates high. This component
will receive commands from the Central Dispatch to get data from the memory and store it on the hard
drive.
The image-processing element will be designed as a pipeline to increase efficiency. This will
allow us to replicate the element many times over to either speed up processing or to complete various
levels of processing. This component will receive signals from the Central Dispatch to get data from
memory (DDR) and then process it. Upon completion of processing, it will return this data back to
memory and wait for the next instruction from Central Dispatch.
The last and essential component is the Central Dispatch (CD) that will control the other
components. This element will contain various lists (Queues) of data that either needs to be processed,
to be written to the hard drive or be sent to the DSP (for more processing or compression). It will also, at
regular intervals, prompt the data input components to get data from their devices. These components
will notify the CD of the location of the data acquired and the CD will add this to-do item to its various
lists.
4.1.4 DSP/OEM
The DSP uses some signals similar to those of the D3
imagers; however, it also has some high-speed data
lines. This OEM controller module will be a
two-way communication platform. It will send images
to the DSP for further processing and return, and for
compression before being sent out the Fast Ethernet
connection to a user for real-time viewing. This
module will also serve as the main entry point for spatial
orientation data into the FPGA. Upon receipt the INS
data will be stored directly to the hard drive. Image
data may or may not have already been processed in
the FPGA and thus will either be sent to be processed,
or to be stored on the hard drive.
The co-processor for this design is the DSP, which will perform some functions already designed
by the customer as well as perform new ones. Some of these will include image processing, image
compression, INS integration, and real time video out. The DSP functions more like a CPU with discrete
(hard-wired) components that process instructions run on a kernel (core). The DSP will need to have
functions written for communication with the FPGA (including sending info back and forth) and
specifically what type of processing to do and where the data will go next.
Figure 4: Software Model of DSP
4.1.5 Analysis
Based on the breakdown above, this software package will be implementable in the time allotted for this
project. The customer has acknowledged the large scope of the project and has designated certain levels
of completeness for this area of design. Ideally they would like the entire software design to be
implemented. However, given the time allotted and the vast amount of other work to be completed on
top of this, they have decided to stress the hardware design of this project more. Thus, the minimum
requirement is to pass data through the FPGA to the DSP/OEM Board they have provided.
This minimum requirement will be trivial to implement given the background of the team (see the FPGA
System Speed Analysis section for reference). We as a team will at least lay the foundation for this
software by creating VHDL entities for the various components described above. This will allow other
engineers to come in later and fill in the holes with little overall knowledge of the project, and even
makes outsourcing the remaining work practical.
Based upon the breakdown, two main groups of elements exist. Both of them can be
implemented in parallel and will even be done so in different development environments using differing
languages. For example, the FPGA software design will be implemented in VHDL; the DSP will be written
using the C programming language. This will allow the team to work efficiently and will allow the work to
be completed within the time limits.
The image-processing elements themselves will probably not be implemented, due to time constraints
and to the availability of existing algorithms and IP (intellectual property) cores for these components.
Leaving them open allows multiple types of processing schemes to be implemented and interchanged
based on the need and the planned use of this design. We will implement them as straight-through
entities with no function inside. This will allow us to simulate processing by delaying the data and will
enable us to estimate what types of processes can be implemented, given the time available, while
keeping the images streaming in “real time” at the required rates.
The various IO components will all be based on one “parent” component that encompasses the
majority of the functions in the camera modules. This will again reduce the total amount of time
needed to complete the design. From this parent we will derive “child” components tailored to each
specific type of IO, or add other elements such as Ethernet controllers to interface with other types of
camera inputs.
Using these plans, the design will be able to be completed in the time allotted. Additionally the
customer will be able to configure this design to their specific needs now, and in the future using the
same hardware. This accomplishes the desire of the customer for adaptability and an acceptable
lifetime. This configurability is inherently built into the design of the FPGA and the system
interconnecting the various devices. The design of the software is influenced by this design in hardware.
4.2 FPGA System Speed Analysis
4.2.1 Reason for not having enough analysis in this area
In a normal industrial situation when using an FPGA, all operations are coded, simulated, and
tested prior to choosing an FPGA model. However, due to our time constraints, coding and testing
before designing the circuit board is not feasible. Hence, we are making our best estimate, using the
resources we have to approximate, as closely as possible, the demands our operations will place on this FPGA.
4.2.2 Analysis
Members of this team have worked with FPGAs before and have done research into specific
applications used on them. One such implementation is a neural network. In the past, software
simulations of the human brain's learning capabilities have consumed enormous amounts of computing
power. More recently, designs modeled on an artificial network of brain cells have been used. The
following presents this work and explains how the design of our system will be fast enough to handle
the operational data rates.
Traditionally, the term neural network had been used to refer to a network or circuit of
biological neurons. The modern usage of the term
often refers to artificial neural networks, which are
composed of artificial neurons or nodes. Artificial
neural networks are made up of interconnecting
artificial neurons (programming constructs that mimic
the properties of biological neurons). These networks
may either be used to gain an understanding of their
biological counterpart, or for solving artificial
intelligence problems without necessarily creating a
model of a real biological system. The real, biological
nervous system is highly complex and includes some
features that may seem superfluous based on an
understanding of artificial networks. The cognitive
modeling field involves the physical or mathematical
modeling of the behavior of neural systems; ranging from the individual neural level (e.g. modeling the
spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modeling the
release and effects of dopamine in the basal ganglia) to the complete organism (e.g. behavioral
modeling of the organism's response to stimuli). For more detailed info about neural networks please
see external sources such as Wikipedia or GCCIS Faculty.
The design shown in Figure 5 was implemented on a Digilent Basys Spartan 3E-100
development board. It currently performs the function of XOR; however, it has no heuristic coding to
help it out. Instead it uses the theory touched on above to learn acceptable and unacceptable responses
to input. This is not a simple design. The outputs of this network fall within 10% of the goal values for
"high" and "low". These values map onto digital logic values in hardware and can be used as such.
The results of the implementation in VHDL using the Xilinx WebPack ISE are shown below in Table 1.

Figure 5: Neural network Model
Table 2: Data Speeds

   Node       Levels   Time
   Data In      29     13 ns
   Data Out      2      5 ns
Looking at the data from Table 1, we can see how few resources this design took up: a little less
than 19% overall. This is not much considering the complexity of the design and the simplicity of the
Spartan 3E-100 FPGA. For example, the last row of Table 1 shows that only 4 built-in, hardware-optimized
18x18-bit multipliers exist in this device, and all are used. This design clearly does a substantial amount
of math to calculate the weights on the connections between neurons, so additional ALUs must be built
from general-purpose slices, as can be seen in the slice usage of about 20%.
This design was very fast and was able to process changes in inputs very quickly. We can see that 29
levels of logic needed to be traversed from the input to the end of the processing pipeline. However,
this only takes 13 ns, from which we can calculate a frequency of 76.92 MHz. This number implies that
we can handle about 76 million changes of inputs per second on each pin that feeds this logic design.
From that point in the design to the output is only 2 levels and takes 5 ns (a rate of 200 MHz). This is
quite speedy on a device that was released as the low-price, slowest device in the product line in 2005.
Not many people are still using computers considered "hot items" in 2005 (think first dual-core
processors, Celeron D, ...). If the internal logic were simpler, or more deeply pipelined, we would be able
to reduce these delays further in the final design.
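The delay-to-frequency arithmetic used in these figures is simple enough to write down; a one-line helper (our name, not a design artifact) reproduces both numbers:

```python
# Sketch: convert a critical-path propagation delay into the maximum rate
# at which that path can toggle. 1 ns = 1e-9 s, so 1000/ns gives MHz.

def max_freq_mhz(path_delay_ns):
    """Maximum toggle rate, in MHz, for a path with the given delay."""
    return 1000.0 / path_delay_ns

print(max_freq_mhz(13))  # input to end of pipeline (29 levels): ~76.9 MHz
print(max_freq_mhz(5))   # last 2 levels to the output pins: 200 MHz
```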
Table 1: Device Utilization Table for Spartan 3E-100 (total resources used: ~19%)

   Logic Utilization          Used   Available   Utilization
   Slice Latches                22       1,920        1%
   Occupied Slices             197         960       20%
   4-input LUTs                360       1,920       18%
     used as Logic             328                   17%
     used as Route-thru         32                    3%
   Bonded IOBs                  10         108        9%
   MULT18X18SIOs                 4           4      100%
4.2.3 Correlation
Unfortunately we have not been able to simulate this design on a Spartan-6 LX75T core, as we
have been having problems getting the Xilinx software set up. However, we can attempt to build a ratio
between the two devices. Table 3 below directly compares the resources that both devices have
available. As you can see, there is a large increase in on-die resources. This does not mean that we can
do the same task with fewer resources, just that the design will occupy a smaller fraction of this model.
We must keep in mind that although there are more resources available, a larger number of resources
will be used to route the data throughout the device. However, this model is built on a newer transistor
process, and thus will be able to run faster, since the distance between individual elements on the chip
is smaller (up to a limit). We can see that the standard clock speed of the Spartan-6 is 2.5 times that of
the tested unit. This will correlate directly with the speed of the device running from input to output.
However, this is not a 1:1 ratio: we cannot claim the design will run at 2.5x the Spartan 3E's speed just
because the clock speed is 2.5x higher. But we can say that the achievable speed for the same
sequential calculations will be higher by some factor above 1x and below 2.5x. There are various factors
to consider in this estimate, including the amount of actual processing that will be done (currently
unknown) and the clock speeds of the other components in the design (memory, SATA, etc.). However,
given the amount of resources available and the known low end of the speed range (250 MHz), we
estimate that the factor will be closer to 2x.
For example, say we are processing a pixel and we need to do X amount of math that takes Y
seconds. Suppose Y is longer than 1/30th of a second (the IR camera's frame rate); then we cannot
process pictures fast enough. We will solve this not by making the FPGA faster or raising the clock
speed, but by parallelizing the math done in X. This reduces the time Y needed to process the pixel.
Now, only so much can be done in parallel, and at that point we will still not be utilizing all of the
resources of this large FPGA. We will solve this next problem by processing multiple pixels
simultaneously: there is no reason we cannot simply copy the image pipeline above (call it A) and create
another one called B. Then, if the pixel data for the next image arrives while pipeline A is not yet done,
we can start the processing in B. This technique is very scalable, so the amount of processing we do is
directly proportional to the number of parallel pipelines we will need to process all of the data in "real
time".
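The replication argument above can be sketched numerically. This is a toy model with illustrative numbers; `pipelines_needed` is our name, and the 0.1 s processing time is an assumption, not a measured figure:

```python
import math

# Sketch: if one pipeline needs process_time_s per frame but frames arrive
# every 1/fps seconds, replicate the pipeline until processing keeps up.

def pipelines_needed(process_time_s, fps):
    """Number of identical pipelines needed to keep up with the frame rate."""
    frames_in_flight = process_time_s * fps
    # Epsilon guards against floating-point round-up (e.g. 0.1 * 30 -> 3.0000000000000004).
    return max(1, math.ceil(frames_in_flight - 1e-9))

# e.g. a hypothetical 0.1 s of processing per frame at the IR camera's
# 30 FPS would need 3 copies of the pipeline running in parallel
print(pipelines_needed(0.1, 30))
```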
Table 3: Comparing Spartan Models

   Resource Type              Spartan 3E-100   Spartan-6 LX75T   As % of 3E-100
   Slices                              960          11,662            1215%
   LUTs                              1,920          46,648            2430%
   Latch/FFs                         1,920          93,296            4859%
   User I/O                            108             296             274%
   Diff. Pairs                          40             148             370%
   18x18SIO/DSP48A slices                4             132            3300%
   Functional Clock Speed          100 MHz         250 MHz             250%
   Transistor process                90 nm           45 nm              50%
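The ratio column can be recomputed from the raw resource counts (a quick cross-check of ours; note in particular that 296 User I/O pins is about 274% of 108, i.e. under a 3x increase):

```python
# Sketch: recompute Table 3's ratio column from the raw device resource
# counts (Spartan-6 LX75T vs. Spartan 3E-100), rounded to whole percent.

def pct_of(spartan6, spartan3e):
    """Spartan-6 resource count as a percentage of the Spartan 3E-100's."""
    return round(spartan6 / spartan3e * 100)

print(pct_of(11662, 960))   # Slices
print(pct_of(46648, 1920))  # LUTs
print(pct_of(296, 108))     # User I/O
print(pct_of(250, 100))     # functional clock speed
```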
Initially we will just use the FPGA as a large and super-fast MUX. This will allow us to connect
multiple cameras to the OEM board. The complexity of this logic is much less than that of the neural
network simulated above. This implies that the delay from input pin to output pin will be less (how
much less is irrelevant for this analysis, since even 76 MHz more than meets our needs), and
here's why:
4.2.3.1 Visual Camera
10MP image size = 3664 x 2748 = 10,068,672 pixels
1 pixel = 12 bits of data (width of interface is 16 bits, so this works and gets passed in 1 clock cycle)
Clock cycles / pixel = 1
Number of images / second = 1
Speed = cycles / pixel * pixels * images / second = 1 * 10,068,672 * 1
Total required speed to get data in = 10,068,672Hz (or 10.07MHz)
4.2.3.2 IR Camera
Image size = 640 x 480 = 307,200 pixels
1 pixel = 8 bits of data (width of interface is 16 bits, so this works and gets passed in 1 clock cycle, max)
Clock cycles / pixel = 1
Number of images / second = 30
Speed = cycles / pixel * pixels * images / second = 1 * 307,200 * 30 = 9,216,000
Total required speed to get data in = 9,216,000Hz (or 9.2MHz)
4.2.3.3 INS Unit
Total size of data / capture = unknown
Total size of data / second = 1 kB
RS-232 rate of device = unknown (serial, so 1 bit / cycle)
# of captures / second = 30 (same as fastest image rate)
Data received / capture = #bits / #captures = 8000 / 30 ≈ 267 bits ≈ 34 bytes of data
Speed = 8000 bits / second = 8000 baud
**Note: Because the INS uses the RS-232 standard, rates are given in baud, i.e. gross bit rate expressed in
bits/second.
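The three input-rate calculations above can be cross-checked in a few lines. The resolutions, bit widths and frame rates come from the text; the helper function and its name are our framing:

```python
# Sketch: required input clock rates for each data source, assuming one
# pixel is transferred per clock cycle over the 16-bit-wide interface.

def pixel_clock_hz(width, height, fps, cycles_per_pixel=1):
    """Clock rate needed to bring one camera's pixels in as they arrive."""
    return width * height * fps * cycles_per_pixel

visual_hz = pixel_clock_hz(3664, 2748, 1)   # 10 MP visual camera @ 1 FPS
ir_hz     = pixel_clock_hz(640, 480, 30)    # IR camera @ 30 FPS
ins_baud  = 8000                            # INS: 1 kB/s over RS-232

print(visual_hz)  # 10,068,672 -> ~10.07 MHz
print(ir_hz)      # 9,216,000  -> ~9.2 MHz
print(ins_baud)   # 8000 baud
```

Both camera rates sit near 10 MHz, far below the ~76 MHz the neural-network test showed even a low-end Spartan 3E can sustain.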
4.2.4 Discussion of Calculations
The calculations above show that the maximum rate for any type of camera connected to this
system will be about 10 MHz. From this we can say that, yes, the image data will be able to be received
in “real time” without causing any slowdowns in the system. Now, you may be thinking that if we use 6
cameras and each camera runs at 10 MHz, that is 60 MHz to capture all the data. But remember that the
FPGA can do more than one thing at once: we can design capture components for each camera, each
individually running at 76 MHz. To be sure, all of this data will have to be funneled into the same places
(DDR, SSD, OEM board). However, once the data is inside the FPGA, things run much faster, as shown by
the calculated 200 MHz speed from the FPGA to the external pins. This speed is over and above what we
would need for 6 cameras (60 MHz).
4.2.5 Conclusion
From the calculations above, the FPGA is able to handle getting the data in. Using the strategies
above we can appropriately parallel process to get all of the data in and processed successfully in the
required time. The internal speeds of the FPGA allow plenty of time to organize the data and send it out
to the various devices.
5 The Connector Board and External Interfaces
5.1 Overview
The Connector Board will provide the electrical hardware interface for several of the system’s key
requirements, namely the Inertial Measurement Unit (IMU) connector, the two Camera Link camera
connectors and the primary system power supply connector. To facilitate the Camera Link cameras,
circuitry will be included to convert from the Camera Link data format to the D3 Imager interface.
Additionally, three of the six voltage levels needed by the system will be derived from the power supply
using voltage regulator and monitoring circuitry.
The system specification provided by the customer instructed that two Gigabit Ethernet (GigE)
connectors, one 10/100 Ethernet connector and one RCA output connector should be included, in
addition to those already listed.
5.2 Design Decisions
Initially, there was some confusion over where each of the respective connectors would be mounted, with
the original specification implying that all external connectors would be mounted on the Connector
Board directly, although it was not clear how practical or necessary such an arrangement would be.
After some deliberation, the decision was reached that the GigE, 10/100 Ethernet and RCA interfaces
need not be associated with the Connector Board and that only the Camera Link connector should be
mounted on the Connector Board itself. The reasoning for each decision follows:
The hardware driver for the two GigE connectors will be implemented in the FPGA, which was
selected with this specific need in mind, with a facilitating IC chip on the FPGA Board. Since the
hardware interface for GigE is relatively complicated, minimizing the number of transitions from
board to wire, etc., is desirable. Therefore, the GigE connectors will be mounted on the FPGA
Board. The GigE connectors will appear side-by-side, beneath the Camera Link connectors.
The 10/100 Ethernet and the RCA interfaces will be handled by the customer-provided OEM
Board, which has integrated support and connectors for both. Passing these interfaces from the
OEM Board to the FPGA Board and then to the Connector Board would waste board space to no
benefit. Therefore, a direct link from the OEM Board to panel-mounted 10/100 Ethernet and
RCA connectors will be used.
The IMU connector will be panel mounted in consideration of the large size of the connector
and the limited space on the Connector Board – a DB-17 connector with integrated coaxial lines
was found to be the most readily available option that satisfied both the needs for RS-232
support and for a coaxial data line. The various data lines will connect to a board header on the
Connector Board with a direct link to the FPGA Board. Since the IMU interface uses the RS-232
standard to receive and respond to commands, a Null Modem configuration will be
implemented on the Connector Board, to allow for proper communication.
The two Camera Link connectors will be mounted directly onto the Connector Board, in part
because they will require non-trivial format conversion circuitry to operate with the FPGA and
OEM Board, but also because both will fit conveniently given the system size limitations. The
Camera Link to D3 Imager format conversion circuitry will be placed on the Connector Board for
three reasons: to save space on the FPGA Board; to limit design complexity, since Camera Link
uses differential signaling, which would complicate transfer from the Connector Board to the
FPGA Board; and to minimize the number of data lines that must be transferred from the
Connector Board to the FPGA Board, as the D3 Imager format requires fewer data lines than
Camera Link does.
The power supply connector will be panel mounted with a direct connection to the Connector
Board via a board header. To avoid interference from the circuitry on the FPGA Board, the
incoming power (9 to 36 V) will be switched down to 12 V, 5 V and 3.3 V on the Connector
Board. These three voltage lines will be linked to the FPGA Board, where they will be further
dropped to 2.5 V, 1.8 V and 1.2 V. In addition to the interference concerns, placing the voltage
regulators and monitors for some of the voltage lines on the Connector Board saves space on
the FPGA Board.
5.3 Current Design
After determining the features and needs of the Connector Board, the block diagram in Figure 6 was
developed to illustrate the design in an easily digestible form.
The block diagram was followed by development of a schematic for the Connector Board (Appendix A,
Figure A1), which was itself derived from a circuit provided by the customer (Appendix A, Figure A2),
which converts from the Camera Link format to the D3 Imager format. The circuitry for the power
regulator is discussed in greater detail in its respective portion of this document.
In addition to the schematic, once all major Connector Board elements were determined (integrated
circuits, connectors, etc.), an accurately sized "scarecrow" diagram was drawn to provide a realistic
estimate of the minimum board size necessary for the Connector Board (Figure 7).
Figure 6: Block diagram of the Connector Board.
Figure 7: Accurately sized "scarecrow" diagram of the Connector Board.
6 Inertial Navigation System (INS)
6.1 Overview
An INS combines location and orientation data retrieved from a Global Navigation Satellite System
(GNSS) and an Inertial Measurement Unit (IMU). The best known GNSS is the U.S.’s Global Positioning
System (GPS), although the Russian GLONASS system is also operational and several other systems are in
development. An IMU is a local device that determines what direction the device is facing and at what
speed it is moving.
6.2 Details
The original customer specification called for a complete INS to be implemented, although cost and
availability concerns have scaled the requirement back to supporting just the GNSS, with plans to
include an IMU if one can be acquired that satisfies our needs at a cost acceptable to the customer.
The customer has also specified that the NovAtel OEMV brand of GNSS receivers should be used, with
the most likely choice coming down to either the OEMV-2 (Figure 8) or the OEMV-3 (Figure 9), although
the final decision is ongoing. While both support data transfer using an RS-232 serial communications
bus, in conjunction with a coaxial data line, the power requirements for each are not the same: the
OEMV-2 requires a 3.3 +5%/-3% VDC power supply, whereas the OEMV-3 requires a 4.5 to 18 VDC power
supply. If we do not have a final selection prior to finalizing our designs, both possibilities will need to
be accounted for.
6.3 Software
Software interaction with the OEMV board will be performed using a routine in the customer-supplied
OEM Board Digital Signal Processor (DSP). Communication with the OEMV involves sending commands
over the RS-232 serial communications bus in either ASCII (plain text, verbose), abbreviated ASCII (plain
text, non-verbose) or binary format. Responses are sent from the OEMV board in like format, with some
data being sent over a coaxial data line.
While detailed software specifications have yet to be written, a flowchart has been developed to
illustrate the fundamental routine for interfacing with the OEMV board (Figure 10).
Figure 8: OEMV-2 GNSS receiver board.
Figure 9: OEMV-3 GNSS receiver board.
Figure 10: Basic routine for interfacing with the OEMV GNSS board.
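The command/response flow described above can be sketched in code. This is a hedged illustration only: the abbreviated-ASCII framing (CR/LF-terminated commands, responses prefixed with '<') follows NovAtel's documented convention, but the specific port and log identifiers used below are assumptions, not values taken from this document.

```python
# Sketch of the OEMV command/response flow described above, assuming
# NovAtel's abbreviated-ASCII framing: commands are plain text terminated
# by CR/LF, and abbreviated-ASCII responses begin with '<'. The port and
# log names used in the example are illustrative assumptions.

def build_log_command(port, log_name, trigger="ONTIME", period_s=1):
    """Format an abbreviated-ASCII LOG request for the given serial port."""
    return f"LOG {port} {log_name} {trigger} {period_s}\r\n"

def parse_abbreviated_response(line):
    """Split an abbreviated-ASCII response line into whitespace fields.

    Abbreviated-ASCII responses are prefixed with '<'; anything else is
    treated as an unexpected frame and rejected.
    """
    line = line.strip()
    if not line.startswith("<"):
        raise ValueError("not an abbreviated-ASCII response: %r" % line)
    return line.lstrip("<").split()

# Example: request a (hypothetical) position log once per second on COM1.
cmd = build_log_command("COM1", "BESTPOSA")
```

In the real routine, `cmd` would be written to the RS-232 port by the DSP and each response line fed through the parser, matching the flowchart's send/receive loop.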
7 Chassis Interfaces
7.1 Needs
The chassis must meet two distinct sets of interface criteria. First, it must interface with the electronics
it was designed to enclose: it must house them internally as well as provide for their interface with
external equipment. The customer also requires that the cameras be separable from the processing
electronics. Second, it must interface with two airframes: that of a conventional single-propeller
passenger plane and that of the RIT UAV Airframe "C" design. The former represents the flight
platform that will actually be used by the customer. The latter represents a "loose" constraint; the RIT
airframe is used merely as a means to obtain a tighter size restriction. The customer desires that the
module be "compact", so Airframe "C" is used as the design specification for "compact".
7.2 Specifications
7.2.1 Aircraft Specifications
The specifications required to meet the above stated needs are summarized in the table below.
Table 4: Differences between Aircraft and UAV to consider when designing the chassis
Small Passenger Aircraft | RIT U.A.V. Airframe "C"
Must be mountable to a flat plate | Must be mountable to a flat wooden base
Smaller than a person; approx. 2' x 2' x 5'6" tall | Less than 16" x 6.5" x 5" tall
Less than 150 lbs (68 kg) | Less than 15 lbs (6.8 kg)
Though all efforts will be made to conform to the requirements of Airframe “C”, failure to comply will
not render the device unsuccessful.
7.2.2 Electronics Specifications
The chassis must contain the following electronic connectors on its outside surface:
– 2 Gigabit Ethernet
– 1 10/100 Ethernet
– 2 CameraLink
– 1 DB-9 w/ Integral Coaxial
– 1 RCA Video
– 1 3-Pin Amphenol Power
– 1 Indicator LED
– 1 USB
The chassis must house the following components:
– 4 D3 Cameras with lenses*
– 1 D3 OEM Image Processor Board
– 1 FPGA-Based Controller Board
– 1 Custom-built Connector Board
– 1 NovAtel OEMV-3 GPS Board
– 1 MicroStrain 3DM-series IMU
– 1 2.5" Solid-state hard drive
– Interconnecting wiring for the above
*Lenses are Linos Mevis-C 16mm
7.3 Design
The finalized design of the chassis can be seen in Section 8.4.4. It encloses all of the components
mentioned in Section 7.2.2, dividing them into two sub-sections. The optics sub-section encompasses
the cameras, lenses, and the IMU device, and serves as the base of the device. This larger enclosure
determines the footprint of the device. The electronics sub-section encloses the remainder of the
capture and processing hardware, and sits atop the optics section during normal use. The sub-sections
are separable, and both are capable of being mounted safely when separated.
Overall, the system measures 10.25” long x 6” wide x 6.5” tall and weighs 10.9 pounds. These
specifications meet the requirements of the RIT UAV Airframe “C” payload in every measure but height.
Because the height exceeds the allowable measure by a full 1.5”, it is not practical to attempt to meet
the requirement by optimizing this configuration.
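The dimensional comparison above can be made explicit with a small compliance check against the Airframe "C" limits from Table 4 (a sketch; negative margin means the limit is exceeded).

```python
# Check of the final chassis dimensions against the Airframe "C" payload
# limits stated above (dimensions in inches, weight in pounds).

AIRFRAME_C_LIMIT = {"length": 16.0, "width": 6.5, "height": 5.0, "weight": 15.0}
FINAL_DESIGN = {"length": 10.25, "width": 6.0, "height": 6.5, "weight": 10.9}

def compliance(design, limit):
    """Return {measure: margin}; a negative margin means the limit is exceeded."""
    return {k: limit[k] - design[k] for k in limit}

margins = compliance(FINAL_DESIGN, AIRFRAME_C_LIMIT)
```

Running the check confirms the text: every measure passes except height, which exceeds the allowance by exactly 1.5 inches.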
8 Vibration Damping
8.1 Needs
The device needs to maintain structural integrity as well as take clear pictures while being subjected to
normal aircraft vibrations.
8.2 Considerations
8.2.1 Frequencies of Aircraft
The vibration character of the aircraft that the module will be mounted in is taken to be the vibration
spectrum defined in RTCA DO-160, Section 8. Test category “S” is used, representing a standard, fixed-
wing aircraft during normal operation. The device is assumed to be mounted in Aircraft Zone 2
(“Instrument Panel, Console & Equipment Rack”).
The vibration character is a sinusoidal wave, defined by varying frequencies and peak-to-peak
amplitudes. The vibration spectrum is as follows:
Frequency | Amplitude (peak-to-peak)
5 – 15 Hz | 0.1 in
15 – 55 Hz | 0.01 in
55 – 500 Hz | Linear range: 0.01 in @ 55 Hz to 0.0002 in @ 500 Hz
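The spectrum above can be captured as a piecewise function (a sketch; the 55–500 Hz segment is taken as linear in frequency, which is one reading of "Linear Range" — DO-160 should be consulted for the exact interpolation).

```python
# Piecewise peak-to-peak amplitude (inches) of the vibration spectrum
# tabulated above. Assumption: the 55-500 Hz "linear range" interpolates
# linearly in frequency between 0.01 in at 55 Hz and 0.0002 in at 500 Hz.

def amplitude_pp_in(freq_hz):
    """Peak-to-peak displacement amplitude (in) at a given frequency (Hz)."""
    if 5 <= freq_hz < 15:
        return 0.1
    if 15 <= freq_hz < 55:
        return 0.01
    if 55 <= freq_hz <= 500:
        frac = (freq_hz - 55.0) / (500.0 - 55.0)
        return 0.01 + frac * (0.0002 - 0.01)
    raise ValueError("frequency outside the defined 5-500 Hz spectrum")
```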
8.2.2 Allowable Vibration in Image
The clarity of images is quantified by a quality colloquially known as “smear”, measured in terms of the
number of pixels worth of distance that the aircraft moves while the shutter is open. Smear is a function
of aircraft speed and altitude, the image angle of the lenses in use, and the camera shutter speed. Smear
is desired to be less than half a pixel, and must be less than one pixel.
The following flight parameters represent normal aircraft operation:
Max. Aircraft Speed: 70 knots
Altitude Range: 1000 – 5000 ft.
Lens Focal Length: 25 mm
Lens Image Angle: 38.1°
An additional speed was accounted for due to the vibration. This speed was calculated as the derivative
of the equation of motion of the vibration. Thus, the position equation y(t) = A sin(Ft) yields the
speed equation y'(t) = FA cos(Ft), so the maximum speed due to vibration is FA. For calculation
purposes, A = 0.1 in and F = 500 Hz were used, which translate to a maximum speed of 1.27 m/s.
Calculated smear due to aircraft speed was found to be 0.66 pixels at 1000 ft and 0.13 pixels at 5000 ft.
Smear falls to the desired limit by 1500 ft, where it reaches 0.44 pixels. Smear due to vibration-induced
speeds was found to be an order of magnitude less than that due to aircraft speed: at 1000 ft, smear is
0.023 pixels, reducing to 0.0047 pixels at 5000 ft.
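The smear arithmetic above can be sketched as follows. The vibration-speed convention v = F·A follows the text and reproduces the 1.27 m/s figure; however, the sensor pixel count and shutter time are not stated in this document, so the values used below are illustrative assumptions and the absolute smear numbers are not meant to reproduce the 0.66/0.13-pixel results.

```python
import math

# Sketch of the smear calculation. Ground coverage per pixel follows from
# altitude and the 38.1 deg image angle; smear is the ground distance moved
# during the exposure divided by that coverage. The pixel count and shutter
# time below are NOT from the document -- they are assumptions.

KNOT_TO_MS = 0.514444   # knots -> m/s
FT_TO_M = 0.3048        # feet -> metres
IN_TO_M = 0.0254        # inches -> metres

def vibration_speed_ms(freq_hz, amplitude_in):
    """Maximum vibration-induced speed using the text's v = F*A convention."""
    return freq_hz * amplitude_in * IN_TO_M

def smear_pixels(speed_ms, altitude_ft, image_angle_deg, n_pixels, shutter_s):
    """Pixels of smear: distance travelled during exposure / ground sample."""
    swath_m = 2.0 * altitude_ft * FT_TO_M * math.tan(math.radians(image_angle_deg) / 2)
    ground_sample_m = swath_m / n_pixels
    return speed_ms * shutter_s / ground_sample_m

v_vib = vibration_speed_ms(500, 0.1)   # worst case from the spectrum above
v_air = 70 * KNOT_TO_MS                # max aircraft speed

# Assumed sensor/shutter parameters (not from the document):
s1000 = smear_pixels(v_air, 1000, 38.1, 1024, 0.002)
s5000 = smear_pixels(v_air, 5000, 38.1, 1024, 0.002)
```

Whatever sensor parameters are assumed, the model preserves the document's scaling: smear is inversely proportional to altitude, so the 1000 ft value is exactly five times the 5000 ft value (consistent with 0.66 vs 0.13 pixels).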
8.2.3 Component Resonant Frequencies
Most parts are small and rigid, so their resonant frequencies lie above the range likely to be experienced.
This category should not be forgotten, as more detailed data may reveal otherwise, but it is very low on
the scale of actual risk of damaging the system.
8.3 Approach
Prior to the analysis of Sec. 8.2.2, it was thought that some form of mechanical isolation would
be required in order to eliminate image distortion. However, it has come to light that the
vibration character of the aircraft will not significantly distort image quality.
Figure 11: SolidWorks model of the system, showing the rubber damping mounts, optical
enclosure, and electronics enclosure.
In light of these developments, this design is no longer necessary. Not only are isolating mounts
no longer necessary, but it is possible that they would amplify the vibration the chassis actually
experiences. At present, the design calls for a flat flange to mount directly to a flooring
surface.
8.4 Chassis Design
8.4.1 Phase 1: Individual Compartments
Figure 12: First design phase, showing the individual compartments
8.4.2 Phase 2: Assure Component Scale
Figure 13: SolidWorks model showing dummy solids of the major electrical components
Above: dummy solids of the major electrical components fit snugly in a 5.5" x 5.5" x 3" space. Below, the
four customer-specified lenses fit well into an enclosure of similar cross section.
Figure 14: SolidWorks model showing the camera lenses
8.4.3 Phase 3: Detailed design to allow for realistic thermal, vibrational, and spatial
analysis
Figure 15: SolidWorks model showing the system design
- Vibration dampers mount on the center flange
- Internal grooves allow component mounting to be modular, changeable, and secure
- Stock extruded enclosure reduces build time
- "Stacked" configuration maintains thermal separation with a minimal footprint
8.4.4 Phase 4: Final Mechanical Design
- Vibration damping omitted due to recent analysis
- Custom machined to minimize size
- Separates Optics from Electronics, minimizes footprint
- Internals mounted on a sub-frame, rendering the primary enclosure adaptable to new configurations
and different hardware
9 Environmental Management
9.1 Heat
9.1.1 Major sources of heat generation inside chassis
• Hard drive
o About half of the heat produced comes from this
• Voltage regulator
• FPGA
• DSP
9.1.2 Heat Transfer models
o All models are for steady state
• Radiation
o Model as a black body
– From electronics to chassis
– From chassis to external environment
• Conduction
o From electronics into chassis
– Heat travels through ground planes on boards
– May route heat through standoffs
o From chassis to external environment
– Through the chassis material into the external environment
• Convection
o Negligible
– Minimal (if any) moving air
9.2 Heat Transfer analysis, a radiation model
9.2.1 Assumptions
• Treat the enclosure as a black box radiating heat to the outside air
• Neglect convection: protected from moving air
• Neglect conduction: only connected to the airplane by small vibration dampers
• Temperature at the surface of the chassis = temperature inside the chassis
• All power consumed by the electronics is output as radiated heat
• Emissivity ε = 0.89
• Heat radiating from the chassis is 50% of the heat radiating from the boards (q_chassis = 0.5·q_boards)
9.2.2 Analysis
Black-box radiation, for the chassis radiating to the ambient air and the boards radiating to the chassis:
q_chassis = ε·σ·A_chassis·(T_chassis^4 - T_ambient^4)
q_boards = ε·σ·A_boards·(T_boards^4 - T_chassis^4)
Combined with the assumptions q_boards = P_gen and q_chassis = 0.5·q_boards, this gives:
T_boards^4 = T_ambient^4 + 0.5·P_gen/(ε·σ·A_chassis) + P_gen/(ε·σ·A_boards)
Re-arranged and solved for T_boards:
T_boards = [T_ambient^4 + 0.5·P_gen/(ε·σ·A_chassis) + P_gen/(ε·σ·A_boards)]^(1/4)
9.2.3 Variables
T_chassis – temperature of chassis (K)
T_ambient – temperature of environment outside of chassis (K); T_ambient = (T_ground (°C) - altitude (m) × 6.5/1000) + 273
q_chassis – heat radiating from chassis (W)
q_boards – heat radiating from electronics inside chassis (W)
A_chassis – surface area of chassis (m^2)
A_boards – surface area of electronics (m^2)
σ – Stefan–Boltzmann constant: 5.67 × 10^-8 W·m^-2·K^-4
ε – emissivity of the chassis
(Diagram: board stack inside the chassis wall, showing q_boards, q_chassis, T_boards, T_chassis, and T_ambient.)
From the spreadsheet solution of the above equations (results tabulated below): if much more than 15
watts of heat is generated by the electronics, the electronics will overheat. Having only radiation is a
worst-case scenario, so looking at another model may prove worthwhile.
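The radiation-only model can be implemented directly. The sketch below uses the document's assumptions (ε = 0.89, q_chassis = 0.5·q_boards, all dissipated power leaving as radiation) and the area values from the spreadsheet; it reproduces the tabulated board temperatures, including the roughly 55 °C result for 15 W at a 218 K ambient.

```python
# Radiation-only model from Sec. 9.2: the chassis radiates to ambient and
# the boards radiate to the chassis, with the document's assumptions
# (emissivity 0.89, q_chassis = 0.5 * q_boards, and all electrical power
# leaving as radiated heat).

SIGMA = 5.67e-8          # Stefan-Boltzmann constant (W m^-2 K^-4)
EPS = 0.89               # emissivity of the chassis (from the text)
A_CHASSIS = 0.11320526   # chassis surface area, m^2 (spreadsheet value)
A_BOARDS = 0.03677412    # board surface area, m^2 (spreadsheet value)

def t_boards_c(p_gen_w, t_ambient_k):
    """Board temperature (deg C) for a given dissipation and ambient temp."""
    # Chassis temperature from q_chassis = eps*sigma*A_c*(Tc^4 - Ta^4):
    tc4 = t_ambient_k**4 + (0.5 * p_gen_w) / (EPS * SIGMA * A_CHASSIS)
    # Board temperature from q_boards = eps*sigma*A_b*(Tb^4 - Tc^4):
    tb4 = tc4 + p_gen_w / (EPS * SIGMA * A_BOARDS)
    return tb4 ** 0.25 - 273.15
```

With 15 W and 218 K ambient the model gives about 55.4 °C, matching the spreadsheet row; at 50 W it gives about 154.9 °C, confirming the overheating concern stated above.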
9.3 Heat Transfer analysis, a conductive model
Assume:
• Ts1 = ambient temperature inside the chassis
• Ts2 = ambient temperature outside the chassis
• Treat Egen as if the energy is generated in the wall
• dx = thickness of the wall
According to conservation of energy: Ein + Egen - Eout = Estored (when we assume steady state, Estored = 0).
In our case:
• Egen is all the heat generated by the electronics, which is equal to the power they require
• Ein is all the heat entering the chassis wall
• Eout is all the heat exiting the chassis wall
Mathematically defining the terms:
Egen = I^2·R
Ein = -K·A·dT_s1/dx
Eout = -K·A·dT_s2/dx
Which gives:
-K·A·dT/dx + I^2·R - (-K·A·dT/dx) = 0
I^2·R = -K·A·(Ts2 - Ts1)/dx
Solving for Ts1:
Ts1 = (I^2·R/(K·A))·dx + Ts2
Variables:
I^2·R – power supplied to the electronics (W)
Ts1 – temperature on the inside of the chassis wall
Ts2 – temperature on the outside of the chassis wall
K – thermal conductivity of the wall material
A – total surface area of the chassis
dx – thickness of the chassis walls
From the spreadsheet solution of the above equations (results tabulated below): this model is not
all-encompassing either. Looking at a more detailed combined conduction-and-radiation model may be
valuable.
Radiation model spreadsheet results (board and chassis dimensions, atmospheric conditions, heat transfer):
A_boards (m^2) | A_total (m^2) | T_air (K) | P_gen (W) | T_boards final (°C)
0.03677412 | 0.11320526 | 218 | 50 | 154.9198923
0.03677412 | 0.11320526 | 241.15 | 50 | 158.4558416
0.03677412 | 0.11320526 | 218 | 100 | 231.5786436
0.03677412 | 0.11320526 | 241.15 | 100 | 233.7486141
0.03677412 | 0.11320526 | 218 | 15 | 55.41633105
0.03677412 | 0.11320526 | 241.15 | 15 | 63.06223622
Conduction model spreadsheet results (box dimensions, atmospheric conditions, heat transfer):
A_total (m^2) | s (m) | T_air (K) | k (W/m·K) | P_gen (W) | Ts1, box (K) | Ts1, box (°C)
0.1132 | 0.012 | 218 | 30 | 50 | 218.1766702 | -54.8233298
0.1132 | 0.012 | 318 | 30 | 50 | 318.1766702 | 45.17667024
0.1132 | 0.012 | 218 | 30 | 100 | 218.3533405 | -54.6466595
0.1132 | 0.012 | 318 | 30 | 100 | 318.3533405 | 45.35334047
0.1132 | 0.012 | 218 | 30 | 15 | 218.0530011 | -54.9469989
0.1132 | 0.012 | 318 | 30 | 15 | 318.0530011 | 45.05300107
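The conduction model reduces to a one-line formula. The sketch below uses the spreadsheet's inputs; note that reproducing the tabulated temperatures exactly requires the unrounded area 0.11320526 m² (the chassis area used in the radiation model), which the table displays rounded to 0.1132.

```python
# Conduction-only model from Sec. 9.3: all generated heat conducts through
# the chassis wall, giving Ts1 = (P/(k*A))*dx + Ts2. Defaults are the
# spreadsheet inputs (k = 30 W/m.K, 12 mm wall, unrounded area 0.11320526 m^2).

def t_inside_k(p_gen_w, t_outside_k, k_w_mk=30.0, area_m2=0.11320526, dx_m=0.012):
    """Inside-wall temperature (K) from the 1-D steady-state conduction balance."""
    return (p_gen_w / (k_w_mk * area_m2)) * dx_m + t_outside_k
```

The tiny temperature rises it predicts (fractions of a kelvin) are why this model, unlike the radiation-only one, shows no overheating risk, and why a combined model is worth examining.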
(Diagram: electronics inside the chassis wall, showing Egen, Ein, Eout, Ts1, Ts2, and T_ambient.)
9.4 Heat Transfer analysis, a combined mode approach
Modeling as a thermal circuit where:
Tb - Ta = q·Req, and Req = Σ (R of each element)
The conductive resistances are:
Rc1 = X1/(K1·A1), where X1 = length of the standoffs, K1 = conductivity of the standoffs, A1 = surface area of the standoffs
Rc2 = X2/(K2·A2), where X2 = depth of the mounting plate, K2 = conductivity of the mounting plate, A2 = surface area of the mounting plate
Rc3 = X3/(K3·A3), where X3 = depth of the wall, K3 = conductivity of the wall, A3 = surface area of the wall
The radiation resistances are:
Rr1 = 1/(hr1·Ar1)
Rr2 = 1/(hr2·Ar2)
where Ar1 = surface area of the boards, Ar2 = surface area of the chassis, and
hr1 = ε·σ·(Tb + Tin)·(Tb^2 + Tin^2)
hr2 = ε·σ·(Tb + Ta)·(Tb^2 + Ta^2)
Assuming: Tin = 0.7·Tb, Twall = Tb, ε = 0.89, σ = 5.67 × 10^-8 W·m^-2·K^-4
*I'm not quite sure how to model the radiation without knowing Tb, since this is the value I'm trying to find…
(Diagram: electronics and chassis wall, with the board mounting plate and standoffs between them and radiation paths on either side, from Tb to Ta.)
The equivalent resistance looks like this:
Using the model Tb = q·Req + Ta, we get:
Tb = q·[(X1/(K1·A1))/4 + Rr1 + X2/(K2·A2) + X3/(K3·A3) + Rr2] + Ta
Though I'm not entirely sure how to model it, I expect the solution to be between those of the first two
models…
Sources:
Fundamentals of Heat and Mass Transfer by Incropera et al.
Heat Transfer: A Practical Approach by Yunus A. Cengel
For the lapse rate in the troposphere: www.uwsp.edu and http://en.wikipedia.org/wiki/Tropopause
Emissivity coefficients: http://www.engineeringtoolbox.com/emissivity-coefficients-d_447.html
(Diagram: the equivalent-resistance network from Tb to Ta — standoffs as 4 conductive resistances in
parallel; radiation between the electronics and the board mount; conduction between the board mount
and the chassis wall; and conduction and radiation between the chassis wall and the external
environment.)
10 Other Environmental Considerations: Condensation
10.1 Dew Point analysis
Dew point, the temperature at which water will condense on a surface, is a
function of ambient temperature and relative humidity. Knowing the dew point will tell
whether additional steps should be taken to control temperature and/or humidity inside the
chassis.
Dew point temperature is given as:
Td = Tn·H/(m - H), where H = ln(RH/100) + m·T/(Tn + T)
Variables:
Td – dew point (°C)
T – ambient temperature (°C)
RH – relative humidity (%)
m – temperature-range-dependent constant (non-dimensional)
Tn – temperature-range-dependent constant (°C)
From the spreadsheet solution of the above equation (and the info I currently have): condensation may
be a problem.
There are two main options: including a heater to keep the temperature inside the chassis above the
dew point, or reducing the humidity inside the chassis to lower the dew point (a common method for
doing this is to use silica gel).
Constants:
Temp range (°C) | Tn (°C) | m
0 to 50 | 243.12 | 17.62
-40 to 0 | 272.62 | 22.46
Results:
RH (%) | T_air (°C) | Dew point (°C)
50 | -32.6096 | -38.95817
50 | 47.021848 | 34.011322
1 | -33.978152 | -70.29261
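The Magnus-type formula and range-dependent constants above translate directly into code; this sketch reproduces the tabulated dew points.

```python
import math

# Dew-point calculation from Sec. 10.1 (Magnus-type formula, per the
# Sensirion application note cited below): Td = Tn*H/(m - H), with
# H = ln(RH/100) + m*T/(Tn + T). The (Tn, m) constants depend on the
# temperature range, as tabulated above.

def dew_point_c(t_air_c, rh_percent):
    """Dew point (deg C) from air temperature (deg C) and relative humidity (%)."""
    if t_air_c >= 0:
        tn, m = 243.12, 17.62    # 0 to 50 deg C range
    else:
        tn, m = 272.62, 22.46    # -40 to 0 deg C range
    h = math.log(rh_percent / 100.0) + (m * t_air_c) / (tn + t_air_c)
    return tn * h / (m - h)
```

For example, 50% RH at about 47 °C gives a dew point near 34 °C, matching the second results row above; the chassis interior must stay above that temperature (or be drier) to avoid condensation.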
Comparison of methods:
Criterion | Weight | Heater system: rank | weighted | Silica gel pack: rank | weighted
Effective at reducing/preventing condensation | 5 | 2 | 10 | 2 | 10
Simplicity in manufacturing/implementation | 3 | -1 | -3 | 1 | 3
Allows for flexibility as heat requirements change | 4 | 1 | 4 | 2 | 8
Allows for air/water-tight enclosure | 2 | 2 | 4 | 2 | 4
Total: | | | 21 | | 31
From this comparison, a compact silica gel pack of an appropriate size appears to be the best choice.
Source for dew point info: www.sensirion.com (sensor company application note explaining how to use
sensor readings to calculate dew point).
11 Mounting
11.1 Internal Mounting
Weighing different techniques against each other:
Considerations (heat management techniques) | Weight | Central conductive mounting backbone: rank | weighted | Mount each piece to chassis separately: rank | weighted | Separate optics and electronics packages: rank | weighted | Single package to house all components: rank | weighted
Effective at removing heat | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5
Effective at retaining heat | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 1 | 5
Simplicity in manufacturing | 3 | 2 | 6 | -2 | -6 | -1 | -3 | 1 | 3
Allows for flexibility as heat requirements change | 4 | 1 | 4 | 1 | 4 | 1 | 4 | 0 | 0
Meets temperature needs of specific components | 5 | 0 | 0 | 0 | 0 | 2 | 10 | -1 | -5
Allows for air/water-tight enclosure | 2 | 1 | 2 | 1 | 4 | 1 | 2 | 1 | 4
Total: | | | 22 | | 12 | | 23 | | 12
11.1.1 Electronics mounting
Considerations:
– Relations of components to one another: board-to-board mounting, cable connections, external
connections/locations
– Customer's desire for reconfigurability of hardware components: need for easy access to the FPGA for
reprogramming, need for easy access to the hard drive
– Manufacturability
Beginning concepts: a cantilever shelf, and shelves mounted on a frame of bars.
Beginning refinement:
– Handle on top for ease of access
– Top and bottom plates fit into pre-cut grooves in the chassis walls
– Four stands connect the top and bottom plates
– Electronics connect to the top and bottom plates
(Diagrams: side view, and top view in which the gray portion represents the top/bottom plates.)
Further refinement: the main board stack and SSD are mounted to a frame that slides into the enclosure
via grooves cut into the inner surface.
11.1.2 Optics mounting
Refined concepts:
– Hole in the center of the plate for cables
– Room for 4 cameras and additional space to mount other items (IMU?)
– Side supports fit into inner grooves in the chassis
11.2 External Mounting
Design is still incomplete.
12 Appendix
12.1 Connector Board Schematic
Figure A1: Schematic for the Connector Board. The voltage regulator and monitor, represented by the lowermost symbol in the bottom right-hand corner, are discussed in greater detail in the Power section of this document.
12.2 CameraLink® to D3 Chip Schematic
Figure A2: Customer provided circuit to convert from the Camera Link format to the D3 Imager format used by the FPGA and the OEM Boards.