A novel architecture of a Smart Camera Networks tailored to the IoT
Luca Maggiani(*), Gian Marco Iodice(**), Andrea Gassani(**), Claudio Salvadori(*), Andrea Azzarà(*), Roberto Saletti(**), Paolo Pagano(*)
(*) CNIT – National Laboratory of Photonic Networks and TeCIP Institute – Scuola Superiore Sant'Anna, Pisa
(**) Department of Information Engineering - University of Pisa
Workshop on Architecture of Smart Cameras, Sevilla, June 2013
Outline
● Introduction: IoT vs. Smart Camera Network (SCN)
● Architecture of SCN node
● The triangle recognition pipeline
● Implementation and results
The IoT paradigm for massive processing resources
Components:
● Stand-alone Computer Vision algorithm deployed on the Smart Camera Node (FPGA + uC)
● Smart Camera Node addressed using the IoT paradigm
Goals:
● Capability to address a Computer Vision resource using the Internet Protocol through a generic browser
Motivation:
● Massive processing resources (i.e., CV applications) abstracted on an IP-oriented network
Architecture of a SCN node (1/3)
FPGA + microcontroller architecture:
● FPGA + SoftCore: image capture and heavy processing
● uC: network and middleware handling
Architecture of a SCN node (2/3)
The FPGA provides:
1. Camera interface
2. Hardware image pre-processing (streaming paradigm)
3. Software feature extraction and object recognition (on a SoftCore)
Architecture of a SCN node (3/3)
The FPGA extracts image features and sends aggregated data to the microcontroller through an RS232 bus.
The microcontroller implements the resource abstraction on the network (the IoT abstraction) and handles configuration settings.
P. Pagano, C. Salvadori, S. Madeo, M. Petracca, S. Bocchino, D. Alessandrelli, A. Azzarà, M. Ghibaudi, G. Pellerano, and R. Pelliccia, "A Middleware of Things for supporting distributed vision applications", in Proceedings of the 1st Workshop on Smart Cameras for Robotic
Applications (SCaBot), Vilamoura, Algarve, Portugal, October, 2012.
The Internet of Things
● The Internet of Things (IoT) is the future extension of the Internet, which will include a huge number of embedded systems
● In the IoT vision, systems of objects will be discovered and addressed as resources in the network
● An enormous amount of data about the physical world will be accessible
IoT Protocol Stack
● Nodes are constrained in terms of memory, computing power, and network bandwidth.
● Special protocols are needed in order to adapt to these limitations
Protocol stack (top to bottom): CoAP / UDP / 6LoWPAN / IEEE 802.15.4
IoT Protocol Stack
● IEEE 802.15.4: specifies the physical layer and media access control for low-rate wireless personal area networks (low-cost, low-speed, ubiquitous)
○ 10-meter range with a transfer rate of 250 kbit/s
● 6LoWPAN: an adaptation layer that allows IPv6 packets to be transmitted over IEEE 802.15.4 networks
● CoAP: an HTTP-like protocol for creating embedded web services; it extends the REST-based web architecture to the IoT
○ Resource abstraction
○ GET / POST / PUT / DELETE
○ Observe mechanism
Service and Control Room
● VCR: interface with the user (web based)
● SNM: handles communication with the Smart Camera Nodes
Services
● Sensor network monitoring
● Automatic resource discovery/registration
● Data storage
● Event notification
Implementation
● VCR as a web application
● VCR and SNM hosted on the same server
● SQL database
○ holds all the information: hosts, resources, messages, subscriptions, etc.
CoAP Observe
● The client can retrieve a representation of the resource and keep this representation updated over a period of time
● Example: observe resource shape on host A
● Every time the resource shape changes, the CoAP node sends a notification
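The Observe interaction above can be mimicked with a plain observer pattern. This is a conceptual sketch only, assuming nothing about a real CoAP stack: `ObservableResource` and the callback shape are illustrative names, not an actual CoAP API.

```python
# Conceptual sketch of the CoAP Observe pattern.
# ObservableResource is an illustrative class, not a real CoAP library.
class ObservableResource:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self._observers = []            # registered client callbacks

    def observe(self, callback):
        """GET with the Observe option: register the client and
        immediately deliver the current representation."""
        self._observers.append(callback)
        callback(self.name, self.value)

    def update(self, value):
        """Resource change on the node: notify every observer."""
        self.value = value
        for cb in self._observers:
            cb(self.name, self.value)

# Usage: the control room observes resource "shape" on host A.
events = []
shape = ObservableResource("shape")
shape.observe(lambda name, val: events.append((name, val)))
shape.update("triangle(10,20 30,40 50,60)")  # node detects a triangle
```

After the update, the observer has received both the initial representation and the change notification, mirroring the two arrows in the slide's sequence diagram.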
Computer vision pipeline
● Hardware-based elaboration: video capture, line extraction
● Software-based elaboration (SoftCore): line intercept, probabilistic shape recognition
Hough Transform (HT) overview
1. The HT is a computer vision method to detect geometric features, such as lines and curves, in a raw frame
2. The standard HT maps each pixel to several points in the Hough space, representing all the possible lines that could pass through that point
Hough Transform
HT: transformation from pixel space to Hough space
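As a reference point, the standard HT voting loop can be sketched in a few lines of Python. This is a minimal sketch, not the FPGA implementation; `hough_transform` and its parameters are illustrative, using the usual normal-form parameterization ρ = x·cos(ϑ) + y·sin(ϑ).

```python
import numpy as np

def hough_transform(edges, n_theta=180):
    """Standard HT: every nonzero pixel votes for all possible
    lines (theta, rho) that could pass through it."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))         # max |rho|
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                  # every raw pixel
    for x, y in zip(xs, ys):
        for t_idx in range(n_theta):            # full theta sweep
            theta = np.deg2rad(t_idx)
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            acc[rho + diag, t_idx] += 1         # vote in Hough space
    return acc, diag
```

The two nested loops over every pixel and every angle are exactly the "heavy nested cycles" and "massive memory access" drawbacks discussed next.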
Drawbacks of HT
There is no a priori knowledge about the information content of the pixels:
○ It is unknown whether a pixel belongs to a "real line"
■ Every raw pixel has to be processed
○ The slope of any "real line" is unknown
■ Every line of the bundle has to be processed
Implementation consequences:
● Processing based on heavy nested cycles
● Massive memory accesses
In order to optimise the HT, we propose to use as input a "gradient image" derived from an edge detector.
HT evolution: HT Gradient Based (HTGB)
[Block diagram: hardware (HW) gradient extraction feeding the software (SW) HTGB]
HTGB advantages
● Comparable results with respect to the standard HT
● Reduction of both the amount and the complexity of the input data:
○ only the pixels with a gradient magnitude over a certain threshold are considered
■ reduced number of iterations
○ the gradient direction gives an indication of the line slope
■ eases the line retrieval
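The two reductions above can be sketched as follows. This is illustrative Python, not the SoftCore code: the magnitude threshold `mag_thr` and the ±`window` degree sweep around the gradient direction are assumed parameters, not values from the slides.

```python
import numpy as np

def htgb(magnitude, direction, n_theta=180, mag_thr=50.0, window=5):
    """Gradient-based HT sketch: only strong-gradient pixels vote,
    and each votes only for angles near its gradient direction
    (given here in degrees)."""
    h, w = magnitude.shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(magnitude > mag_thr)    # threshold: fewer pixels
    for x, y in zip(xs, ys):
        base = int(direction[y, x]) % 180       # candidate line angle
        for dt in range(-window, window + 1):   # narrow theta sweep
            t_idx = (base + dt) % n_theta
            theta = np.deg2rad(t_idx)
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            acc[rho + diag, t_idx] += 1
    return acc, diag
```

Compared with the standard HT sweep over all 180 angles per pixel, each surviving pixel here casts only 2·window + 1 votes, which is where the iteration reduction comes from.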
Results: HT vs HTGB
[Figure: processing-time comparison between the standard HT and the HTGB with hardware gradient extraction; the HTGB shows increased performance]
Gradient extraction (m, ϑ)
1. Horizontal and vertical kernel
2. Arctangent to compute the slope ϑ
3. Euclidean norm to compute the magnitude m
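The three steps can be sketched with a Sobel-style operator. This is an illustrative Python sketch under the assumption of 3×3 Sobel kernels; the slides do not specify which kernels the hardware actually uses.

```python
import numpy as np

def sobel_gradient(img):
    """Gradient extraction sketch: horizontal/vertical kernels,
    then Euclidean norm (magnitude m) and arctangent (slope theta)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)    # horizontal kernel
    ky = kx.T                                   # vertical kernel
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx[y, x] = np.sum(patch * kx)
            gy[y, x] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)                      # Euclidean norm m
    theta = np.degrees(np.arctan2(gy, gx))      # slope, in degrees
    return mag, theta
```

The (m, ϑ) pair per pixel is exactly the input the HTGB consumes: m drives the threshold, ϑ restricts the angle sweep.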
Line intercept: from lines to a shape
Each line intercept of the HTGB output represents a corner, and every corner can be seen as a triangle vertex.
The algorithm detects a triangle when:
1. it finds a couple of vertices generated by a set of three lines
2. the set of lines defines a closed geometric figure
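The vertex computation can be sketched from the (ϑ, ρ) normal form of each detected line. This is illustrative Python; `intersect` and `triangle_vertices` are hypothetical helper names, not functions from the presented system.

```python
import math

def intersect(line1, line2):
    """Intersection of two lines in normal form
    x*cos(theta) + y*sin(theta) = rho. Returns None if parallel."""
    t1, r1 = line1
    t2, r2 = line2
    det = math.cos(t1) * math.sin(t2) - math.sin(t1) * math.cos(t2)
    if abs(det) < 1e-9:
        return None                        # parallel lines: no vertex
    x = (r1 * math.sin(t2) - r2 * math.sin(t1)) / det
    y = (r2 * math.cos(t1) - r1 * math.cos(t2)) / det
    return x, y

def triangle_vertices(lines):
    """Three lines define a closed figure (a triangle) when every
    pair of them intersects; return the three vertices or None."""
    pairs = [(0, 1), (1, 2), (0, 2)]
    verts = [intersect(lines[a], lines[b]) for a, b in pairs]
    return verts if all(v is not None for v in verts) else None
```

If any pair of lines is parallel, one vertex is missing, the figure is not closed, and no triangle is reported.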
The SoftCore
Altera NIOS II CPU:
● Clock: 100 MHz
● 6-stage pipeline
● 4K+4K instruction/data cache
● 32 MB SDRAM
● Hardware divider/multiplier
● FPU co-processor
Goals (1/2)
Smart Camera Network (e.g., SmartCamera1 at IPv6 address 2001::a:a:ff:fe00:1, SmartCamera2 at 2001::a:a:ff:fe00:22)
○ Every node connected to the SCN is addressable using IPv6
○ The SCN node makes the triangle coordinates available to the network as a CoAP resource (when a triangle is detected)
○ The network can dynamically discover a new node and add it to the Virtual Control Room
Goals (2/2)
It is possible to access the Virtual Control Room using a standard web browser.
● CoAP service runs on a PC and receives events from the border router
● The resources are shown on a simple website
CV pipeline performance
● About 1 fps @ QVGA resolution (hardware pre-processing + SoftCore elaboration)
● Streaming paradigm: data are processed as they appear at the input
○ constant latency of two clock cycles
○ works at the same frame rate as the input
■ the maximum manageable frame rate depends on technological constraints
■ contingent case: 12 fps (~83 ms)
Conclusions
● We propose a Smart Camera Network based on the IoT paradigm:
○ every node can perform massive computer vision algorithms and be an IPv6 Internet peer at the same time
● The 6LoWPAN/CoAP protocol suite makes it possible to abstract every extracted feature as a network resource:
○ the resource can be dynamically discovered and/or periodically observed;
○ it is possible to handle, store, publish, and render information over the Internet (e.g. through web-style portals).
Questions...
Thank you!
Contact:
● Luca Maggiani: [email protected]
● Claudio Salvadori: [email protected]
CNIT – National Laboratory of Photonic Networks and TeCIP Institute - Scuola Superiore Sant'Anna - Pisa, Italy