Department of Chemistry, University of North Carolina, Chapel Hill, NC, 27599-3290, USA

Dipartimento di Ingegneria dell'Innovazione, Università del Salento, 73100 Lecce, Italy

Abstract

The Bio-inspired (Bi-i) Cellular Vision System is a computing platform that combines sensing, array sensing-processing, and digital signal processing. The platform is based on the Cellular Neural/Nonlinear Network (CNN) paradigm. This article presents the implementation of a novel CNN-based segmentation algorithm on the Bi-i system. Each part of the algorithm, along with its implementation on the hardware platform, is described in detail. The experimental results, carried out for the Foreman and Car-phone video sequences, show the effectiveness of the approach.

1. Introduction

Due to recent advances in communication technologies, interest in video content has increased significantly, and it has become more and more important to automatically analyze and understand video content using computer vision techniques. In this regard, segmentation is essentially the first step in many image analysis and computer vision problems.

Referring to the development of segmentation algorithms running on hardware platforms, this article focuses on the implementation of algorithms running on the Cellular Neural/Nonlinear Network (CNN) Universal Machine.

The article is organized as follows. Section 2 briefly reviews the basic notions of the CNN model and the Bi-i cellular vision architecture. Sections 3 through 5 then describe the segmentation algorithm in detail (see the block diagram in Figure 1): motion detection, edge detection, and object detection. Section 6 discusses the results and Section 7 concludes the article.

**Block diagram of the overall segmentation algorithm.**

2. Cellular Neural/Nonlinear Networks and Bio-Inspired Cellular Vision System

Cellular Neural/Nonlinear Networks represent an information processing system described by nonlinear ordinary differential equations (ODEs). These networks, which are composed of a large number of locally connected analog processing elements (called cells), are described by the following set of ODEs:

$$\dot{x}_{ij}(t) = -x_{ij}(t) + \sum_{kl \in N_r(ij)} A_{ij,kl}\, y_{kl}(t) + \sum_{kl \in N_r(ij)} B_{ij,kl}\, u_{kl} + I_{ij}$$

$$y_{ij}(t) = \frac{1}{2}\left(\left|x_{ij}(t)+1\right| - \left|x_{ij}(t)-1\right|\right)$$

where $x_{ij}$ is the state of cell $(i,j)$, $y_{ij}$ its output, $u_{ij}$ its input, and $I_{ij}$ its bias; $A_{ij,kl}$ is the feedback template and $B_{ij,kl}$ the control template, with $N_r(ij)$ denoting the $r$-neighborhood of cell $(i,j)$.

Since the cells cooperate to solve a given computational task, CNNs have provided in recent years an ideal framework for programmable analog array computing, where the instructions are represented by the templates. This is in fact the basic idea underlying the CNN Universal Machine.
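The template-as-instruction idea can be illustrated in software. The following Python fragment is a minimal simulation sketch (not Bi-i SDK code, whose API is not reproduced here): it integrates the CNN state equation with forward Euler over 3 × 3 neighborhoods, with the template values `A`, `B`, `I` acting as the program of the network.

```python
import numpy as np

def conv3(img, T):
    """Apply a 3x3 CNN template as a sliding-window weighted sum (zero padding)."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1)
    out = np.zeros((h, w))
    for di in range(3):
        for dj in range(3):
            out += T[di, dj] * p[di:di + h, dj:dj + w]
    return out

def cnn_run(u, A, B, I, dt=0.1, steps=200):
    """Forward-Euler integration of the CNN state equation.
    Returns the settled output y = 0.5*(|x+1| - |x-1|)."""
    u = u.astype(float)
    x = np.zeros_like(u)          # zero initial state
    bu = conv3(u, B)              # the input term is constant in time
    for _ in range(steps):
        y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # piecewise-linear output
        x = x + dt * (-x + conv3(y, A) + bu + I)
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))
```

With `A = 0`, `B` the identity template (center 1) and `I = 0`, the state simply relaxes toward the input, which is a convenient sanity check for the integrator.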

Recently, a Bio-inspired (Bi-i) Cellular Vision System has been introduced, which combines an Analogic Cellular Engine (ACE16k) and DSP-type microprocessors.

**The main hardware building blocks of the Bi-i cellular vision system.**

Referring to the Analogic Cellular Engine ACE16k, note that a full description can be found in the cited literature.

Two tools can be used to program the Bi-i Vision System: the analogic macro code (AMC) and the software development kit (SDK). In particular, the AMC language allows the Bi-i Vision System to be programmed for simple analogic routines, whereas more complex programs are developed through the SDK (see the Appendix).

Finally, note that throughout the article the attention is focused on the way the proposed segmentation algorithm is implemented on the Bi-i Cellular Vision System. Namely, each step of the algorithm has been conceived with the aim of fully exploiting the Bi-i capabilities, i.e., the processing based on the ACE16k chip as well as the processing based on the DSP.

3. Motion detection

This section illustrates the motion detection step of the proposed algorithm.

Then, according to Step 2 in Equation 3, positive and negative frame differences are extracted.

Finally, according to Step 3, the binary motion mask is obtained.

**(a)** frame and **(b)** its corresponding mask; **(c)** frame and **(d)** its corresponding mask.
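The three steps above can be sketched as follows. This is a minimal Python illustration of frame differencing, assuming gray-scale frames normalized to [0, 1]; the threshold parameter `thres` is a hypothetical value, not one taken from the article.

```python
import numpy as np

def motion_mask(prev, curr, thres=0.1):
    """Step 1: frame difference between consecutive frames.
    Step 2: extract positive (brightening) and negative (darkening) changes.
    Step 3: combine them into a binary motion mask."""
    diff = curr.astype(float) - prev.astype(float)
    pos = diff > thres        # positive differences
    neg = diff < -thres       # negative differences
    return pos | neg          # moving-region mask
```

Keeping the positive and negative differences separate, as in Step 2, matters on the hardware side: each can be obtained with a single thresholding operation on the array processor.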

4. Edge detection

The proposed edge detection algorithm consists of two phases: preliminary edge detection and final edge detection.

4.1. Preliminary edge detection

The aim of this phase is to locate the edge candidates. The dual window operator is based on a criterion able to localize the mean point within the transition area between two uniform luminance areas. It employs two concentric windows, an outer window of radius R and an inner window of radius r, and produces a matrix **D** whose zeros mark the edge candidates.

In other words, by applying the algorithm (4) to the sample itself and to the four neighboring samples, preliminary edge detection is achieved. In order to effectively implement (4) on the Bi-i, the first step is the computation of the matrix **D**.

**(a)** the matrix **D**.

Going to Step 2, the matrix **D** is processed in order to locate its zeros, which represent the edge candidates.

4.2. Final edge detection

The aim of this phase is to better select the previously detected edges. Referring to the previous section, note that the zeros of the matrix **D** represent the edge candidates.

In order to effectively implement the algorithm (5) on the Bi-i, at first the matrix **D** is processed to obtain the matrix **S**.

The matrix **S** obtained in this way is shown in the figures below for the Foreman and Car-phone sequences.

**Final edge detection for Foreman: (a)** the matrix **S**.

**Final edge detection for Car-phone: (a)** the matrix **S**.

Then, according to the algorithm (5), we need to implement a thresholding operation, where the bias is used as the threshold level, yielding the binary matrix **G**.

The output of this operation on the matrix **S** is the matrix **G**. Finally, by using the matrix **G**, the final edge map is obtained.
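One plausible software reading of the dual window operator, offered here purely for illustration (the operator itself is defined by algorithm (4), which is not reproduced in this excerpt), is the difference between local means over an outer window of radius `R` and an inner window of radius `r`: in uniform areas this difference vanishes, while within a luminance transition it changes sign at the mean point, so the zeros of the resulting matrix `D` mark edge candidates.

```python
import numpy as np

def local_mean(img, radius):
    """Mean over a (2*radius+1)^2 square window, with edge replication at borders."""
    h, w = img.shape
    k = 2 * radius + 1
    p = np.pad(img.astype(float), radius, mode='edge')
    out = np.zeros((h, w))
    for di in range(k):
        for dj in range(k):
            out += p[di:di + h, dj:dj + w]
    return out / (k * k)

def dual_window(img, R=3, r=1):
    """D = (outer-window mean) - (inner-window mean).
    D is zero in uniform areas and changes sign at the mean point
    of a luminance transition, marking an edge candidate."""
    return local_mean(img, R) - local_mean(img, r)
```

On a vertical step edge, `D` is positive on one side of the transition and negative on the other, so a zero crossing pins down the edge location; a subsequent threshold on a gradient-like matrix (the role played by **G**) discards the trivial zeros of uniform areas.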

5. Object detection

The proposed object detection algorithm consists of several steps, implemented as follows.

First, the following hole-filler template is applied.

This template is applied to the inverted image.

**Behaviour of the hole-filler template for Foreman: (a)** output after about 15 μs; **(b)** output after about 30 μs; **(c)** output after about 45 μs; **(d)** output after about 60 μs.

In order to implement the second step, the logic XOR is applied between the output of the hole-filler template and its input.

**Object detection algorithm for Foreman: (a)** detected changes; **(b)** dilated image.
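In conventional software, the combined effect of the hole-filler template and the XOR of the second step can be approximated as follows. This is a sketch only; `fill_holes` and `xor_masks` are illustrative helpers, not Bi-i SDK functions. A flood fill from the image border identifies the reachable background; any background pixel left unreached is an enclosed hole, and the XOR with the input isolates exactly the filled regions.

```python
from collections import deque

def fill_holes(mask):
    """Fill background holes enclosed by foreground (True) pixels,
    mimicking the effect of a CNN hole-filler template in software."""
    h, w = len(mask), len(mask[0])
    reachable = [[False] * w for _ in range(h)]
    q = deque()
    # Seed the flood fill with every background pixel on the border.
    for i in range(h):
        for j in range(w):
            if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i][j]:
                reachable[i][j] = True
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni][nj] and not reachable[ni][nj]:
                reachable[ni][nj] = True
                q.append((ni, nj))
    # Filled image = foreground OR background not reachable from the border.
    return [[mask[i][j] or not reachable[i][j] for j in range(w)] for i in range(h)]

def xor_masks(a, b):
    """Pixel-wise logic XOR; applied to the hole-filler output and its
    input, it isolates exactly the filled holes."""
    return [[x != y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

On the ACE16k the same result is obtained in parallel by the template dynamics, which is why the hardware settles in tens of microseconds.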

According to Step 3, the detected changes are dilated.

According to Step 5, we need to detect the remaining objects by means of the recall template, where the previously obtained image is used as the initial state.

**Behaviour of the recall template for Foreman: (a)** output after about 50 μs; **(b)** output after about 85 μs; **(c)** output after about 170 μs; **(d)** output after about 650 μs.

However, differently from the previous case, the recall template requires a longer transient (up to about 650 μs). Now, by applying the recall template (11) with this image as the initial state, the remaining objects are recovered.

**Object detection algorithm for Foreman: (a)** group of objects; **(b)** new objects after the first iteration.
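The recall step can likewise be approximated in software by morphological reconstruction; this is an illustrative analogue of the template's behaviour, not the template itself. The marker image is grown inside the mask, so every object touched by the marker is recovered in full, while untouched objects are discarded.

```python
from collections import deque

def recall(marker, mask):
    """Software analogue of the CNN recall template: grow the marker
    inside the mask, recovering every object the marker touches."""
    h, w = len(mask), len(mask[0])
    out = [[False] * w for _ in range(h)]
    q = deque()
    # Seed with every marker pixel that lies on an object.
    for i in range(h):
        for j in range(w):
            if marker[i][j] and mask[i][j]:
                out[i][j] = True
                q.append((i, j))
    # Breadth-first growth constrained to the mask.
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] and not out[ni][nj]:
                out[ni][nj] = True
                q.append((ni, nj))
    return out
```

The growth stops at object boundaries, which mirrors the wave-like propagation visible in the recall-template snapshots above.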

6. Discussion

We discuss the results of our approach by making comparisons with previous CNN-based methods from the literature.

**(a)** segmentation by our method; **(b)** segmentation by the method in the cited work; **(c)** early segmentation from the cited work.


Now an estimation of the processing time achievable by the proposed approach is given in Table 1.

Table 1. Execution times for the proposed segmentation algorithm

| Step | Execution time |
| --- | --- |
| Motion detection | 469 μs (ACE16k) |
| Edge detection | 34874 μs: 6096 μs (ACE16k) + 28778 μs (DSP) |
| Object detection | 2424 μs (ACE16k) |
| **Total** | **37767 μs** |

Note that the computational load is mainly due to the DSP in the edge detection step.

Finally, we would point out that, while this research was being conducted, a novel bio-inspired architecture called the Eye-RIS vision system was introduced.

7. Conclusion

This article has presented the implementation of a novel CNN-based segmentation algorithm on a bio-inspired hardware platform, called the Bi-i Cellular Vision System. Each step of the algorithm has been conceived to fully exploit the Bi-i capabilities, i.e., the processing based on the ACE16k chip as well as the processing based on the DSP, and the reported results have shown the effectiveness of the approach.

8. Competing interests

**The authors declare that they have no competing interests**.

List of abbreviations

AMC: Analogic Macro Code; Bi-i: Bio-inspired; CNN: Cellular Neural/Nonlinear Network; IPL: image processing library; LP: low pass; LAM: local analog memory; LLM: local logic memory; MD: motion detection; ODEs: ordinary differential equations; SDK: software development kit.

Appendix

The software development kit (SDK) is a set of C++ libraries to be used for Bi-i programming. Some parts of the SDK are based on classes defined in the BaseData module of the InstantVision™ libraries. The SDK is designed to be used together with Code Composer Studio from Texas Instruments.

The TACE_IPL is an image processing library (IPL) for ACE16k. It contains two function groups for processing images: morphological operations and gray scale operations. The constructor of this class initializes the needed instruction group and writes corresponding IPL templates to the ACE16k.

Note that all the details about the SDK, the InstantVision™ libraries, and the TACE_IPL can be found in the corresponding online documentation.

Alternatively, the Bi-i programming guide (which includes the SDK and the TACE_IPL) can be requested at: giuseppe.grassi@unisalento.it