Continuous Traffic Detection

Team Name: Team New Perspective
Duration: Fall 2013 - Spring 2014
Sponsors
  • NIATT (National Institute for Advanced Transportation Technology)
  • VSRG (VLSI Sensors Research Group)
  • Dr. Suat Utku Ay, University of Idaho
Faculty Advisors
  • Dr. Touraj Assefi
  • Dr. Suat Ay
Mentors
  • Kyle Swenson (Computer Engineering)
  • Ismail Cevik (Electrical Engineering)

In response to the need for real-time traffic control, less congestion, and reduced emissions, Team New Perspective has developed a unique vehicle speed detection system using a revolutionary low-power image sensor. The design is realized in hardware for future single-chip implementations.

Background

None of the video detection systems currently used in traffic signal operations provide the continuous traffic detection and monitoring needed for several advanced traffic control applications. In the proposed system, continuous monitoring of a vehicle's speed, location, and acceleration within the trapezoidal field of view (FOV) of the camera is done intelligently at the image sensor level, without the complicated digital post-processing of video streams and the high communication bandwidth that comes with it. The proposed trapezoidal image sensor IC will also reduce the overall cost and complexity of the system while providing intelligent, continuous information about incoming traffic. This opens the door to several dynamic traffic signal control applications that reduce vehicle emissions and the fuel consumption caused by unnecessary stopping.

Problem Statement

Modern traffic places much higher demands on monitoring and control devices than in the past. What is needed is an intelligent image sensor and corresponding control system designed specifically to extract only the dynamic properties of incoming traffic and vehicles, with high resolution, high speed, and embedded intelligence at the sensor level.

Project Specifics

To detect traffic flow without the privacy issues that many current systems have, we have decided to use a method we have termed the "tripwire" method. Certain rows in the camera sensor pixel array will be monitored by the FPGA. When an object significantly different from the background passes across this "tripwire", the row will "fire" and we will know that the leading edge of an object was at the tripwire. When the object reaches subsequent tripwires, we will know the same thing. If we know the distance between the tripwires, extracting speed information is easy. We are using background subtraction to detect vehicles: we generate a background frame, read the pixel array for a current frame, and subtract the current frame from the background frame (resulting in a foreground frame) to see moving objects.
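As a rough illustration, the following Python/NumPy sketch models the background subtraction and tripwire-firing logic described above, assuming 8-bit grayscale frames. The function names, the foreground threshold value, and the minimum row-fill fraction are illustrative assumptions, not values from the actual FPGA design.

import numpy as np

# Minimal sketch of the background subtraction described above.
FOREGROUND_THRESHOLD = 30   # minimum difference from background to count as foreground (assumed)

def foreground_mask(current_frame, background):
    """Boolean mask of pixels that differ enough from the background."""
    diff = np.abs(current_frame.astype(np.int16) - background.astype(np.int16))
    return diff > FOREGROUND_THRESHOLD

def tripwire_fired(current_frame, background, tripwire_row, min_fill=0.25):
    """A tripwire 'fires' when enough of its row is marked as foreground."""
    row = foreground_mask(current_frame, background)[tripwire_row, :]
    return row.mean() >= min_fill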

Delta Cap Method

We invented the Delta Cap method to address the problem of foreground pixels being incorporated into the background too quickly. In a nutshell, when a pixel comes in from the SSLAR2, we compare it to our background; if it is darker or lighter than the background, we let it change the background pixel value by no more than a set amount. We have termed this parameter the "Delta Cap".

2014 TNP DeltaCap.png
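A minimal sketch of the Delta Cap update in Python/NumPy, assuming 8-bit grayscale pixels and an illustrative cap of 2 grey levels (the cap value and function name are assumptions, not the project's actual parameters):

import numpy as np

# Each new frame may move a background pixel toward the current pixel
# by at most DELTA_CAP grey levels.
DELTA_CAP = 2

def update_background(background, current_frame):
    delta = current_frame.astype(np.int16) - background.astype(np.int16)
    bounded = np.clip(delta, -DELTA_CAP, DELTA_CAP)   # limit how fast foreground bleeds into the background
    return (background.astype(np.int16) + bounded).astype(background.dtype)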

Trip Wire Method

One of the key concerns with current methods of dynamic traffic control is privacy. To get around this issue, as well as high data rate issues, we decided to only look at certain rows of pixels we have termed "tripwires". When a car drives into such a row, it is detected as a moving object. If the distance between the tripwires is known, the speed of the car can be calculated fairly easily by dividing that distance by the time between two consecutive tripwires "firing".

2014 TNP TripWire.png
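The speed calculation itself is just distance over time. A small sketch with illustrative numbers (the spacing and timestamps are made up for the example):

def speed_mps(tripwire_spacing_m, t_first_fire_s, t_second_fire_s):
    """Vehicle speed in m/s from two consecutive tripwire firings."""
    dt = t_second_fire_s - t_first_fire_s
    if dt <= 0:
        raise ValueError("second tripwire must fire after the first")
    return tripwire_spacing_m / dt

# Example: tripwires 10 m apart, firings 0.45 s apart -> about 22.2 m/s (80 km/h)
print(speed_mps(10.0, 1.00, 1.45))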

System Overview

The following diagram details the hardware interface for each part of our system. From right to left: the SSLAR2 image sensor, our control system implemented on an FPGA, the FTDI/USB chip, and the GUI and the rest of the outside world.

2014 TNP SigInterface.PNG

The Image Sensor

The trapezoidal image sensor is shown below. Notice how the pixel density (pixels per square meter of road) stays relatively constant as distance from the camera increases. Notice also how the sensor mirrors the shape of the road. This allows for much better resolution at greater distances from the camera, while also requiring much less image processing, because we don't have to cut the road out of the image the camera sees.

2014 TNP Trapezoidal image sensor chip.PNG

The Control System

Below is an I/O flow diagram of our system. This system will be implemented on an FPGA with the end goal of making a single-chip Control System/Image Sensor duo.

2014 TNP ControlSystem.jpg

Command Interpreter

The data flow diagram for the command interpreter is shown below. This module takes in all data from both the SSLAR2 camera and the outside world and routes it to the correct internal module to be processed.

2014 TNP ControlModule.png
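As a rough software model of this routing role (the real module is implemented in HDL on the FPGA), the sketch below maps opcodes to handler functions; the opcode values and handler names are hypothetical and not taken from the project.

ROUTES = {
    0x01: "request_background_frame",
    0x02: "request_foreground_frame",
    0x03: "request_tripwire_register_status",
    0x04: "request_operational_register_status",
    0x05: "write_operational_register",
    0x06: "write_tripwire_register",
}

def dispatch(opcode, payload, handlers):
    """Forward an incoming command to the internal module that handles it."""
    route = ROUTES.get(opcode)
    if route is None:
        raise ValueError("unknown opcode 0x%02X" % opcode)
    handlers[route](payload)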

Image Processor

The Image Processor is the core block in our system. It performs all background subtraction and image processing algorithms and generates the foreground, background, and current frames for the system.

2014 TNP ImgProcessor.png

The Request Modules

The following figure is a generalized representation of the four request modules in our system. Of these four, two (Request Background Frame and Request Foreground Frame) access block memory, and two (Request Trip Wire Register Status and Request Operational Register Status) access distributed memory, because multiple other modules need to access the register contents.

2014 TNP RqModules.png

Operational Register Write Modules

The Operational Register Write Module takes commands sent through the FTDI driver to the Command Interpreter that control vital camera functions such as gain, contrast, etc.

2014 TNP WriteOpReg.png

Trip Wire Register Write Modules

The Trip Wire Register Write Module takes commands sent through the FTDI driver to the Command Interpreter that set the start and end addresses of trip wires, as well as their distance from a fixed reference point. This information is used to discern whether or not a trip wire has been "tripped", as well as to determine a vehicle's speed.

2014 TNP TwReg.png
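A sketch of the information one such register write carries, based on the description above; the field names, address values, and units are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class TripwireRegister:
    start_address: int   # pixel-array address where the tripwire begins
    end_address: int     # pixel-array address where the tripwire ends
    distance_m: float    # distance from the fixed reference point

# Example: one tripwire, 35 m from the reference point
tw0 = TripwireRegister(start_address=0x1200, end_address=0x147F, distance_m=35.0)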

Testbed

The testbed will be used to verify our design. As can be seen, a stepper motor will pull a car with a counterweight attached. The car will pass through photogates placed so that they line up with the tripwires in our design. We can then check whether the speed measured by the photogates matches the speed reported by our camera.

2014 TNP TestBed.png
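A sketch of how that comparison could be made: the photogate speed (known gate spacing over measured transit time) serves as the reference for the camera's reading. All numbers are illustrative.

def photogate_speed_mps(gate_spacing_m, transit_time_s):
    return gate_spacing_m / transit_time_s

def percent_error(camera_speed_mps, reference_speed_mps):
    return 100.0 * abs(camera_speed_mps - reference_speed_mps) / reference_speed_mps

reference = photogate_speed_mps(0.50, 1.25)   # 0.4 m/s from the photogates
print(percent_error(0.42, reference))         # camera reported 0.42 m/s -> 5.0 % error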

Project Sponsors

  • Suat Utku Ay, Ph.D. (picture: 2014 TNP DrAy116x89.jpg; link: Dr. Ay's Homepage)
  • National Institute for Advanced Transportation Technology (NIATT) Tranlive Program (link: NIATT)
  • VLSI Sensors Research Group (VSRG) (link: VSRG)


The Team

Faculty Mentor

Touraj Assefi, Ph.D. (Dr. Assefi's Homepage)

Student Mentors

2014 TNP Kyle Swenson.png
Kyle Swenson:

Kyle grew up in Boise, Idaho and graduated from Boise High School in 2009. He has always had a strong interest in science, math and computers, which later matured into a passion for programming and digital systems engineering. Kyle loves working with FPGAs and microcontrollers and designing complex systems. After graduate school, Kyle plans on implementing embedded control systems with sensor networks. He intends to start his own business in embedded system contracting after working in the industry for a few years. Kyle has always been interested in learning more about himself and the world in which he lives, in an effort to grow as an individual and make the world a better place.

CompE

Project Members

2014 TNP JG resized.jpg
Jacob Grinestaff:

Jacob is a Senior in the Department of Electrical Engineering at the University of Idaho and will be graduating in May 2014. He is specializing in Microelectronics and will be working at Micron in the DRAM R&D Product Engineering group. He loves being outdoors and has participated in Intramural Frisbee, Soccer, Flag Football, Basketball, and Floor Hockey while at the U of I. His LinkedIn Profile is available here.

EE
2014 TNP Mitch bodmer resized.jpg
Mitch Bodmer:

Mitch is a senior computer engineering student at the University of Idaho. He has experience in microcontroller programming and embedded system design, as well as digital and analog circuit design. Mitch has also designed and built avionics for two pico-satellites at NASA Ames Research Center. In his spare time, Mitch enjoys playing music, cooking, and side projects in electronics.

CompE
2014 TNP Frank Szeibert.PNG
Frank Szeibert:

Francis ('Frank') is a senior in Electrical Engineering at the University of Idaho, and has specialized in controls and analog IC design. He has interned at Micron in the past where he worked in the fab designing and implementing ground-breaking process technologies for DRAM and flash memory. Frank enjoys school, running, and 'Sauced' (a local fry shop in Moscow).

EE
2014 TNP carson.jpg
Carson Stauffer:

Carson is a senior in Computer Science at the University of Idaho. He is pretty awesome.

CS
2014 TNP Paul Bailey.png
Paul Bailey:

Paul is a senior at the University of Idaho where he is earning three degrees in the areas of Computer Science, Mathematics, and Physics. He currently works at the University of Idaho as a web developer. Paul also spends a significant amount of his time creating computer programming and mathematics video tutorials.

CS

Glossary of Terms

  • Rows: Pixels that run from one side of the street to the other, perpendicular to the flow of traffic
  • Columns: Pixels that run up and down the street, parallel to the flow of traffic
  • Background: A frame recording the unmoving parts of the scene
  • Current Frame: The frame being used to extract foreground objects
  • Pixel Fill: The portion of a pixel's shade that can be attributed to a foreground object
  • Delta Cap: The maximum amount of change that can be applied to the background due to a new frame
  • Delta Breach: When a current frame pixel is more than one Delta Cap away from the corresponding background pixel
  • Foreground Threshold: The minimum difference between background and current frame pixels required to register a foreground pixel
  • Tripwire: A row of pixels that is being monitored for foreground pixels

Problem Specifications

Frame Rate

  • 100 fps from camera chip to FPGA
  • 10 fps data output to traffic controller

FPGA – Spartan 3E

Array Resolution – 640x480 (VGA)

Data Rate

  • Entire frame: 30.72 MB/s (burst x3)
  • With tripwires: 480 B × #Tripwires × 100 fps (burst x3)

Field of View: 500 m
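These data-rate figures can be sanity-checked with a quick calculation, assuming one byte per pixel and 480 pixels per tripwire row as given above (the tripwire count in the example is arbitrary):

# Quick check of the data-rate figures above.
WIDTH, HEIGHT = 640, 480    # VGA array resolution
FPS_SENSOR = 100            # frames per second from camera chip to FPGA

full_frame_bytes_per_s = WIDTH * HEIGHT * FPS_SENSOR
print(full_frame_bytes_per_s / 1e6)            # 30.72 MB/s for the full array

def tripwire_bytes_per_s(num_tripwires, pixels_per_tripwire=480):
    """Readout rate when only the tripwire rows are transferred."""
    return pixels_per_tripwire * num_tripwires * FPS_SENSOR

print(tripwire_bytes_per_s(4) / 1e3)           # e.g. 4 tripwires -> 192 kB/s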

Goals and Deadlines

Fall Semester 2013

Oct. 15: Snapshot #1

  • Basic idea of FPGA design
  • Geometry of Image Sensor
  • Error Calculations
  • Feasibility Study

Oct. 18: Wikipage meeting

Oct. 22: Tapeout

Dec. 6: Snapshot #2

  • Have working demonstration of image subtraction algorithms in OpenCV
  • Have FPGA design done
  • Have all FPGA block diagrams done
  • Have as much HDL written as possible
  • Have test bed design done

Dec. 13: Portfolio, Wikipage, Logbook DUE

Spring Semester 2014

February 14th, 2014 - Detailed Design Review with Client

February 20th, 2014 - Design Review #2

  • Simulations of all blocks completed and verified
  • Majority of Documentation Done
  • Wiki page up to date
  • Implemented on Camera if possible

March 3rd, 2014 - Logbooks Due

March 4th, 2014 - Expo Registration and Information

March 11th, 2014 - Snapshot #3

April 8th, 2014 - Report Writing Workshop

May 2nd, 2014 - Engineering Design Expo

May 9th, 2014 - Final Report, Website, Logbooks

Project Documents

Team Documents Page