Technology Readiness Tracker

Sponsors: Dr. Greg Donohoe
Team Name: Odin
Duration: Fall 2016 - Spring 2017
Faculty Advisor: Dr. Greg Donohoe
Students: Christopher Campbell, Brandon Ratcliff, Robert Stewart

The goal of this project is to create a tool that can be used for researching the maturity of a developing technology. This tool will provide ways to search for and analyze trends in technology, providing a better picture of the current state of a given technology and allowing a company to make better decisions about whether to invest in that technology or to pursue it as a product line.

Problem Definition

Background

The idea for this project originally came from a method of classifying technologies created by NASA called Technology Readiness Levels. These levels track the progression of a technology all the way from its initial conception to its active and successful use. This is useful because it provides an indication of how mature a technology is, and whether it has developed to the point where it is reliable enough to be used in a project.

The original goal of this project was to automate the mapping of a given technology to its readiness level and to predict when it would advance to the next levels. Such predictions would have a multitude of uses; for example, they could give an indication of whether a newly developed technology would be worthwhile to license. Since then, the goals of our project have shifted to what is described below.

Specifications

Our goals and their requirements are as follows:

  • Find trends in datasets: For a data set, identify trends or quantitative data that can be extracted from it that would be useful for understanding the current state of a technology.
  • Search for a given technology in a dataset: For a given technology, narrow down a data set to include only the items that relate to that technology.
  • Find related technologies: For a given technology, find any keywords or additional technologies that could be useful for understanding the given technology.
  • Visualize data: From the resulting searches and trends, create visualizations that provide insight into a given technology.
  • User interface: Create a reactive user interface that provides a reasonably quick and easy way to query and explore a given technology.

Implementation

Data Sets

To meet our specifications, we plan to focus on US Patent Data, as it contains a large amount of information useful to understanding the state of a technology, and is easily accessible through various data sources.

As this project is largely data based, finding adequate sources is extremely important. When evaluating potential data sources, we looked at the following criteria:

  1. Accessibility: How easy is this data source to access? Do we need specialized tools? Can it be queried without downloading the entire data set? Is it quick to query?
  2. Completeness: How complete is this data source? Is it up to date? Does it contain enough information to make it a worthwhile data source?
  3. Compatibility: How easily could data from this source be combined with other types of data?

Patent data

  • USPTO Linked Data
    • Accessibility: Easy to access, although it requires specialized tools. The data can be queried without downloading it, using a SQL-like language.
    • Completeness: Contains only the metadata (no full text), and it is no longer being maintained as of 2015.
    • Compatibility: If we could find matching data sets for our other types of data, it would be very easy to combine them. It's unlikely that such data sets exist, though.
  • USPTO Data Dumps
    • Accessibility: Not queryable online; we would have to download it and convert it into a database format we could query.
    • Completeness: Very complete; contains all the metadata and full text for every patent since 1790. We would have to manually keep our local copy up to date, though.
    • Compatibility: Very compatible, as we could preprocess the data into whatever format we want before storing it.
  • USPTO PAIR API
    • Accessibility: Accessible online using a query language that is less powerful than SQL but still very capable. It's very easy to access using standard code libraries, and it's very quick to get summary results, although full results require a large download.
    • Completeness: Fairly complete; contains most available metadata and is constantly updated. It's missing a few key data points, such as the full text and the filing company.
    • Compatibility: Fairly compatible. We'd have to transform the data before combining it with another data source, but the way the data is returned makes this reasonably easy to do.
  • USPTO Patent Full-Text Database (PatFT)
    • Accessibility: No easy access; retrieval must be done using web scraping. Only allows retrieving the data for one patent at a time, but each request is fairly fast.
    • Completeness: Very complete. This is the only data source that allows retrieving the full text of a patent without a massive download. Additionally, it contains some metadata fields that other sources are missing.
    • Compatibility: Reasonably compatible. Allows retrieving data for any patent given its patent number, so once the data is extracted, it's pretty easy to merge with other data.

Based on the above analysis, we have decided to use the USPTO PAIR API for the majority of our data. It is the only data source that is feasible to use to get data on a large number of patents, and the speed at which it can return data is a huge plus. In addition to this, we will use the PatFT database to selectively supplement the data returned by the PAIR API for individual patents.
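
As a rough sketch of how we pull data from the PAIR API, the snippet below issues a search over HTTP. The endpoint URL, payload fields, and query syntax shown here are illustrative placeholders, not the API's exact contract:

  # Illustrative sketch of a PAIR-style search request; the URL and the
  # payload fields are placeholders, not the API's exact contract.
  import requests

  PAIR_API_URL = "https://ped.uspto.gov/api/queries"  # placeholder endpoint

  def search_patents(search_term, start=0, rows=100):
      """Request one page of patent records matching search_term."""
      payload = {
          "searchText": search_term,  # assumed query field
          "start": start,             # paging offset
          "rows": rows,               # page size
      }
      response = requests.post(PAIR_API_URL, json=payload, timeout=30)
      response.raise_for_status()
      return response.json()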

Additionally, we are using the Google Geocoding API and the OpenStreetMap API as mapping data sources for one of our visualizations.
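
To illustrate the geocoding step, here is a minimal sketch that resolves a filing location string to coordinates with the Google Geocoding API. The API key is a placeholder, and error handling is reduced to a None return:

  import requests

  GEOCODE_URL = "https://maps.googleapis.com/maps/api/geocode/json"
  API_KEY = "YOUR_API_KEY"  # placeholder

  def geocode(address):
      """Return (lat, lng) for an address string, or None if no match."""
      params = {"address": address, "key": API_KEY}
      data = requests.get(GEOCODE_URL, params=params, timeout=10).json()
      if data.get("status") != "OK":
          return None
      loc = data["results"][0]["geometry"]["location"]
      return loc["lat"], loc["lng"]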

Technology Choices

  • Python - a general purpose programming language that we picked as the main programming language for our project. We decided this because:
    • All of the team members on our project have experience with Python.
    • Python has many great data processing libraries that will be useful for achieving our first requirement (Find trends in datasets).
    • Python code is quick to write, so we can focus on the meat of our project.
  • Web Application (HTML/CSS/Javascript) - Two of our major requirements are Visualize Data and User Interface. We decided to develop a web application to fulfill these because:
    • Interactive visualizations are far easier to create as a web application than as a desktop application.
    • Web applications are much easier for an end user to use: all they need to do is navigate to a specific page in a web browser.
    • There are many useful Javascript libraries for visualization and user interfaces.

Libraries Used

  • Django Rest Framework: A web development framework written in Python used to create APIs. This is the most popular web development framework for Python, and our team has some experience using it.
  • Django Channels: An extension to Django that allows doing work in the background, asynchronously from normal web requests. This allows us to create a more responsive user interface by collecting and processing data in the background.
  • Pandas: A data analysis library written in Python. This is a very popular library with tons of support and documentation. It is extremely powerful and should be sufficiently advanced and flexible for any data analysis we decide to do.
  • NLTK (Natural Language Toolkit): A Python library for natural language processing. This is the most popular Python library for processing language, and it contains several useful functions to help pick out the most relevant keywords from a string of text (see the sketch following this list).
  • Vue.js: A web application framework. This is one of the more popular frameworks for web development. It helps with creating a user interface and managing the underlying data of the application. Vue.js is very modular and extendable, so it's possible to use just a subset of its features starting out and adopt more as needed. This makes it much easier to use than some of the other popular web application frameworks.
  • Highcharts: A Javascript visualization and charting library. This is a very powerful library that can be used to create all sorts of interactive graphics. It is very flexible and easy to use, and we have some experience with it as a team.
  • Heatmap.js: A Javascript visualization library that helps create heat maps on top of a world map. This is a very easy-to-use library that works with the format of data we are generating.
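
As a small example of the keyword extraction NLTK enables, the sketch below counts the most frequent non-stopword words in a block of patent text. This is a simplified stand-in for our actual processing, and it assumes the punkt and stopwords corpora have been downloaded:

  import nltk
  from nltk.corpus import stopwords

  # One-time setup: nltk.download("punkt"); nltk.download("stopwords")

  def top_keywords(text, n=10):
      """Return the n most frequent non-stopword tokens in text."""
      words = [w.lower() for w in nltk.word_tokenize(text) if w.isalpha()]
      stops = set(stopwords.words("english"))
      freq = nltk.FreqDist(w for w in words if w not in stops)
      return [word for word, count in freq.most_common(n)]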

System Architecture

Backend

Our backend has three major layers: the communication layer, the data pipeline layer, and the data analysis and retrieval layer.

Communication layer

The communication layer's main job is managing the flow of information between the frontend and the backend. The process starts when the frontend opens a new websocket (a type of two-way communication channel) to the backend and sends it a search term. The communication layer receives this request and splits it up into several jobs, which it then passes into the data pipeline layer. Some of these jobs depend on the completion of previous jobs, so the communication layer is responsible for making sure that each job is run in the correct order, and for parallelizing jobs whenever possible. For example, one of the first jobs to run retrieves the full list of patents for a query. After that job completes, two other jobs are started in parallel: one to retrieve the location of each patent, and one to retrieve each patent's full text. While each job is running, the communication layer sends the frontend any data, status updates, or errors that are generated.
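
The sketch below shows the shape of this job-ordering logic using plain asyncio. The job functions are hypothetical stand-ins for our real jobs, which go through the data pipeline layer:

  import asyncio

  # Hypothetical stand-ins for the real jobs.
  async def fetch_patent_list(query):
      return ["US1234567", "US7654321"]

  async def fetch_locations(patents):
      return {p: (47.7, -117.0) for p in patents}

  async def fetch_full_text(patents):
      return {p: "full text..." for p in patents}

  async def handle_search(query, send_update):
      # Every other job depends on the patent list, so it runs first.
      patents = await fetch_patent_list(query)
      await send_update({"status": "patent list ready", "count": len(patents)})
      # The location and full-text jobs only need the patent list,
      # so they run in parallel.
      locations, texts = await asyncio.gather(
          fetch_locations(patents), fetch_full_text(patents)
      )
      await send_update({"status": "done"})

  async def demo():
      async def send_update(msg):  # stand-in for pushing over the websocket
          print(msg)
      await handle_search("machine learning", send_update)

  asyncio.run(demo())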

Data pipeline layer

The data pipeline layer is responsible for managing and storing the data in the backend. Each job started by the communication layer, and many parts of the data analysis/retrieval layer, requires information from the data pipeline. For each of these requests, the data pipeline first checks whether it already has the data. If it does, it returns the stored data; if not, it passes the request down to the data analysis/retrieval layer, stores the response in the database, and then returns it to the requester. Having this central layer for data allows us to cache data between different queries and data analysis steps.
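
In outline, this cache-or-fetch behavior looks like the following sketch, where an in-memory dict stands in for our actual database-backed store:

  class DataPipeline:
      """Caches responses so later queries and jobs can reuse them."""

      def __init__(self, retriever):
          self._cache = {}             # stand-in for the real database
          self._retriever = retriever  # the data analysis/retrieval layer

      def get(self, key):
          if key in self._cache:       # already have it: return stored data
              return self._cache[key]
          data = self._retriever(key)  # otherwise pass the request down
          self._cache[key] = data      # store it for future requests
          return data

  # Usage: pipeline = DataPipeline(retriever=some_fetch_function)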

Data analysis and retrieval layer

Lastly, the data analysis/retrieval layer contains all of the code necessary to communicate with our data sources and perform any data analysis or manipulation functions.

Frontend

Our frontend has two major layers: the communication/data layer, and the visualization layer.

Communication/data layer

The communication/data layer is responsible for opening a new websocket for every search query, and receiving data from the backend. For each query, it keeps track of the current status as reported by the backend, and the data for each visualization type. Every time an update is received from the server, it is processed, and any affected visualizations are notified.
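
For illustration, here is what one query's exchange looks like from the client side. The real frontend is Javascript; this sketch uses Python with the third-party websockets package for brevity, and the URL and message fields are placeholders:

  import asyncio
  import json
  import websockets

  async def run_query(term):
      async with websockets.connect("ws://localhost:8000/search") as ws:
          await ws.send(json.dumps({"query": term}))   # send the search term
          async for raw in ws:                         # one message per update
              msg = json.loads(raw)
              print(msg.get("status"))
              if msg.get("status") == "done":          # backend signals completion
                  break

  asyncio.run(run_query("machine learning"))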

Visualization layer

The visualization layer consists of several self-contained components, one for each visualization we have. These receive their data from the communication/data layer and are responsible for drawing the visualization or interacting with any data visualization libraries. While this architecture could be simpler (and it was much simpler when we first started our project), this separation allows us to create a much more interactive and efficient end product. Much of the complexity arises from the fact that retrieving data from the data sources can take a very long time, so we need to be able to cache whatever data we can, and we need to be able to send multiple progress updates to the client throughout the data collection process. The end result is a better user experience: visualizations are quick to display and continually improve as more data is retrieved and processed, and similar or identical queries can share data whenever possible.

Data Visualizations

Patents filed per year line chart

TechnologyReadinessTrackerLineChart.png

Our first visualization is a line chart showing how many patents matching the provided query were filed per year. This can be used to get a general overview of the activity in a particular technology area. It can make it clear whether a technology is still very active and under lots of development, or whether it has already passed its peak of development activity and is becoming more mature.
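
The series behind this chart is a simple aggregation. A minimal Pandas version, with made-up dates standing in for the filing dates pulled from the patent metadata, looks like this:

  import pandas as pd

  # Made-up filing dates standing in for real patent metadata.
  filing_dates = pd.to_datetime(["1998-03-14", "1998-07-02", "2001-11-30"])
  per_year = pd.Series(filing_dates).dt.year.value_counts().sort_index()
  print(per_year)  # index = year, value = patents filed that year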


Patent filing locations heat map

TechnologyReadinessTrackerHeatmap.png

Our next visualization shows the locations of the individuals or companies filing patents on a heat map. The brighter an area is, the more patents have been filed for a given technology in that location. This can be used to get an overview of which areas are working on a technology. It can show whether most of the activity for a technology is happening in the United States, or whether more is going on in other countries.
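
The data behind this map is just a list of weighted points. A minimal sketch of producing them from geocoded filing locations follows, with made-up coordinates; the exact field names vary by heat map library:

  from collections import Counter

  # Made-up geocoded filing locations (lat, lng).
  locations = [(47.66, -117.43), (47.66, -117.42), (35.68, 139.65)]

  # Round so nearby filings accumulate into one brighter point.
  counts = Counter((round(lat, 1), round(lng, 1)) for lat, lng in locations)
  heat_points = [{"lat": lat, "lng": lng, "value": n}
                 for (lat, lng), n in counts.items()]
  print(heat_points)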


Top patenting companies tree map

TechnologyReadinessTrackerTreeMap.png

This visualization shows the companies or individuals who have filed the most patents in a given technology. In addition to showing the most active companies, it can let users know whether development in a technology is dominated by big companies, or whether no single company is doing the vast majority of the research.


Related Keywords

TechnologyReadinessTrackerRelatedKeywords.png

The related keywords visualization shows the top keywords that appear in patents related to a given technology. This can give the user an idea of other technology areas to look at, or ways to make their initial search more specific. It also makes it clear if a large percentage of patents related to a given technology are focused on one area in particular.


Files

Design Report: File:TechnologyReadinessTrackerDesignReport.pdf

Expo Presentation: File:TechnologyReadinessTrackerExpoPresentation.pdf

Expo Poster: File:TechnologyReadinessTrackerExpoPoster.pdf


Team Members

  • Christopher Campbell (ChristopherCampbell.jpg), Computer Science
    Hometown: Coeur d'Alene, Idaho
    Interests: My professional interests include Data Science and Process Engineering. When I am not focused on school I enjoy reading, snowboarding, and lounging on beaches.
  • Brandon Ratcliff (BrandonRatcliff.jpg), Computer Science
    I am a senior at the University of Idaho. I've been interested in programming for as long as I can remember, so computer science was an easy choice. Besides that, I enjoy rock climbing, mountaineering, and reading.
  • Robert Stewart (RobertStewart.jpg), Computer Science
    I am a senior computer science student at the University of Idaho and I plan to graduate in the Spring of 2017. I have had five internships, including ones at Apple and Microsoft. In my free time, I enjoy running, hiking, basketball, ultimate frisbee, and spending time with friends.