This AI Is Scouting For Obesity-Prone Persons From Space, A…
Neural networks are very popular in artificial intelligence applications like object detection and image classification. Now researchers have found a way to identify obesity-prevalent areas with the aid of convolutional neural networks.
Researchers from the University of Washington, Seattle, have been using satellite images of the Earth to find which people are prone to obesity. The satellite records the surroundings of people living or working in a specific area, which are then used to identify obesity-prone populations by gauging the habits they have, or are likely to develop, based on their environment. Though this analysis is only 64.8 percent reliable, it is quite inexpensive and swift.
For this study, the researchers downloaded 150,000 satellite images from Google Maps, covering 1,695 census tracts across six cities: Los Angeles, Memphis, San Antonio, Seattle, Tacoma and Bellevue. They then trained a convolutional neural network on these images. By observing the trees, roads and other elements in the surroundings, the network estimated the rate of obesity in an area. The system did not predict the chances of obesity for an individual, but rather for the population of a region as a whole.
Obesity is generally linked to traits like physical activity, diet, genetics and the surrounding environment. It is defined as a body mass index (BMI), the weight in kilograms divided by the square of the height in metres, greater than 30.
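The BMI cutoff above is a one-line calculation; a minimal sketch (the function name and example values are illustrative, not from the study):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# A person weighing 95 kg at 1.75 m tall has a BMI of about 31.0,
# which falls above the obesity threshold of 30.
print(round(bmi(95.0, 1.75), 1))
```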
The analysis was done broadly in two steps:
Image extraction: Downloading high-resolution satellite images and identifying the important features by applying a convolutional neural network to them, as well as acquiring the processed point-of-interest (POI) data.
An application programming interface of Google called the Google Static Maps API was used to retrieve the map images. Each request specifies properties such as the map centre, image dimensions and zoom level. The zoom level was set to 18 and the image dimensions to 400 x 400 pixels. For each city, the geographical span was divided into small square grids with a spacing of 150 metres, and each point on the grid corresponded to a latitude-longitude pair.
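The gridding step can be sketched as follows. This is a simplified illustration, not the study's code: it converts the 150-metre spacing into degrees using an approximate metres-per-degree constant, and the Seattle-area coordinates in the usage line are made up for the example.

```python
import math

def make_grid(lat_min, lat_max, lon_min, lon_max, spacing_m=150.0):
    """Generate (lat, lon) grid points with roughly 150 m spacing."""
    m_per_deg_lat = 111_320.0  # approximate metres per degree of latitude
    d_lat = spacing_m / m_per_deg_lat
    points = []
    lat = lat_min
    while lat <= lat_max:
        # Longitude degrees shrink with latitude, so rescale per row.
        m_per_deg_lon = m_per_deg_lat * math.cos(math.radians(lat))
        d_lon = spacing_m / m_per_deg_lon
        lon = lon_min
        while lon <= lon_max:
            points.append((lat, lon))
            lon += d_lon
        lat += d_lat
    return points

# Illustrative patch of roughly 1 km x 1 km; each point would become the
# centre of one 400 x 400 px, zoom-18 Static Maps request.
grid = make_grid(47.60, 47.61, -122.34, -122.33)
print(len(grid))
```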
After this stage, each image was associated with its respective census tract using census tract shapefiles, and images outside the cities of interest were excluded. The next step was to download the POI data from the Google Places API. To do this, the grid points were used as geographical locations and a radial nearby search was performed around each. 96 different categories of POI were collected, and for each census tract the number of locations in each category was recorded.
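Tallying POI categories per census tract amounts to a grouped count. A minimal sketch with made-up tract IDs and categories (the real study used 96 categories):

```python
from collections import Counter

# Hypothetical POI records: (census_tract_id, poi_category) pairs
# as they might come back from a nearby search around each grid point.
poi_records = [
    ("53033005600", "restaurant"),
    ("53033005600", "park"),
    ("53033005600", "restaurant"),
    ("53033005700", "gym"),
]

# One Counter per tract: category -> number of locations
counts = {}
for tract, category in poi_records:
    counts.setdefault(tract, Counter())[category] += 1

print(counts["53033005600"]["restaurant"])
```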
Correlation hunt: Elastic net regression was used to build a model assessing the relationship between obesity prevalence and the environment.
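The regression step can be sketched with scikit-learn's `ElasticNet`, which combines L1 and L2 penalties. The feature matrix and target below are synthetic stand-ins, not the study's data, and the hyperparameters are arbitrary:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))  # stand-in for per-tract image/POI features
# Synthetic "obesity prevalence" driven mostly by the first feature
y = X[:, 0] * 2.0 + rng.normal(scale=0.1, size=200)

# l1_ratio=0.5 mixes lasso (sparsity) and ridge (shrinkage) penalties
model = ElasticNet(alpha=0.01, l1_ratio=0.5).fit(X, y)
print(round(model.score(X, y), 2))  # R^2 on the training data
```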
Here, the researchers adopted a machine learning method called transfer learning, which reuses a neural network pre-trained on one task to extract important features for another. In this case, the pre-trained CNN served as a fixed feature extractor, with linear classifiers and regression models built on top of its outputs.
The network used in this case was a VGG-CNN-F network. It has five convolutional and three fully connected layers, and was pre-trained on 1.2 million images from the ImageNet database, learning to assign the objects in those images to object categories. In doing so, the network learns to extract gradients, edges and patterns that help in object detection.
Outputs for each image were collected from one of the fully connected layers. This layer had 4096 nodes, each with a non-linear connection to all nodes in the previous and next layers, so each feature vector has 4096 dimensions corresponding to the outputs of these nodes. The mean of all feature vectors belonging to a census tract was then computed. To check whether their CNN could differentiate between built-environment features, the researchers made a forward pass through the network for a randomly selected set of images and inspected the outputs of the convolutional layers. The image features were also compared between areas where obesity was and was not prevalent.
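The per-tract averaging described above is a simple mean over feature vectors; a minimal sketch with random vectors standing in for real CNN outputs:

```python
import numpy as np

# Hypothetical: 4096-dimensional CNN feature vectors for 3 images
# that all fall inside the same census tract.
features = np.random.rand(3, 4096)

# Average across images to get one descriptor per tract,
# which then feeds the elastic net regression.
tract_feature = features.mean(axis=0)
print(tract_feature.shape)
```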
The CNN model could learn to identify features of the environment associated with obesity. The success rate averaged 64.8 percent across the six cities, with the highest being 73.3 percent in Memphis.
A new and convenient way has been discovered to track obesity based on where people live. This information can be fruitful for city planners as well. The drawbacks of this method may be overcome with time, making a more precise study of obesity prevalence possible.