a-LIP annotates images based on 600 trained concepts
Automatic Linguistic Indexing of Pictures (a-LIP) is a system that can automatically annotate pictures using a vocabulary of 600 trained concepts. The system examines features of an image, such as color, texture, and shape, to gather information about its content. This information is then compared against trained concepts such as ocean, sky, building, and people. Finally, the system annotates the image, attaching one or more concepts based on the extent of association between the image and each concept.
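The sketch below is only a toy illustration of this idea, not the published ALIP method: it represents each concept as a hypothetical prototype feature vector and ranks concepts by how closely an image's features match. The actual statistical models used by ALIP are described in the project's publications.

```python
import math
from typing import Dict, List, Tuple

# Hypothetical per-concept "models": in this toy sketch each concept is just a
# prototype feature vector. The real ALIP system trains a statistical model
# per concept (see the publications for details).
CONCEPT_PROTOTYPES: Dict[str, Dict[str, float]] = {
    "ocean":    {"blue": 0.8, "texture": 0.2, "edges": 0.1},
    "sky":      {"blue": 0.9, "texture": 0.1, "edges": 0.0},
    "building": {"blue": 0.2, "texture": 0.5, "edges": 0.9},
    "people":   {"blue": 0.3, "texture": 0.6, "edges": 0.5},
}

def association(features: Dict[str, float], prototype: Dict[str, float]) -> float:
    """Toy association score: higher when the image features sit close to the prototype."""
    dist = math.sqrt(sum((features[k] - prototype[k]) ** 2 for k in prototype))
    return -dist

def annotate(features: Dict[str, float], top_k: int = 2) -> List[Tuple[str, float]]:
    """Rank all trained concepts by association and keep the strongest matches."""
    scores = [(name, association(features, proto))
              for name, proto in CONCEPT_PROTOTYPES.items()]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

# Example: a mostly blue, low-texture image is annotated with "sky" and "ocean".
print(annotate({"blue": 0.85, "texture": 0.15, "edges": 0.05}))
```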
Annotating images allows a person to search for images based on keywords rather than filenames. This is very useful because people often do not name files according to the content of the picture, and even when the filename is related to the image it can be misleading. For example, I may have a picture of a rose named "rose.jpg" while someone else names a picture of their Aunt Rose "aunt_rose.jpg". In an ordinary filename search for a picture of a rose flower, both of these images would appear, but only the first would be correct. Annotating the images lets you search on multiple keywords for a more specific and accurate search.
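The following toy snippet illustrates the filename problem described above: searching filenames for "rose" returns both files, while searching keyword annotations returns only the image that actually shows a rose. The annotations here are made-up examples, not real ALIP output.

```python
from typing import Dict, List

# Hypothetical image collection: filename -> annotation keywords.
images: Dict[str, List[str]] = {
    "rose.jpg":      ["flower", "rose", "garden"],
    "aunt_rose.jpg": ["people", "face", "indoor"],
}

def search_by_filename(query: str) -> List[str]:
    """Naive filename search: matches any file whose name contains the query."""
    return [f for f in images if query in f.lower()]

def search_by_annotation(query: str) -> List[str]:
    """Keyword search over annotations: matches only images tagged with the query."""
    return [f for f, keywords in images.items() if query in keywords]

print(search_by_filename("rose"))    # ['rose.jpg', 'aunt_rose.jpg'] -- misleading hit
print(search_by_annotation("rose"))  # ['rose.jpg']                  -- correct
```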
Currently, people must annotate images individually, looking at each image and keying in each keyword. With the growth of digital image collections in recent years, this can be very time consuming. Images input into our system are annotated automatically, saving that time. The search capabilities this project provides could have a great impact, especially in education and the military. Students, researchers, and professors will be able to search for images more quickly and accurately. The military also receives countless images from satellites and video surveillance cameras. It can be very time consuming for a person, or groups of people, to go through these images and flag certain ones as dangerous - time the military does not always have. With computer annotation, the military would only need to sort through those images that the computer flagged as dangerous. More information on this technology as it applies to satellite imagery can be found here.
This project is being conducted by Jia Li and James Z. Wang at the Pennsylvania State University. The work began while they were at Stanford University.
The ALIP system selects among 600 trained concepts to annotate images automatically. Annotation results are shown here. An on-line real-time image annotation demonstration is expected to be developed and made available in the future. At that time, you will be able to submit your own images for automatic annotation. For now, we provide some automatic annotation examples. For more technical information, please refer to our publications.
Please Note: Images on this web site are for viewing ONLY. The copyrights belong to the original publishers. Please do NOT download from our site without our permission. We will terminate your access if such an event is detected. If you would like to compare results, please contact James Wang via email to make arrangements.
Random
The link above will show you examples randomly selected from 60,000 images.
Below are some examples of this computer annotation, with the top five categories shown. Words in bold are the top picks selected by the computer for annotation based on their statistical significance. The computer selects keywords from a dictionary of 600 automatically learned concepts. More information on the project can be found at the main media annotation site.
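As a rough sketch of this second step, the snippet below shows one hypothetical way the bold picks could be chosen from the top five ranked categories: keep only the concepts whose scores stand out from the rest. This selection rule is an assumption for illustration only; ALIP's actual statistical-significance criterion is described in the publications.

```python
from statistics import mean, stdev
from typing import List, Tuple

def pick_significant(top_five: List[Tuple[str, float]], z: float = 1.0) -> List[str]:
    """Toy selection rule (hypothetical, not ALIP's actual test): keep the
    concepts whose score stands out from the average of the top candidates."""
    scores = [s for _, s in top_five]
    mu, sigma = mean(scores), stdev(scores)
    picks = [name for name, s in top_five if s >= mu + z * sigma]
    return picks or [top_five[0][0]]  # always keep at least the best concept

# Example ranked output for one image: "sky" clearly dominates, so it is the bold pick.
ranked = [("sky", 12.4), ("ocean", 7.1), ("beach", 6.8), ("cloud", 6.5), ("lake", 6.2)]
print(pick_significant(ranked))  # ['sky']
```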
Demonstration on its way!